I've run into this as well today while migrating between two 8.4.1 clusters with Ceph (lots of enterprise SSDs and NVMe) and dedicated 10 GbE LAGs for Ceph, cluster, and user traffic. Out of more than 125 VMs migrated so far this week with the exact same process in the exact same environment, only a single VM has hit this issue. I retried that VM after rebooting it and got the same result. Every other VM, some with substantially more RAM than this one, transfers its state very quickly, at multiple GiB/s. This one transfers its disks quickly, then reaches the VM-state transfer and crawls along at a few MiB/s:
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:20 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 4.4 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:22 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 3.2 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:24 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 5.4 MiB/s
2025-05-18 11:46:25 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 6.1 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:27 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 6.4 MiB/s
2025-05-18 11:46:28 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 4.8 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:30 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 5.5 MiB/s
2025-05-18 11:46:31 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 6.6 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:33 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 4.8 MiB/s
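As an aside, the repeated Perl warning just means $last_vfio_transferred is being compared with ne before it has ever been assigned; Perl warns about any undefined value used in a string comparison, then treats the undef as an empty string and carries on, so the comparison itself still works. Here is a minimal sketch of that pattern, assuming a simple progress-polling loop rather than the actual QemuMigrate.pm code:

use strict;
use warnings;

# Hypothetical progress-polling loop, not the real PVE code.
my $last_vfio_transferred;    # stays undef if nothing ever assigns it (e.g. no VFIO devices)

for my $vfio_transferred ('0', '0', '4096') {    # made-up sample values
    # An unguarded comparison like
    #     $last_vfio_transferred ne $vfio_transferred
    # is what emits "Use of uninitialized value $last_vfio_transferred in string ne".
    # The defined-or fallback below silences the warning without changing behaviour,
    # since Perl would stringify undef to '' anyway.
    if (($last_vfio_transferred // '') ne $vfio_transferred) {
        print "VFIO progress changed: $vfio_transferred bytes\n";
        $last_vfio_transferred = $vfio_transferred;
    }
}

Either way it is only a warning, not an error, so I don't think it explains the slow transfer on its own.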
The node I was transferring to has 512 GB of RAM with less than 200 GB in use. Nonetheless, I migrated a single VM off of it to see if freeing up a little RAM would help. As soon as that VM was moved, the transfer kicked into high gear and finished rapidly:
2025-05-18 11:49:46 migration active, transferred 13.2 GiB of 48.0 GiB VM-state, 412.9 MiB/s
2025-05-18 11:49:47 migration active, transferred 13.6 GiB of 48.0 GiB VM-state, 290.2 MiB/s
2025-05-18 11:49:48 migration active, transferred 13.9 GiB of 48.0 GiB VM-state, 331.4 MiB/s
2025-05-18 11:49:49 migration active, transferred 14.3 GiB of 48.0 GiB VM-state, 269.8 MiB/s
2025-05-18 11:49:50 migration active, transferred 14.6 GiB of 48.0 GiB VM-state, 374.0 MiB/s
2025-05-18 11:49:51 migration active, transferred 14.9 GiB of 48.0 GiB VM-state, 301.2 MiB/s
2025-05-18 11:49:52 migration active, transferred 15.2 GiB of 48.0 GiB VM-state, 345.4 MiB/s
2025-05-18 11:49:53 migration active, transferred 15.5 GiB of 48.0 GiB VM-state, 303.5 MiB/s
2025-05-18 11:49:54 migration active, transferred 15.8 GiB of 48.0 GiB VM-state, 312.6 MiB/s
2025-05-18 11:49:55 migration active, transferred 16.1 GiB of 48.0 GiB VM-state, 330.7 MiB/s
2025-05-18 11:49:56 migration active, transferred 16.5 GiB of 48.0 GiB VM-state, 347.9 MiB/s
2025-05-18 11:49:57 migration active, transferred 16.8 GiB of 48.0 GiB VM-state, 410.4 MiB/s
2025-05-18 11:49:58 migration active, transferred 17.1 GiB of 48.0 GiB VM-state, 367.7 MiB/s
2025-05-18 11:49:59 migration active, transferred 17.4 GiB of 48.0 GiB VM-state, 328.6 MiB/s
2025-05-18 11:50:00 migration active, transferred 17.7 GiB of 48.0 GiB VM-state, 290.2 MiB/s
2025-05-18 11:50:01 migration active, transferred 18.1 GiB of 48.0 GiB VM-state, 282.7 MiB/s
2025-05-18 11:50:02 migration active, transferred 18.4 GiB of 48.0 GiB VM-state, 319.0 MiB/s
2025-05-18 11:50:03 migration active, transferred 18.6 GiB of 48.0 GiB VM-state, 352.6 MiB/s
2025-05-18 11:50:04 migration active, transferred 18.9 GiB of 48.0 GiB VM-state, 322.9 MiB/s
2025-05-18 11:50:05 migration active, transferred 19.3 GiB of 48.0 GiB VM-state, 391.6 MiB/s
2025-05-18 11:50:06 migration active, transferred 19.6 GiB of 48.0 GiB VM-state, 308.7 MiB/s
2025-05-18 11:50:07 migration active, transferred 19.8 GiB of 48.0 GiB VM-state, 262.1 MiB/s
2025-05-18 11:50:08 migration active, transferred 20.2 GiB of 48.0 GiB VM-state, 332.7 MiB/s
2025-05-18 11:50:09 migration active, transferred 20.5 GiB of 48.0 GiB VM-state, 317.9 MiB/s
2025-05-18 11:50:10 migration active, transferred 20.8 GiB of 48.0 GiB VM-state, 313.5 MiB/s
2025-05-18 11:50:11 migration active, transferred 21.1 GiB of 48.0 GiB VM-state, 294.8 MiB/s
2025-05-18 11:50:12 migration active, transferred 21.5 GiB of 48.0 GiB VM-state, 404.4 MiB/s
2025-05-18 11:50:13 migration active, transferred 21.5 GiB of 48.0 GiB VM-state, 6.9 GiB/s
2025-05-18 11:50:14 migration active, transferred 21.5 GiB of 48.0 GiB VM-state, 4.9 GiB/s
2025-05-18 11:50:15 migration active, transferred 21.5 GiB of 48.0 GiB VM-state, 4.8 GiB/s
2025-05-18 11:50:16 migration active, transferred 21.7 GiB of 48.0 GiB VM-state, 2.3 GiB/s
2025-05-18 11:50:17 migration active, transferred 21.8 GiB of 48.0 GiB VM-state, 332.0 MiB/s
2025-05-18 11:50:18 migration active, transferred 22.1 GiB of 48.0 GiB VM-state, 292.4 MiB/s
2025-05-18 11:50:19 migration active, transferred 22.4 GiB of 48.0 GiB VM-state, 340.8 MiB/s
2025-05-18 11:50:20 migration active, transferred 22.7 GiB of 48.0 GiB VM-state, 306.7 MiB/s
2025-05-18 11:50:21 migration active, transferred 23.0 GiB of 48.0 GiB VM-state, 280.4 MiB/s
tunnel: done handling forwarded connection from '/run/qemu-server/186.migrate'
2025-05-18 11:50:22 average migration speed: 86.7 MiB/s - downtime 154 ms
2025-05-18 11:50:22 migration completed, transferred 23.0 GiB VM-state
Maybe I have some bad RAM? I haven't had any issues with the node, so this is puzzling. Regardless, try freeing up some RAM on your target node and see if it makes a difference.