Statements during migration of a VM from one PVE node to another

Hi all,

I'm new to Proxmox.

Today we updated our three Proxmox servers to PVE version 8.4.1.

Because a kernel update was available, we set one PVE node to maintenance mode during the update.
While monitoring the migration process, we noticed the following two lines for each VM targeted for migration.

....
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
...

I don't know whether this is just informational, a warning, or a critical notice.

Unfortunately, migrating a VM from one node to another takes ages rather than seconds.

I set migration: insecure in /etc/pve/datacenter.cfg to speed up migration, but with no noticeable effect.
All three of my nodes have a dual 10G Ethernet uplink and are connected to the same switch.
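
For reference, the relevant key in /etc/pve/datacenter.cfg is `migration`, and it can also pin migration traffic to a specific network. A sketch; the CIDR below is a placeholder, not your actual subnet:

```
# /etc/pve/datacenter.cfg -- example values only
# type=insecure skips SSH encryption for the memory stream;
# use it only on a trusted, isolated network
migration: type=insecure,network=10.10.10.0/24
```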

Any hints to look at for isolating a potential bottleneck?
 
I surmise that you are encountering an I/O bottleneck.
It would be very helpful if you could describe your infrastructure, particularly the storage side.
 
I'm using Ceph as the primary VM data store:
three hosts, each with four 2 TB NVMe disks.

Configuration:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.137.1.0/24
fsid = e514f756-b1ce-4429-aa96-9304de459fd1
mon_allow_pool_delete = true
mon_host = 10.137.1.20 10.137.1.30 10.137.1.10
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.137.1.0/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
keyring = /etc/pve/ceph/$cluster.$name.keyring

[mon.prx01]
public_addr = 10.137.1.10

[mon.prx02]
public_addr = 10.137.1.20

[mon.prx03]
public_addr = 10.137.1.30

Network:
Linux bond1 - enp67s0f2np2 enp67s0f3np3 ens1f0np0 ens1f1np1 - LACP (802.3ad) - MTU 1500
 
The two Perl warnings are harmless debug‑level notices emitted when no VFIO‑passthrough data exist; they do not by themselves indicate a failed migration or performance issue.
Live‑migration of a Ceph‑backed VM on shared‑storage only transfers RAM pages over the network, so throughput is bound by your network and Ceph client I/O performance.
You can separate the Ceph public, Ceph cluster, and migration networks (e.g. via VLANs or dedicated interfaces) and enable jumbo frames to improve throughput.
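
As a sketch of that separation, assuming a second and third subnet are available (the 10.137.2.0/24 and 10.137.3.0/24 values below are purely illustrative):

```
# /etc/ceph/ceph.conf -- move OSD replication off the public network
# (illustrative subnets; OSDs must be restarted for this to take effect)
[global]
public_network  = 10.137.1.0/24
cluster_network = 10.137.2.0/24
```

```
# /etc/pve/datacenter.cfg -- pin migration traffic to its own network
migration: type=secure,network=10.137.3.0/24
```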
 
I ran into this today as well, migrating between two 8.4.1 clusters with Ceph (lots of enterprise SSDs and NVMe) and dedicated 10-gig Ceph, cluster, and user-space LAGs. Of the over 125 VMs migrated so far this week, with the exact same process in the exact same environment, only a single VM has had this issue. I retried that VM after rebooting it and got the same result. Every other VM, some with substantially more RAM than this one, transfers state very quickly, at multiple GiB/s. This one transfers its disks quickly, then reaches the VM-state transfer and crawls at a few MiB/s:


Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:20 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 4.4 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:22 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 3.2 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:24 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 5.4 MiB/s
2025-05-18 11:46:25 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 6.1 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:27 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 6.4 MiB/s
2025-05-18 11:46:28 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 4.8 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:30 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 5.5 MiB/s
2025-05-18 11:46:31 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 6.6 MiB/s
Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
Use of uninitialized value $last_vfio_transferred in string ne at /usr/share/perl5/PVE/QemuMigrate.pm line 1324.
2025-05-18 11:46:33 migration active, transferred 9.8 GiB of 48.0 GiB VM-state, 4.8 MiB/s

The node I was transferring to has 512 GB of RAM with less than 200 GB used. Nonetheless, I migrated a single VM off of it to see if freeing up a little RAM would help. As soon as that VM was moved, the transfer kicked into high speed and rapidly finished:

2025-05-18 11:49:46 migration active, transferred 13.2 GiB of 48.0 GiB VM-state, 412.9 MiB/s
2025-05-18 11:49:47 migration active, transferred 13.6 GiB of 48.0 GiB VM-state, 290.2 MiB/s
2025-05-18 11:49:48 migration active, transferred 13.9 GiB of 48.0 GiB VM-state, 331.4 MiB/s
2025-05-18 11:49:49 migration active, transferred 14.3 GiB of 48.0 GiB VM-state, 269.8 MiB/s
2025-05-18 11:49:50 migration active, transferred 14.6 GiB of 48.0 GiB VM-state, 374.0 MiB/s
2025-05-18 11:49:51 migration active, transferred 14.9 GiB of 48.0 GiB VM-state, 301.2 MiB/s
2025-05-18 11:49:52 migration active, transferred 15.2 GiB of 48.0 GiB VM-state, 345.4 MiB/s
2025-05-18 11:49:53 migration active, transferred 15.5 GiB of 48.0 GiB VM-state, 303.5 MiB/s
2025-05-18 11:49:54 migration active, transferred 15.8 GiB of 48.0 GiB VM-state, 312.6 MiB/s
2025-05-18 11:49:55 migration active, transferred 16.1 GiB of 48.0 GiB VM-state, 330.7 MiB/s
2025-05-18 11:49:56 migration active, transferred 16.5 GiB of 48.0 GiB VM-state, 347.9 MiB/s
2025-05-18 11:49:57 migration active, transferred 16.8 GiB of 48.0 GiB VM-state, 410.4 MiB/s
2025-05-18 11:49:58 migration active, transferred 17.1 GiB of 48.0 GiB VM-state, 367.7 MiB/s
2025-05-18 11:49:59 migration active, transferred 17.4 GiB of 48.0 GiB VM-state, 328.6 MiB/s
2025-05-18 11:50:00 migration active, transferred 17.7 GiB of 48.0 GiB VM-state, 290.2 MiB/s
2025-05-18 11:50:01 migration active, transferred 18.1 GiB of 48.0 GiB VM-state, 282.7 MiB/s
2025-05-18 11:50:02 migration active, transferred 18.4 GiB of 48.0 GiB VM-state, 319.0 MiB/s
2025-05-18 11:50:03 migration active, transferred 18.6 GiB of 48.0 GiB VM-state, 352.6 MiB/s
2025-05-18 11:50:04 migration active, transferred 18.9 GiB of 48.0 GiB VM-state, 322.9 MiB/s
2025-05-18 11:50:05 migration active, transferred 19.3 GiB of 48.0 GiB VM-state, 391.6 MiB/s
2025-05-18 11:50:06 migration active, transferred 19.6 GiB of 48.0 GiB VM-state, 308.7 MiB/s
2025-05-18 11:50:07 migration active, transferred 19.8 GiB of 48.0 GiB VM-state, 262.1 MiB/s
2025-05-18 11:50:08 migration active, transferred 20.2 GiB of 48.0 GiB VM-state, 332.7 MiB/s
2025-05-18 11:50:09 migration active, transferred 20.5 GiB of 48.0 GiB VM-state, 317.9 MiB/s
2025-05-18 11:50:10 migration active, transferred 20.8 GiB of 48.0 GiB VM-state, 313.5 MiB/s
2025-05-18 11:50:11 migration active, transferred 21.1 GiB of 48.0 GiB VM-state, 294.8 MiB/s
2025-05-18 11:50:12 migration active, transferred 21.5 GiB of 48.0 GiB VM-state, 404.4 MiB/s
2025-05-18 11:50:13 migration active, transferred 21.5 GiB of 48.0 GiB VM-state, 6.9 GiB/s
2025-05-18 11:50:14 migration active, transferred 21.5 GiB of 48.0 GiB VM-state, 4.9 GiB/s
2025-05-18 11:50:15 migration active, transferred 21.5 GiB of 48.0 GiB VM-state, 4.8 GiB/s
2025-05-18 11:50:16 migration active, transferred 21.7 GiB of 48.0 GiB VM-state, 2.3 GiB/s
2025-05-18 11:50:17 migration active, transferred 21.8 GiB of 48.0 GiB VM-state, 332.0 MiB/s
2025-05-18 11:50:18 migration active, transferred 22.1 GiB of 48.0 GiB VM-state, 292.4 MiB/s
2025-05-18 11:50:19 migration active, transferred 22.4 GiB of 48.0 GiB VM-state, 340.8 MiB/s
2025-05-18 11:50:20 migration active, transferred 22.7 GiB of 48.0 GiB VM-state, 306.7 MiB/s
2025-05-18 11:50:21 migration active, transferred 23.0 GiB of 48.0 GiB VM-state, 280.4 MiB/s
tunnel: done handling forwarded connection from '/run/qemu-server/186.migrate'
2025-05-18 11:50:22 average migration speed: 86.7 MiB/s - downtime 154 ms
2025-05-18 11:50:22 migration completed, transferred 23.0 GiB VM-state


Maybe I have some bad RAM? I haven't had any issues with the node, so this is puzzling. Regardless, try freeing up some RAM on your target node and see if it makes a difference.
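
On the "free up RAM" point, a quick way to check headroom on the target node before migrating is to look at MemAvailable in /proc/meminfo. A small sketch; the helper names are my own, not a PVE API:

```python
# Hypothetical helper: check whether a target node has enough available
# memory for an incoming VM-state transfer, based on /proc/meminfo.

def parse_meminfo(text):
    """Parse /proc/meminfo text into a dict of field name -> value in kB."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, rest = line.split(":", 1)
        parts = rest.split()
        if parts and parts[0].isdigit():
            info[key.strip()] = int(parts[0])  # /proc/meminfo reports kB
    return info

def headroom_gib(info, vm_ram_gib):
    """GiB of MemAvailable left over after hosting a VM of vm_ram_gib GiB."""
    return info["MemAvailable"] / (1024 ** 2) - vm_ram_gib

if __name__ == "__main__":
    try:
        with open("/proc/meminfo") as f:
            info = parse_meminfo(f.read())
        print(f"MemAvailable: {info['MemAvailable'] / 1024 ** 2:.1f} GiB")
    except FileNotFoundError:
        pass  # not a Linux host
```

Run it on the target node before and after evacuating a VM to see whether MemAvailable actually changes as expected.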