[SOLVED] Problem migrating VMs in a Proxmox 9 cluster with VLAN on vmbr0.

Tacioandrade

Renowned Member
Hello everyone, I have a 3-node cluster with Directory-type storage that came from Proxmox VE 7 and is now on version 9.

I just added a new node, which we call pve03, and we are migrating the VMs back to this host after reinstalling it. However, during the installation, our analyst left the option to name the network cards as nic1, nic2, etc. checked.

The problem we are having is that when we try to migrate a VM that uses vmbr0 with a VLAN tag set on its network card, the migration fails with an error stating that there is no physical interface on bridge vmbr0.

Code:
2025-12-04 21:35:18 starting migration of VM 111 to node 'pve03' (192.168.25.205)
2025-12-04 21:35:18 found local disk 'local-ssd02:111/vm-111-disk-0.qcow2' (attached)
2025-12-04 21:35:18 starting VM 111 on remote node 'pve03'
2025-12-04 21:35:21 [pve03] no physical interface on bridge 'vmbr0'
2025-12-04 21:35:21 [pve03] kvm: -netdev type=tap,id=net0,ifname=tap111i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown,vhost=on: network script /usr/libexec/qemu-server/pve-bridge failed with status 6400
2025-12-04 21:35:21 [pve03] start failed: QEMU exited with code 1
2025-12-04 21:35:21 ERROR: online migrate failure - remote command failed with exit code 255
2025-12-04 21:35:21 aborting phase 2 - cleanup resources
2025-12-04 21:35:21 migrate_cancel
2025-12-04 21:35:22 ERROR: migration finished with problems (duration 00:00:06)
TASK ERROR: migration problems

When I migrate the same VM to another host that still uses the old interface name eno1, it works perfectly.

I would like to know if anyone else is having this problem.

Here is the pve version:
pveversion -v
proxmox-ve: 9.1.0 (running kernel: 6.17.2-1-pve)
pve-manager: 9.1.2 (running version: 9.1.2/9d436f37a0ac4172)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17: 6.17.2-2
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20250812.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.4
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.0
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.3
libpve-rs-perl: 0.11.3
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.0-1
proxmox-backup-file-restore: 4.1.0-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.2
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.1
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.0.8
pve-i18n: 3.6.5
pve-qemu-kvm: 10.1.2-4
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.1
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1
 
Have you checked the IP configuration of the hypervisor? Does vmbr0 indeed not have an interface attached?
Compare the output of "ip a" and "cat /etc/network/interfaces" across the nodes.

The error says that vmbr0 (which is a bridge) does not have an interface attached.
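
For a quick check directly on pve03, you can also list the bridge membership itself (plain sysfs/iproute2, nothing Proxmox-specific); a minimal sketch:

Code:
# ports currently enslaved to vmbr0
ls /sys/class/net/vmbr0/brif
# same information via iproute2
bridge link show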


good luck

Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Yes, the Proxmox addressing is correct; it's exactly the same as on the other Proxmox hosts. The only difference is that the network cards use the nicX naming.

I only discovered this problem when I tried to migrate a VM with three network cards: one on the management VLAN and the other two on VLANs 40 and 160.

When I remove the VLAN from the network card, I can migrate the VM to pve03.

Another thing I discovered is that I can't assign a VLAN to any network card on pve03; it always returns the following error when the VM is started:
"no physical interface on bridge 'vmbr0'"

However, when the VM doesn't have any VLANs on vmbr0, it works perfectly.

Code:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: nic0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether b8:ca:3a:f7:15:cf brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
    altname enxb8ca3af715cf
3: nic1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:ca:3a:f7:15:d0 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
    altname enxb8ca3af715d0
4: nic2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:ca:3a:f7:15:d1 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f0
    altname enxb8ca3af715d1
5: nic3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:ca:3a:f7:15:d2 brd ff:ff:ff:ff:ff:ff
    altname enp2s0f1
    altname enxb8ca3af715d2
6: nic4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f4:e9:d4:af:d1:80 brd ff:ff:ff:ff:ff:ff
    altname enxf4e9d4afd180
7: nic5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f4:e9:d4:af:d1:82 brd ff:ff:ff:ff:ff:ff
    altname enxf4e9d4afd182
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ca:3a:f7:15:cf brd ff:ff:ff:ff:ff:ff
    inet 192.168.25.205/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::baca:3aff:fef7:15cf/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

Code:
cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface nic0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.25.205/24
        gateway 192.168.25.1
        bridge-ports nic0
        bridge-stp off
        bridge-fd 0
 
I just reinstalled the host and left the Network PIN option unchecked, and after that everything worked perfectly.

I can't say for sure that the Network PIN option is the cause, but I've been working with PVE since 2014 and had ruled out all the other likely causes; from my perspective, the only one left was the Network PIN.
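
For anyone who wants to check whether a node is affected before reinstalling, a minimal sketch (assuming the pin files live under /usr/local/lib/systemd/network/, which may differ on other setups):

Code:
# interfaces showing up as nicX instead of enoX/enpX indicate pinned names
ip -br link
# .link files generated for the pinned names
ls /usr/local/lib/systemd/network/*.link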
 
I had a "bright" idea to take a look at the code and the cause seems "simple" enough:
https://github.com/proxmox/pve-common/blob/master/src/PVE/Network.pm#L691

Code:
    my @ifaces = ();
    my $dir = "/sys/class/net/$bridge/brif";
    PVE::Tools::dir_glob_foreach(
        $dir,
        '(((eth|bond)\d+|en[^.]+)(\.\d+)?)',
        sub {
            push @ifaces, $_[0];
        },
    );
    
    die "no physical interface on bridge '$bridge'\n" if scalar(@ifaces) == 0;

It simply only accepts interfaces whose names start with "eth", "bond" or "en". So it is definitely a good call to open a bug report.
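
To illustrate, a rough check of that pattern against typical interface names (assuming the pattern is matched against the full port name):

Code:
# test the bridge-port pattern from PVE::Network against typical interface names
perl -e 'for my $n (qw(eno1 enp1s0f0 eth0 bond0 nic0)) {
    printf "%-10s %s\n", $n, $n =~ /^(((eth|bond)\d+|en[^.]+)(\.\d+)?)$/ ? "accepted" : "rejected";
}'

Only nic0 is rejected here, so a bridge whose only port is named nicX ends up with an empty @ifaces list and hits the die above.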


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
  • Like
Reactions: Tacioandrade
Thank you very much!!! Hopefully the Proxmox team will notice and correct this as soon as possible.

However, for the time being I have been advising all partners to uncheck this option in upcoming installations to avoid problems.
 
  • Like
Reactions: bbgeek17
In my home lab, I renamed the network interfaces using:

Code:
pve-network-interface-pinning generate --prefix eth

After that, I deleted the originally generated files from /usr/local/lib/systemd/network/ (in my case the file was named, for example, 50-pmx-nic0.link).

After a reboot, the system came back online and the server is now running with the new eth0–ethX interfaces.
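
Condensed, the workaround looks roughly like this (verify the generated file names first; the glob assumes the 50-pmx-nicX.link naming from my setup):

Code:
# re-pin the interfaces with an eth prefix
pve-network-interface-pinning generate --prefix eth
# remove the previously generated nicX pin files
rm /usr/local/lib/systemd/network/50-pmx-nic*.link
# apply the new names on next boot
reboot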
 
As far as I can see, your /etc/network/interfaces has no VLAN-aware configuration enabled.

In this case, you must enable VLAN Aware on vmbr0, and make sure the VLAN trunk port is connected to nic0 (enp1s0f0), the bridge port of vmbr0; see the sketch below.
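
For illustration, a VLAN-aware version of the vmbr0 stanza above might look roughly like this (the bridge-vids range is only an example; adjust it to the VLANs you actually use):

Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.25.205/24
        gateway 192.168.25.1
        bridge-ports nic0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094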

/wichet s.
 
It simply only accepts interfaces whose names start with "eth", "bond" or "en".
FWIW I called this out in another thread recently too. The pinning tool https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_using_the_pve_network_interface_pinning_tool says it will use nic* but also ends with the paragraph, "It is recommended to assign a name starting with en or eth so that Proxmox VE recognizes the interface as a physical network device which can then be configured via the GUI..."
 
I've run into this today after I performed a full-upgrade and rebooted one node in a cluster. Now I am unable to migrate VMs to that node.

All physical interfaces start with "eth". I do have a bond0 interface that is used as the bridge port for vmbr0.
 
Stefan,

I applied the patch to the IPRoute2.pm file and was able to successfully migrate a VM as a test. However, I did see the following warnings in the output during the migration:

Code:
2025-12-16 10:37:37 use dedicated network address for sending migration traffic (10.XX.XX.XX)
2025-12-16 10:37:37 starting migration of VM 127 to node 'node-201' (10.XX.XX.XX)
2025-12-16 10:37:38 starting VM 127 on remote node 'node-201'
2025-12-16 10:37:40 [node-201] Use of uninitialized value in string eq at /usr/share/perl5/PVE/IPRoute2.pm line 79.
2025-12-16 10:37:40 [node-201] Use of uninitialized value in string eq at /usr/share/perl5/PVE/IPRoute2.pm line 79.
2025-12-16 10:37:41 start remote tunnel
2025-12-16 10:37:42 ssh tunnel ver 1
2025-12-16 10:37:42 starting online/live migration on unix:/run/qemu-server/127.migrate
2025-12-16 10:37:42 set migration capabilities
2025-12-16 10:37:42 migration downtime limit: 100 ms
2025-12-16 10:37:42 migration cachesize: 2.0 GiB
2025-12-16 10:37:42 set migration parameters
2025-12-16 10:37:43 start migrate command to unix:/run/qemu-server/127.migrate
2025-12-16 10:37:44 migration active, transferred 139.5 MiB of 16.0 GiB VM-state, 220.9 MiB/s
2025-12-16 10:37:45 migration active, transferred 387.2 MiB of 16.0 GiB VM-state, 250.1 MiB/s
2025-12-16 10:37:46 migration active, transferred 630.4 MiB of 16.0 GiB VM-state, 298.7 MiB/s
2025-12-16 10:37:47 migration active, transferred 964.0 MiB of 16.0 GiB VM-state, 366.7 MiB/s
2025-12-16 10:37:48 migration active, transferred 1.3 GiB of 16.0 GiB VM-state, 490.5 MiB/s
2025-12-16 10:37:49 migration active, transferred 1.8 GiB of 16.0 GiB VM-state, 617.9 MiB/s
2025-12-16 10:37:50 migration active, transferred 2.3 GiB of 16.0 GiB VM-state, 524.1 MiB/s
2025-12-16 10:37:51 migration active, transferred 2.8 GiB of 16.0 GiB VM-state, 485.6 MiB/s
2025-12-16 10:37:52 migration active, transferred 3.3 GiB of 16.0 GiB VM-state, 509.8 MiB/s
2025-12-16 10:37:53 migration active, transferred 3.7 GiB of 16.0 GiB VM-state, 602.0 MiB/s
2025-12-16 10:37:54 migration active, transferred 4.2 GiB of 16.0 GiB VM-state, 558.3 MiB/s
2025-12-16 10:37:55 migration active, transferred 4.7 GiB of 16.0 GiB VM-state, 553.0 MiB/s
2025-12-16 10:37:56 migration active, transferred 5.1 GiB of 16.0 GiB VM-state, 480.7 MiB/s
2025-12-16 10:37:57 migration active, transferred 5.6 GiB of 16.0 GiB VM-state, 485.5 MiB/s
2025-12-16 10:37:58 migration active, transferred 6.1 GiB of 16.0 GiB VM-state, 538.5 MiB/s
2025-12-16 10:37:59 migration active, transferred 6.6 GiB of 16.0 GiB VM-state, 449.1 MiB/s
2025-12-16 10:38:00 migration active, transferred 7.0 GiB of 16.0 GiB VM-state, 422.4 MiB/s
2025-12-16 10:38:01 migration active, transferred 7.3 GiB of 16.0 GiB VM-state, 471.5 MiB/s
2025-12-16 10:38:02 migration active, transferred 7.7 GiB of 16.0 GiB VM-state, 514.5 MiB/s
2025-12-16 10:38:03 migration active, transferred 8.2 GiB of 16.0 GiB VM-state, 411.8 MiB/s
2025-12-16 10:38:04 migration active, transferred 8.6 GiB of 16.0 GiB VM-state, 428.6 MiB/s
2025-12-16 10:38:05 migration active, transferred 8.9 GiB of 16.0 GiB VM-state, 425.6 MiB/s
2025-12-16 10:38:06 migration active, transferred 9.3 GiB of 16.0 GiB VM-state, 323.9 MiB/s
2025-12-16 10:38:07 migration active, transferred 9.6 GiB of 16.0 GiB VM-state, 407.8 MiB/s
2025-12-16 10:38:08 migration active, transferred 10.0 GiB of 16.0 GiB VM-state, 827.7 MiB/s
2025-12-16 10:38:09 migration active, transferred 10.5 GiB of 16.0 GiB VM-state, 483.1 MiB/s
2025-12-16 10:38:10 migration active, transferred 11.0 GiB of 16.0 GiB VM-state, 555.9 MiB/s
2025-12-16 10:38:11 migration active, transferred 11.5 GiB of 16.0 GiB VM-state, 509.7 MiB/s
2025-12-16 10:38:12 average migration speed: 565.5 MiB/s - downtime 153 ms
2025-12-16 10:38:12 migration completed, transferred 11.8 GiB VM-state
2025-12-16 10:38:12 migration status: completed
2025-12-16 10:38:12 stopping migration dbus-vmstate helpers
2025-12-16 10:38:12 migrated 0 conntrack state entries
2025-12-16 10:38:15 flushing conntrack state for guest on source node
2025-12-16 10:38:18 migration finished successfully (duration 00:00:41)
TASK OK
 
Thanks for testing! Sent a revised patch:
https://lore.proxmox.com/pve-devel/20251216160513.360391-1-s.hanreich@proxmox.com/T/#u
 

Stefan,

I've updated with the latest patch. Here's the output when migrating a VM:

Code:
2025-12-16 11:13:46 use dedicated network address for sending migration traffic (10.XX.XX.XX)
2025-12-16 11:13:46 starting migration of VM 273 to node 'node-201' (10.XX.XX.XX)
2025-12-16 11:13:46 starting VM 273 on remote node 'node-201'
2025-12-16 11:13:50 start remote tunnel
2025-12-16 11:13:51 ssh tunnel ver 1
2025-12-16 11:13:51 starting online/live migration on unix:/run/qemu-server/273.migrate
2025-12-16 11:13:51 set migration capabilities
2025-12-16 11:13:51 migration downtime limit: 100 ms
2025-12-16 11:13:51 migration cachesize: 2.0 GiB
2025-12-16 11:13:51 set migration parameters
2025-12-16 11:13:51 start migrate command to unix:/run/qemu-server/273.migrate
2025-12-16 11:13:52 migration active, transferred 536.4 MiB of 16.0 GiB VM-state, 558.3 MiB/s
2025-12-16 11:13:53 migration active, transferred 1.0 GiB of 16.0 GiB VM-state, 504.9 MiB/s
2025-12-16 11:13:54 migration active, transferred 1.5 GiB of 16.0 GiB VM-state, 502.5 MiB/s
2025-12-16 11:13:55 migration active, transferred 2.0 GiB of 16.0 GiB VM-state, 581.8 MiB/s
2025-12-16 11:13:56 migration active, transferred 2.5 GiB of 16.0 GiB VM-state, 543.8 MiB/s
2025-12-16 11:13:57 migration active, transferred 3.0 GiB of 16.0 GiB VM-state, 563.2 MiB/s
2025-12-16 11:13:58 migration active, transferred 3.5 GiB of 16.0 GiB VM-state, 626.3 MiB/s
2025-12-16 11:13:59 migration active, transferred 4.0 GiB of 16.0 GiB VM-state, 575.3 MiB/s
2025-12-16 11:14:00 migration active, transferred 4.5 GiB of 16.0 GiB VM-state, 517.1 MiB/s
2025-12-16 11:14:01 migration active, transferred 5.0 GiB of 16.0 GiB VM-state, 538.2 MiB/s
2025-12-16 11:14:02 migration active, transferred 5.5 GiB of 16.0 GiB VM-state, 507.3 MiB/s
2025-12-16 11:14:03 migration active, transferred 6.0 GiB of 16.0 GiB VM-state, 507.4 MiB/s
2025-12-16 11:14:04 migration active, transferred 6.5 GiB of 16.0 GiB VM-state, 568.0 MiB/s
2025-12-16 11:14:05 migration active, transferred 7.0 GiB of 16.0 GiB VM-state, 466.4 MiB/s
2025-12-16 11:14:06 migration active, transferred 7.4 GiB of 16.0 GiB VM-state, 456.8 MiB/s
2025-12-16 11:14:07 migration active, transferred 7.9 GiB of 16.0 GiB VM-state, 560.8 MiB/s
2025-12-16 11:14:08 migration active, transferred 8.3 GiB of 16.0 GiB VM-state, 452.0 MiB/s
2025-12-16 11:14:09 migration active, transferred 8.7 GiB of 16.0 GiB VM-state, 570.5 MiB/s
2025-12-16 11:14:10 migration active, transferred 9.3 GiB of 16.0 GiB VM-state, 572.9 MiB/s
2025-12-16 11:14:11 migration active, transferred 9.7 GiB of 16.0 GiB VM-state, 512.2 MiB/s
2025-12-16 11:14:12 migration active, transferred 10.1 GiB of 16.0 GiB VM-state, 303.0 MiB/s
2025-12-16 11:14:13 migration active, transferred 10.3 GiB of 16.0 GiB VM-state, 268.9 MiB/s
2025-12-16 11:14:14 migration active, transferred 10.6 GiB of 16.0 GiB VM-state, 264.6 MiB/s
2025-12-16 11:14:15 migration active, transferred 10.9 GiB of 16.0 GiB VM-state, 391.9 MiB/s
2025-12-16 11:14:16 migration active, transferred 11.4 GiB of 16.0 GiB VM-state, 415.9 MiB/s
2025-12-16 11:14:17 migration active, transferred 11.8 GiB of 16.0 GiB VM-state, 417.6 MiB/s
2025-12-16 11:14:18 migration active, transferred 12.2 GiB of 16.0 GiB VM-state, 427.3 MiB/s
2025-12-16 11:14:19 migration active, transferred 12.6 GiB of 16.0 GiB VM-state, 526.8 MiB/s
2025-12-16 11:14:20 migration active, transferred 13.1 GiB of 16.0 GiB VM-state, 524.3 MiB/s
2025-12-16 11:14:21 migration active, transferred 13.6 GiB of 16.0 GiB VM-state, 519.3 MiB/s
2025-12-16 11:14:22 migration active, transferred 14.1 GiB of 16.0 GiB VM-state, 528.9 MiB/s
2025-12-16 11:14:23 migration active, transferred 14.6 GiB of 16.0 GiB VM-state, 548.6 MiB/s
2025-12-16 11:14:25 average migration speed: 482.4 MiB/s - downtime 95 ms
2025-12-16 11:14:25 migration completed, transferred 15.4 GiB VM-state
2025-12-16 11:14:25 migration status: completed
2025-12-16 11:14:25 stopping migration dbus-vmstate helpers
2025-12-16 11:14:25 migrated 0 conntrack state entries
2025-12-16 11:14:27 flushing conntrack state for guest on source node
2025-12-16 11:14:30 migration finished successfully (duration 00:00:45)
TASK OK

Looks good! Thanks!
 
  • Like
Reactions: shanreich