Live VM Migration fails

More info below. From the node where migrations fail:
Code:
root@pve5:[~]:# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8: 6.8.12-13
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
ceph: 19.2.3-pve1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
dnsmasq: 2.91-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.7
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.16
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1
 
From a node that is okay to live-migrate to:

Code:
root@pve2:[~]:# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8: 6.8.12-13
ceph: 19.2.3-pve1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
dnsmasq: 2.91-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.7
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
pve-zsync: 2.4.0
qemu-server: 9.0.16
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1
 
Hello,
I have the same issue. Migrating a VM with Q35 works fine, but a VM with the Default (i440fx) machine type fails with errors. Below are the migration errors and my Proxmox versions.

VM with i440fx (v10) – migration error:
Code:
2025-08-23 11:37:02 use dedicated network address for sending migration traffic (10.10.1.61)
2025-08-23 11:37:02 starting migration of VM 101 to node 'dell-r750-01' (10.10.1.61)
2025-08-23 11:37:02 starting VM 101 on remote node 'dell-r750-01'
2025-08-23 11:37:04 start remote tunnel
2025-08-23 11:37:05 ssh tunnel ver 1
2025-08-23 11:37:05 starting online/live migration on unix:/run/qemu-server/101.migrate
2025-08-23 11:37:05 set migration capabilities
2025-08-23 11:37:05 migration downtime limit: 100 ms
2025-08-23 11:37:05 migration cachesize: 512.0 MiB
2025-08-23 11:37:05 set migration parameters
2025-08-23 11:37:05 start migrate command to unix:/run/qemu-server/101.migrate
2025-08-23 11:37:06 migration active, transferred 282.7 MiB of 4.0 GiB VM-state, 473.1 MiB/s
2025-08-23 11:37:07 average migration speed: 2.0 GiB/s - downtime 49 ms
2025-08-23 11:37:07 migration completed, transferred 588.8 MiB VM-state
2025-08-23 11:37:07 migration status: completed
2025-08-23 11:37:07 ERROR: tunnel replied 'ERR: resume failed - VM 101 qmp command 'query-status' failed - client closed connection' to command 'resume 101'
2025-08-23 11:37:07 stopping migration dbus-vmstate helpers
2025-08-23 11:37:07 migrated 0 conntrack state entries
400 Parameter verification failed.
node: VM 101 not running locally on node 'dell-r750-01'
proxy handler failed: pvesh create <api_path> --action <string> [OPTIONS] [FORMAT_OPTIONS]
2025-08-23 11:37:10 failed to stop dbus-vmstate on dell-r750-01: command 'pvesh create /nodes/dell-r750-01/qemu/101/dbus-vmstate --action stop' failed: exit code 2
2025-08-23 11:37:10 flushing conntrack state for guest on source node
2025-08-23 11:37:13 ERROR: migration finished with problems (duration 00:00:12)
TASK ERROR: migration problems

VM with Q35 – migration works fine:
Code:
2025-08-23 11:36:21 use dedicated network address for sending migration traffic (10.10.1.61)
2025-08-23 11:36:21 starting migration of VM 100 to node 'dell-r750-01' (10.10.1.61)
2025-08-23 11:36:21 starting VM 100 on remote node 'dell-r750-01'
2025-08-23 11:36:23 start remote tunnel
2025-08-23 11:36:24 ssh tunnel ver 1
2025-08-23 11:36:24 starting online/live migration on unix:/run/qemu-server/100.migrate
2025-08-23 11:36:24 set migration capabilities
2025-08-23 11:36:24 migration downtime limit: 100 ms
2025-08-23 11:36:24 migration cachesize: 512.0 MiB
2025-08-23 11:36:24 set migration parameters
2025-08-23 11:36:24 start migrate command to unix:/run/qemu-server/100.migrate
2025-08-23 11:36:25 migration active, transferred 307.2 MiB of 4.0 GiB VM-state, 419.1 MiB/s
2025-08-23 11:36:26 average migration speed: 2.0 GiB/s - downtime 88 ms
2025-08-23 11:36:26 migration completed, transferred 606.9 MiB VM-state
2025-08-23 11:36:26 migration status: completed
2025-08-23 11:36:28 stopping migration dbus-vmstate helpers
2025-08-23 11:36:28 migrated 0 conntrack state entries
2025-08-23 11:36:30 flushing conntrack state for guest on source node
2025-08-23 11:36:33 migration finished successfully (duration 00:00:13)
TASK OK

Proxmox version on source node:
Code:
root@dell-r740-03:~# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.18
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1

Proxmox version on target node:
Code:
root@dell-r750-01:~# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.18
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1

It seems like live migration fails only with i440fx VMs, while Q35 VMs migrate without issues.

Could this be a bug in the current Proxmox version?
 
I solved this. The node that migrations to and from kept failing on needed an adjustment in /etc/network/interfaces.

The other four nodes have vmbr3 set as:
Code:
auto vmbr3
iface vmbr3 inet static
        address 10.1.10.7/24
        gateway 10.1.10.1
        bridge-ports bond3
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-250
        mtu 9000

The node with migration issues had:
Code:
auto vmbr3
iface vmbr3 inet static
        address 10.1.10.15/24
        gateway 10.1.10.1
        bridge-ports bond3     
        bridge-stp off
        bridge-fd 0

Applying the same settings to the failing node and rebooting solved our migration issues.
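For reference, a minimal sketch of the adjusted vmbr3 stanza on the failing node after copying those settings over (address and gateway are that node's own values from above):
Code:
auto vmbr3
iface vmbr3 inet static
        address 10.1.10.15/24
        gateway 10.1.10.1
        bridge-ports bond3
        bridge-stp off
        bridge-fd 0
        # settings copied from the other nodes
        bridge-vlan-aware yes
        bridge-vids 2-250
        mtu 9000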
 
Hi, I have the same error.

Migrating a VM with Q35 works fine, but a VM with the Default (i440fx) machine type fails with errors.

Code:
()
2025-09-01 15:25:31 starting migration of VM 146245 to node 'xxxx' (xxxx)
2025-09-01 15:25:31 starting VM 146245 on remote node 'xxxxx'
2025-09-01 15:25:33 start remote tunnel
2025-09-01 15:25:34 ssh tunnel ver 1
2025-09-01 15:25:34 starting online/live migration on unix:/run/qemu-server/146245.migrate
2025-09-01 15:25:34 set migration capabilities
2025-09-01 15:25:34 migration downtime limit: 100 ms
2025-09-01 15:25:34 migration cachesize: 512.0 MiB
2025-09-01 15:25:34 set migration parameters
2025-09-01 15:25:34 start migrate command to unix:/run/qemu-server/146245.migrate
2025-09-01 15:25:35 migration active, transferred 307.7 MiB of 4.0 GiB VM-state, 652.5 MiB/s
2025-09-01 15:25:36 migration active, transferred 867.3 MiB of 4.0 GiB VM-state, 647.1 MiB/s
2025-09-01 15:25:37 average migration speed: 1.3 GiB/s - downtime 65 ms
2025-09-01 15:25:37 migration completed, transferred 1.6 GiB VM-state
2025-09-01 15:25:37 migration status: completed
2025-09-01 15:25:37 ERROR: tunnel replied 'ERR: resume failed - VM 146245 qmp command 'query-status' failed - client closed connection' to command 'resume 146245'
VM quit/powerdown failed - terminating now with SIGTERM
2025-09-01 15:25:45 ERROR: migration finished with problems (duration 00:00:14)
TASK ERROR: migration problems


Code:
2025-09-01 15:24:39 conntrack state migration not supported or disabled, active connections might get dropped
2025-09-01 15:24:40 starting migration of VM 146245 to node 'xxx' ()
2025-09-01 15:24:40 starting VM 146245 on remote node 'xxx'
2025-09-01 15:24:41 start remote tunnel
2025-09-01 15:24:42 ssh tunnel ver 1
2025-09-01 15:24:42 starting online/live migration on unix:/run/qemu-server/146245.migrate
2025-09-01 15:24:42 set migration capabilities
2025-09-01 15:24:42 migration downtime limit: 100 ms
2025-09-01 15:24:42 migration cachesize: 512.0 MiB
2025-09-01 15:24:42 set migration parameters
2025-09-01 15:24:42 start migrate command to unix:/run/qemu-server/146245.migrate
2025-09-01 15:24:43 migration active, transferred 490.4 MiB of 4.0 GiB VM-state, 575.3 MiB/s
2025-09-01 15:24:44 migration active, transferred 1013.9 MiB of 4.0 GiB VM-state, 521.9 MiB/s
2025-09-01 15:24:45 migration active, transferred 1.6 GiB of 4.0 GiB VM-state, 752.5 MiB/s
2025-09-01 15:24:46 average migration speed: 1.0 GiB/s - downtime 26 ms
2025-09-01 15:24:46 migration completed, transferred 1.9 GiB VM-state
2025-09-01 15:24:46 migration status: completed
2025-09-01 15:24:50 migration finished successfully (duration 00:00:11)
TASK OK

Source node
Code:
pveversion -v
proxmox-ve: 8.4.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.4.12 (running version: 8.4.12/c2ea8261d32a5020)
proxmox-kernel-helper: 8.1.4
proxmox-kernel-6.8.12-14-pve-signed: 6.8.12-14
proxmox-kernel-6.8: 6.8.12-14
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8.12-11-pve-signed: 6.8.12-11
proxmox-kernel-6.8.12-10-pve-signed: 6.8.12-10
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-6-pve-signed: 6.8.12-6
proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph: 19.2.2-pve1~bpo12+1
ceph-fuse: 19.2.2-pve1~bpo12+1
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.2
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.2
libpve-cluster-perl: 8.1.2
libpve-common-perl: 8.3.4
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.7
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
openvswitch-switch: 3.1.0-2+deb12u1
proxmox-backup-client: 3.4.6-1
proxmox-backup-file-restore: 3.4.6-1
proxmox-backup-restore-image: 0.7.0
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.4
proxmox-mail-forward: 0.3.3
proxmox-mini-journalreader: 1.5
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.13
pve-cluster: 8.1.2
pve-container: 5.3.0
pve-docs: 8.4.1
pve-edk2-firmware: 4.2025.02-4~bpo12+1
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.2
pve-firmware: 3.16-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.5
pve-qemu-kvm: 9.2.0-7
pve-xtermjs: 5.5.0-2
pve-zsync: 2.3.1
qemu-server: 8.4.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.8-pve1

Target node
Code:
pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.11-1-pve)
pve-manager: 9.0.6 (running version: 9.0.6/49c767b70aeb6648)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14: 6.14.11-1
proxmox-kernel-6.8.12-14-pve-signed: 6.8.12-14
proxmox-kernel-6.8: 6.8.12-14
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8.12-11-pve-signed: 6.8.12-11
proxmox-kernel-6.8.12-10-pve-signed: 6.8.12-10
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-6-pve-signed: 6.8.12-6
proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
amd64-microcode: 3.20250311.1
ceph: 19.2.3-pve1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
openvswitch-switch: 3.5.0-1+b1
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.1
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-4
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
pve-zsync: 2.4.0
qemu-server: 9.0.19
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.4-pve1
 
Hi,

@RobFantini @dmembibre @jjadczak could you share the configuration of an affected VM for completeness, i.e. qm config <ID> replacing <ID> with the actual ID of the VM? What is the error message in the target node's system journal at the time of the failed migrations?

@dmembibre @jjadczak Please also share the /etc/network/interfaces configuration from both source and target node.
 
Hello, I'll explain my situation a bit better:
We have a 9-node cluster, and until yesterday, VM migrations were working normally.
I updated one node to Proxmox 9, and any VM I try to migrate to it fails with an error. Yesterday, I tried changing the machine type to q35, and it worked, but today it no longer does. I just updated a second node to Proxmox 9 today, and migrations between the two Proxmox 9 nodes are working fine.

Below is the requested information. I'll provide two examples of virtual machines, but they are all experiencing the same issue.

Code:
qm config 1001002
boot: order=virtio0;net0;ide2
cores: 4
cpu: x86-64-v4
ide2: misc_proxmox:iso/OPNsense-25.1-dvd-amd64.iso,media=cdrom,size=2165500K
memory: 4096
meta: creation-qemu=6.2.0,ctime=1654701046
name: opnsense.testing
net0: virtio=0A:EF:E6:FA:E2:52,bridge=vmbr2
net1: virtio=C2:8C:2A:C7:BE:62,bridge=vmbr2,firewall=1,link_down=1,tag=1001
numa: 1
ostype: l26
protection: 1
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=5d9d1030-29e2-4203-8658-4a498dfff92d
sockets: 1
vcpus: 4
virtio0: vms_ceph:vm-1001002-disk-0,size=12G
vmgenid: 57aab9a5-293b-4cee-aed2-cd15697623b8

Migration logs

Code:
()
2025-09-02 10:18:31 starting migration of VM 1001002 to node 'proxmox-2401' (10.177.124.195)
2025-09-02 10:18:31 starting VM 1001002 on remote node 'proxmox-2401'
2025-09-02 10:18:33 start remote tunnel
2025-09-02 10:18:34 ssh tunnel ver 1
2025-09-02 10:18:34 starting online/live migration on unix:/run/qemu-server/1001002.migrate
2025-09-02 10:18:34 set migration capabilities
2025-09-02 10:18:34 migration downtime limit: 100 ms
2025-09-02 10:18:34 migration cachesize: 512.0 MiB
2025-09-02 10:18:34 set migration parameters
2025-09-02 10:18:34 start migrate command to unix:/run/qemu-server/1001002.migrate
2025-09-02 10:18:35 migration active, transferred 466.2 MiB of 4.0 GiB VM-state, 786.5 MiB/s
2025-09-02 10:18:36 migration active, transferred 922.8 MiB of 4.0 GiB VM-state, 959.2 MiB/s
2025-09-02 10:18:37 migration active, transferred 1.4 GiB of 4.0 GiB VM-state, 836.6 MiB/s
2025-09-02 10:18:38 average migration speed: 1.0 GiB/s - downtime 32 ms
2025-09-02 10:18:38 migration completed, transferred 1.5 GiB VM-state
2025-09-02 10:18:38 migration status: completed
2025-09-02 10:18:38 ERROR: tunnel replied 'ERR: resume failed - VM 1001002 not running' to command 'resume 1001002'
VM quit/powerdown failed - terminating now with SIGTERM
2025-09-02 10:18:46 ERROR: migration finished with problems (duration 00:00:15)
TASK ERROR: migration problems


Sep 02 10:18:32 proxmox-2401 qm[2055847]: start VM 1001002: UPID:proxmox-2401:001F5EA7:006CFA9D:68B6A858:qmstart:1001002:root@pam:
Sep 02 10:18:32 proxmox-2401 qm[2055846]: <root@pam> starting task UPID:proxmox-2401:001F5EA7:006CFA9D:68B6A858:qmstart:1001002:root@pam:
Sep 02 10:18:32 proxmox-2401 systemd[1]: Started 1001002.scope.
Sep 02 10:18:33 proxmox-2401 kernel: tap1001002i0: entered promiscuous mode
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055899]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i0
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055899]: ovs|00002|db_ctl_base|ERR|no port named tap1001002i0
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055901]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i0
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055901]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i0
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055902]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- add-port vmbr2 tap1001002i0 -- set Interface tap1001002i0 mtu_request=9000
Sep 02 10:18:33 proxmox-2401 kernel: tap1001002i1: entered promiscuous mode
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055923]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i1
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055923]: ovs|00002|db_ctl_base|ERR|no port named tap1001002i1
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055925]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i1
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055925]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i1
Sep 02 10:18:33 proxmox-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered blocking state
Sep 02 10:18:33 proxmox-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered disabled state
Sep 02 10:18:33 proxmox-2401 kernel: tap1001002i1: entered allmulticast mode
Sep 02 10:18:33 proxmox-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered blocking state
Sep 02 10:18:33 proxmox-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered forwarding state
Sep 02 10:18:33 proxmox-2401 ovs-vsctl[2055935]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- add-port vmbr2 fwln1001002o1 tag=1001 -- set Interface fwln1001002o1 mtu_request=9000 -- set Interface fwln1001002o1 type=internal
Sep 02 10:18:33 proxmox-2401 kernel: fwln1001002o1: entered promiscuous mode
Sep 02 10:18:33 proxmox-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered blocking state
Sep 02 10:18:33 proxmox-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered disabled state
Sep 02 10:18:33 proxmox-2401 kernel: fwln1001002o1: entered allmulticast mode
Sep 02 10:18:33 proxmox-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered blocking state
Sep 02 10:18:33 proxmox-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered forwarding state
Sep 02 10:18:33 proxmox-2401 qm[2055847]: VM 1001002 started with PID 2055858.
Sep 02 10:18:33 proxmox-2401 qm[2055846]: <root@pam> end task UPID:proxmox-2401:001F5EA7:006CFA9D:68B6A858:qmstart:1001002:root@pam: OK
Sep 02 10:18:38 proxmox-2401 kernel: tap1001002i1: left allmulticast mode
Sep 02 10:18:38 proxmox-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered disabled state
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056178]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002o1
Sep 02 10:18:38 proxmox-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered disabled state
Sep 02 10:18:38 proxmox-2401 kernel: fwln1001002o1 (unregistering): left allmulticast mode
Sep 02 10:18:38 proxmox-2401 kernel: fwln1001002o1 (unregistering): left promiscuous mode
Sep 02 10:18:38 proxmox-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered disabled state
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056183]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i1
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056183]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i1
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056186]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i1
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056186]: ovs|00002|db_ctl_base|ERR|no port named tap1001002i1
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056188]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i0
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056188]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i0
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056189]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i0
Sep 02 10:18:38 proxmox-2401 systemd[1]: 1001002.scope: Deactivated successfully.
Sep 02 10:18:38 proxmox-2401 systemd[1]: 1001002.scope: Consumed 2.564s CPU time, 3.2G memory peak.
Sep 02 10:18:38 proxmox-2401 qmeventd[2056193]: Starting cleanup for 1001002
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056195]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i1
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056195]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i1
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056196]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i1
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056196]: ovs|00002|db_ctl_base|ERR|no port named tap1001002i1
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056197]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i0
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056197]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i0
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056198]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i0
Sep 02 10:18:38 proxmox-2401 ovs-vsctl[2056198]: ovs|00002|db_ctl_base|ERR|no port named tap1001002i0
Sep 02 10:18:38 proxmox-2401 qmeventd[2056193]: Finished cleanup for 1001002

Code:
qm config 146245
agent: 1,fstrim_cloned_disks=1
boot: order=ide2;virtio0;net0
cores: 2
ide0: misc_proxmox:iso/virtio-win-0.1.266.iso,media=cdrom,size=707456K
ide2: misc_proxmox:iso/es-es_windows_server_2022_x64_dvd_c25dea55__1_.iso,media=cdrom,size=5432946K
memory: 4096
meta: creation-qemu=9.2.0,ctime=1743756506
name: windows-ansible
net0: virtio=BC:24:11:BB:6A:7C,bridge=vmbr2,tag=146
protection: 1
smbios1: uuid=cc622a15-1ebd-4916-96a7-25daeefcf45b
spice_enhancements: foldersharing=1,videostreaming=all
tags: irontec-cloudit
usb0: spice,usb3=1
vcpus: 2
virtio0: vms_ceph:vm-146245-disk-0,cache=writeback,iothread=1,size=40G
vmgenid: d39425ea-3c3f-4c04-98f6-139a4e21cf21

Migration logs

Code:
2025-09-02 10:43:11 starting migration of VM 146245 to node 'proxmox-2401' (10.177.124.195)
2025-09-02 10:43:11 starting VM 146245 on remote node 'proxmox-2401'
2025-09-02 10:43:13 start remote tunnel
2025-09-02 10:43:13 ssh tunnel ver 1
2025-09-02 10:43:13 starting online/live migration on unix:/run/qemu-server/146245.migrate
2025-09-02 10:43:13 set migration capabilities
2025-09-02 10:43:13 migration downtime limit: 100 ms
2025-09-02 10:43:13 migration cachesize: 512.0 MiB
2025-09-02 10:43:13 set migration parameters
2025-09-02 10:43:13 start migrate command to unix:/run/qemu-server/146245.migrate
2025-09-02 10:43:14 migration active, transferred 194.4 MiB of 4.0 GiB VM-state, 386.0 MiB/s
2025-09-02 10:43:15 average migration speed: 2.0 GiB/s - downtime 49 ms
2025-09-02 10:43:15 migration completed, transferred 367.9 MiB VM-state
2025-09-02 10:43:15 migration status: completed
2025-09-02 10:43:15 ERROR: tunnel replied 'ERR: resume failed - VM 146245 not running' to command 'resume 146245'
VM quit/powerdown failed - terminating now with SIGTERM
2025-09-02 10:43:24 ERROR: migration finished with problems (duration 00:00:13)
TASK ERROR: migration problems

Code:
Sep 02 10:43:12 proxmox-2401 qm[2098687]: <root@pam> starting task UPID:proxmox-2401:00200600:006F3CB1:68B6AE20:qmstart:146245:root@pam:
Sep 02 10:43:12 proxmox-2401 qm[2098688]: start VM 146245: UPID:proxmox-2401:00200600:006F3CB1:68B6AE20:qmstart:146245:root@pam:
Sep 02 10:43:12 proxmox-2401 systemd[1]: Started 146245.scope.
Sep 02 10:43:13 proxmox-2401 kernel: tap146245i0: entered promiscuous mode
Sep 02 10:43:13 proxmox-2401 ovs-vsctl[2098741]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap146245i0
Sep 02 10:43:13 proxmox-2401 ovs-vsctl[2098741]: ovs|00002|db_ctl_base|ERR|no port named tap146245i0
Sep 02 10:43:13 proxmox-2401 ovs-vsctl[2098743]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln146245i0
Sep 02 10:43:13 proxmox-2401 ovs-vsctl[2098743]: ovs|00002|db_ctl_base|ERR|no port named fwln146245i0
Sep 02 10:43:13 proxmox-2401 ovs-vsctl[2098745]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- add-port vmbr2 tap146245i0 tag=146 -- set Interface tap146245i0 mtu_request=9000
Sep 02 10:43:13 proxmox-2401 qm[2098688]: VM 146245 started with PID 2098707.
Sep 02 10:43:13 proxmox-2401 qm[2098687]: <root@pam> end task UPID:proxmox-2401:00200600:006F3CB1:68B6AE20:qmstart:146245:root@pam: OK
Sep 02 10:43:15 proxmox-2401 ovs-vsctl[2098811]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln146245i0
Sep 02 10:43:15 proxmox-2401 ovs-vsctl[2098811]: ovs|00002|db_ctl_base|ERR|no port named fwln146245i0
Sep 02 10:43:15 proxmox-2401 ovs-vsctl[2098812]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap146245i0
Sep 02 10:43:16 proxmox-2401 systemd[1]: 146245.scope: Deactivated successfully.
Sep 02 10:43:16 proxmox-2401 systemd[1]: 146245.scope: Consumed 1.157s CPU time, 483.6M memory peak.
Sep 02 10:43:16 proxmox-2401 qmeventd[2098820]: Starting cleanup for 146245
Sep 02 10:43:16 proxmox-2401 ovs-vsctl[2098824]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln146245i0
Sep 02 10:43:16 proxmox-2401 ovs-vsctl[2098824]: ovs|00002|db_ctl_base|ERR|no port named fwln146245i0
Sep 02 10:43:16 proxmox-2401 ovs-vsctl[2098825]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap146245i0
Sep 02 10:43:16 proxmox-2401 ovs-vsctl[2098825]: ovs|00002|db_ctl_base|ERR|no port named tap146245i0
Sep 02 10:43:16 proxmox-2401 qmeventd[2098820]: Finished cleanup for 146245
Sep 02 10:43:19 proxmox-2401 pvedaemon[2017284]: <dmembibre@irontec.com@Azure-AD> end task UPID:proxmox-2401:00200528:006F38F6:68B6AE16:vncproxy:146245:dmembibre@irontec.com@Azure-AD: OK

Source node Proxmox 8
Code:
iface lo inet loopback

iface ens3f0np0 inet manual

auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports ens3f0np0
        bridge-stp off
        bridge-fd 0


auto vmbr2
iface vmbr2 inet manual
  ovs_type OVSBridge
  ovs_ports enp129s0f0np0 vlan2 vlan21 vlan22
  ovs_mtu 9000

auto enp129s0f0np0
iface enp129s0f0np0 inet manual
  ovs_bridge vmbr2
  ovs_type OVSPort
  ovs_mtu 9000

# VLAN CLUSTER PROXMOX
auto vlan2
iface vlan2 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr2
    ovs_mtu 1500
    ovs_options tag=2
    address 10.177.124.196/27
    post-up ip route add 10.177.0.0/17 via 10.177.124.222
    post-up ip route add 10.80.254.0/23 via 10.177.124.222

# VLAN SAN
auto vlan21
iface vlan21 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr2
    ovs_mtu 1500
    ovs_options tag=21
    address 10.177.124.3/26


# VLAN CEPH
auto vlan22
iface vlan22 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr2
  ovs_options tag=22
  address 10.177.124.66/26
  ovs_mtu 9000

Target node Proxmox 9

Code:
cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface ens3f0np0 inet manual

auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports ens3f0np0
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
  ovs_type OVSBridge
  ovs_ports enp129s0f0np0 vlan2 vlan21 vlan22
  ovs_mtu 9000

auto enp129s0f0np0
iface enp129s0f0np0 inet manual
  ovs_bridge vmbr2
  ovs_type OVSPort
  ovs_mtu 9000

# VLAN CLUSTER PROXMOX
auto vlan2
iface vlan2 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr2
    ovs_mtu 1500
    ovs_options tag=2
    address 10.177.124.195/27
    post-up ip route add 10.177.0.0/17 via 10.177.124.222
    post-up ip route add 10.80.254.0/23 via 10.177.124.222

# VLAN SAN
auto vlan21
iface vlan21 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr2
    ovs_mtu 1500
    ovs_options tag=21
    address 10.177.124.2/26


# VLAN CEPH
auto vlan22
iface vlan22 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr2
  ovs_options tag=22
  address 10.177.124.65/26
  ovs_mtu 9000

source /etc/network/interfaces.d/*
 
Are these the full logs or did you filter by VM ID? I'd expect there to be messages like
Code:
kvm: get_pci_config_device: Bad config data: i=0x10 read: 61 device: 1 cmask: ff wmask: c0 w1cmask:0
kvm: Failed to load PCIDevice:config
kvm: Failed to load virtio-net:virtio
kvm: error while loading state for instance 0x0 of device '0000:00:12.0/virtio-net'
kvm: Error while loading VM state: Invalid argument

I can reproduce that error with an i440fx machine if the bridges on source and target have different MTUs, thanks to @RobFantini and @jjadczak for those hints!

And in the changelog for PVE 9 we have:
Leaving the MTU field for a VirtIO vNIC unset now defaults to the bridge MTU, rather than MTU 1500.
which might be related.
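A quick way to verify whether the two sides actually disagree is to compare the effective bridge MTU on both nodes (a sketch, assuming the vNIC's bridge is vmbr2 as in the configs above):
Code:
# run on both the source and the target node; the values should match
cat /sys/class/net/vmbr2/mtu
ip -d link show vmbr2 | grep -o 'mtu [0-9]*'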
 
Hi,

Full log

Code:
Sep 02 11:31:48 proxovhgra-2401 kernel: tap1001002i0: entered promiscuous mode
Sep 02 11:31:48 proxovhgra-2401 ovs-vsctl[2182816]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i0
Sep 02 11:31:48 proxovhgra-2401 ovs-vsctl[2182816]: ovs|00002|db_ctl_base|ERR|no port named tap1001002i0
Sep 02 11:31:48 proxovhgra-2401 ovs-vsctl[2182818]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i0
Sep 02 11:31:48 proxovhgra-2401 ovs-vsctl[2182818]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i0
Sep 02 11:31:48 proxovhgra-2401 ovs-vsctl[2182819]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- add-port vmbr2 tap1001002i0 -- set Interface tap1001002i0 mtu_request=9000
Sep 02 11:31:48 proxovhgra-2401 kernel: tap1001002i1: entered promiscuous mode
Sep 02 11:31:48 proxovhgra-2401 ovs-vsctl[2182840]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i1
Sep 02 11:31:48 proxovhgra-2401 ovs-vsctl[2182840]: ovs|00002|db_ctl_base|ERR|no port named tap1001002i1
Sep 02 11:31:49 proxovhgra-2401 ovs-vsctl[2182842]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i1
Sep 02 11:31:49 proxovhgra-2401 ovs-vsctl[2182842]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i1
Sep 02 11:31:49 proxovhgra-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered blocking state
Sep 02 11:31:49 proxovhgra-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered disabled state
Sep 02 11:31:49 proxovhgra-2401 kernel: tap1001002i1: entered allmulticast mode
Sep 02 11:31:49 proxovhgra-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered blocking state
Sep 02 11:31:49 proxovhgra-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered forwarding state
Sep 02 11:31:49 proxovhgra-2401 ovs-vsctl[2182851]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- add-port vmbr2 fwln1001002o1 tag=1001 -- set Interface fwln1001002o1 mtu_request=9000 -- set Interface fwln1001002o1 type=internal
Sep 02 11:31:49 proxovhgra-2401 kernel: fwln1001002o1: entered promiscuous mode
Sep 02 11:31:49 proxovhgra-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered blocking state
Sep 02 11:31:49 proxovhgra-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered disabled state
Sep 02 11:31:49 proxovhgra-2401 kernel: fwln1001002o1: entered allmulticast mode
Sep 02 11:31:49 proxovhgra-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered blocking state
Sep 02 11:31:49 proxovhgra-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered forwarding state
Sep 02 11:31:49 proxovhgra-2401 qm[2182523]: VM 1001002 started with PID 2182719.
Sep 02 11:31:49 proxovhgra-2401 qm[2182518]: <root@pam> end task UPID:proxovhgra-2401:00214D7B:0073AF82:68B6B983:qmstart:1001002:root@pam: OK
Sep 02 11:31:49 proxovhgra-2401 sshd-session[2182517]: Received disconnect from 10.177.124.196 port 33964:11: disconnected by user
Sep 02 11:31:49 proxovhgra-2401 sshd-session[2182517]: Disconnected from user root 10.177.124.196 port 33964
Sep 02 11:31:49 proxovhgra-2401 sshd-session[2182510]: pam_unix(sshd:session): session closed for user root
Sep 02 11:31:49 proxovhgra-2401 systemd-logind[2052]: Session 1588 logged out. Waiting for processes to exit.
Sep 02 11:31:49 proxovhgra-2401 systemd[1]: session-1588.scope: Deactivated successfully.
Sep 02 11:31:49 proxovhgra-2401 systemd[1]: session-1588.scope: Consumed 669ms CPU time, 152.1M memory peak.
Sep 02 11:31:49 proxovhgra-2401 systemd-logind[2052]: Removed session 1588.
Sep 02 11:31:49 proxovhgra-2401 sshd-session[2182880]: Accepted publickey for root from 10.177.124.196 port 33974 ssh2: RSA SHA256:S0KiVSwoohTW/RDZfgn/uzD7tObadWphxLPO0MuCKtY
Sep 02 11:31:49 proxovhgra-2401 sshd-session[2182880]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Sep 02 11:31:49 proxovhgra-2401 systemd-logind[2052]: New session 1589 of user root.
Sep 02 11:31:49 proxovhgra-2401 systemd[1]: Started session-1589.scope - Session 1589 of User root.
Sep 02 11:31:51 proxovhgra-2401 QEMU[2182719]: kvm: Features 0x308f802c unsupported. Allowed features: 0x1c0010179bfffe7
Sep 02 11:31:51 proxovhgra-2401 QEMU[2182719]: kvm: Failed to load virtio-net:virtio
Sep 02 11:31:51 proxovhgra-2401 QEMU[2182719]: kvm: error while loading state for instance 0x0 of device '0000:00:12.0/virtio-net'
Sep 02 11:31:51 proxovhgra-2401 QEMU[2182719]: kvm: load of migration failed: Operation not permitted
Sep 02 11:31:51 proxovhgra-2401 kernel: tap1001002i1: left allmulticast mode
Sep 02 11:31:51 proxovhgra-2401 kernel: fwbr1001002i1: port 1(tap1001002i1) entered disabled state
Sep 02 11:31:51 proxovhgra-2401 ovs-vsctl[2183017]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002o1
Sep 02 11:31:51 proxovhgra-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered disabled state
Sep 02 11:31:51 proxovhgra-2401 kernel: fwln1001002o1 (unregistering): left allmulticast mode
Sep 02 11:31:51 proxovhgra-2401 kernel: fwln1001002o1 (unregistering): left promiscuous mode
Sep 02 11:31:51 proxovhgra-2401 kernel: fwbr1001002i1: port 2(fwln1001002o1) entered disabled state
Sep 02 11:31:51 proxovhgra-2401 ovs-vsctl[2183020]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i1
Sep 02 11:31:51 proxovhgra-2401 ovs-vsctl[2183020]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i1
Sep 02 11:31:51 proxovhgra-2401 ovs-vsctl[2183023]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i1
Sep 02 11:31:51 proxovhgra-2401 ovs-vsctl[2183023]: ovs|00002|db_ctl_base|ERR|no port named tap1001002i1
Sep 02 11:31:51 proxovhgra-2401 ovs-vsctl[2183025]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln1001002i0
Sep 02 11:31:51 proxovhgra-2401 ovs-vsctl[2183025]: ovs|00002|db_ctl_base|ERR|no port named fwln1001002i0
Sep 02 11:31:51 proxovhgra-2401 ovs-vsctl[2183026]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port tap1001002i0
Sep 02 11:31:51 proxovhgra-2401 systemd[1]: 1001002.scope: Deactivated successfully.
Sep 02 11:31:51 proxovhgra-2401 systemd[1]: 1001002.scope: Consumed 1.927s CPU time, 1.1G memory peak.
 
I've found the following:

  • A machine with its network on vmbr2 fails to migrate, but if I use a bridge created with SDN (without touching the MTU), the migration from Proxmox 8 to Proxmox 9 completes without issues.
  • A machine with its network on vmbr2 migrates correctly between nodes with Proxmox 9.
  • vmbr2 has an MTU of 9000 on all nodes because we are using CEPH.
 
Thanks to all for the information! We are pretty sure the issue is actually caused by:
Leaving the MTU field for a VirtIO vNIC unset now defaults to the bridge MTU, rather than MTU 1500.

@shanreich found
https://bugzilla.redhat.com/show_bug.cgi?id=1449346
so indeed, the migration stream is different and cannot be handled if the host_mtu parameter is present on the QEMU command line for one side of the migration but not the other. We do not set the parameter when the bridge MTU is 1500.
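If you want to check whether a given side passes host_mtu for the vNIC, you can inspect the generated QEMU command line on the node currently running the VM, for example (a sketch, using VM 101 from the logs above; no output means host_mtu is not set):
Code:
qm showcmd 101 --pretty | grep -i host_mtu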
 
If I set the MTU to 1500 on the network interface over vmbr2, the migration still fails, and the logs show that it attempts to set the MTU to 9000.

(screenshots attached)

The bug report you're referencing is from 2017 and has also been closed. Are you going to investigate anything else, or is the solution simply to create bridges using SDN?
 
Hello, more logs:

Migration errors:

From R750-01 => R750-02

Code:
2025-09-02 14:23:03 use dedicated network address for sending migration traffic (172.16.2.62)
2025-09-02 14:23:03 starting migration of VM 102 to node 'dell-r750-02' (172.16.2.62)
2025-09-02 14:23:03 starting VM 102 on remote node 'dell-r750-02'
2025-09-02 14:23:06 start remote tunnel
2025-09-02 14:23:06 ssh tunnel ver 1
2025-09-02 14:23:06 starting online/live migration on unix:/run/qemu-server/102.migrate
2025-09-02 14:23:06 set migration capabilities
2025-09-02 14:23:06 migration downtime limit: 100 ms
2025-09-02 14:23:06 migration cachesize: 512.0 MiB
2025-09-02 14:23:06 set migration parameters
2025-09-02 14:23:06 start migrate command to unix:/run/qemu-server/102.migrate
2025-09-02 14:23:08 migration active, transferred 537.5 MiB of 4.0 GiB VM-state, 3.3 GiB/s
2025-09-02 14:23:09 average migration speed: 1.3 GiB/s - downtime 52 ms
2025-09-02 14:23:09 migration completed, transferred 585.8 MiB VM-state
2025-09-02 14:23:09 migration status: completed
2025-09-02 14:23:09 ERROR: tunnel replied 'ERR: resume failed - VM 102 not running' to command 'resume 102'
2025-09-02 14:23:09 stopping migration dbus-vmstate helpers
2025-09-02 14:23:09 migrated 0 conntrack state entries
400 Parameter verification failed.
node: VM 102 not running locally on node 'dell-r750-02'
proxy handler failed: pvesh create <api_path> --action <string> [OPTIONS] [FORMAT_OPTIONS]
2025-09-02 14:23:11 failed to stop dbus-vmstate on dell-r750-02: command 'pvesh create /nodes/dell-r750-02/qemu/102/dbus-vmstate --action stop' failed: exit code 2
2025-09-02 14:23:11 flushing conntrack state for guest on source node
2025-09-02 14:23:13 ERROR: migration finished with problems (duration 00:00:10)
TASK ERROR: migration problems

R750-01 network config:
Code:
auto lo
iface lo inet loopback

auto eno12399np0
iface eno12399np0 inet manual
    mtu 9000
    txqueuelen 2000
    post-up /sbin/ip link set dev eno12399np0 txqueuelen 2000

auto eno12409np1
iface eno12409np1 inet manual
    mtu 9000
    txqueuelen 2000
    post-up /sbin/ip link set dev eno12409np1 txqueuelen 2000

iface eno12419np2 inet manual

auto eno12429np3
iface eno12429np3 inet static
    address 10.70.80.1/24
#COROSYNC 2

iface eno8303 inet manual

iface eno8403 inet manual

iface idrac inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno12399np0 eno12409np1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    mtu 9000
    bond-lacp-rate fast
    txqueuelen 2000
    post-up /sbin/ip link set dev bond0 txqueuelen 2000

auto bond0.101
iface bond0.101 inet manual
    mtu 1500
    txqueuelen 1000
    post-up /sbin/ip link set dev bond0.101 txqueuelen 1000
#MGMT

auto bond0.1722
iface bond0.1722 inet static
    address 172.16.2.61/24
    mtu 9000
    txqueuelen 2000
    post-up /sbin/ip link set dev bond0.1722 txqueuelen 2000
#SAN

auto bond0.1717
iface bond0.1717 inet static
    address 172.17.1.61/24
    txqueuelen 100
    post-up /sbin/ip link set dev bond0.1717 txqueuelen 100
#COROSYNC

auto vmbr101
iface vmbr101 inet static
    address 10.10.1.61/24
    gateway 10.10.1.254
    bridge-ports bond0.101
    bridge-stp off
    bridge-fd 0
    mtu 1500
    txqueuelen 1000
    post-up /sbin/ip link set dev vmbr101 txqueuelen 1000

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    mtu 9000
    txqueuelen 2000
    post-up /sbin/ip link set dev vmbr0 txqueuelen 2000

source /etc/network/interfaces.d/*

R750-02 network config:
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno12399np0
iface eno12399np0 inet manual
    mtu 9000
    txqueuelen 2000
    post-up /sbin/ip link set dev eno12399np0 txqueuelen 2000

auto eno12409np1
iface eno12409np1 inet manual
    mtu 9000
    txqueuelen 2000
    post-up /sbin/ip link set dev eno12409np1 txqueuelen 2000

iface eno12419np2 inet manual

auto eno12429np3
iface eno12429np3 inet static
    address 10.70.80.2/24
#COROSYNC 2

iface eno8303 inet manual

iface eno8403 inet manual

iface idrac inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno12399np0 eno12409np1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    mtu 9000
    bond-lacp-rate fast
    txqueuelen 2000
    post-up /sbin/ip link set dev bond0 txqueuelen 2000

auto bond0.101
iface bond0.101 inet manual
    mtu 1500
    txqueuelen 1000
    post-up /sbin/ip link set dev bond0.101 txqueuelen 1000
#MGMT

auto bond0.1722
iface bond0.1722 inet static
    address 172.16.2.62/24
    mtu 9000
    txqueuelen 2000
    post-up /sbin/ip link set dev bond0.1722 txqueuelen 2000
#SAN

auto bond0.1717
iface bond0.1717 inet static
    address 172.17.1.62/24
    txqueuelen 100
    post-up /sbin/ip link set dev bond0.1717 txqueuelen 100
#COROSYNC

auto vmbr101
iface vmbr101 inet static
    address 10.10.1.62/24
    gateway 10.10.1.254
    bridge-ports bond0.101
    bridge-stp off
    bridge-fd 0
    mtu 1500
    txqueuelen 1000
    post-up /sbin/ip link set dev vmbr101 txqueuelen 1000

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    txqueuelen 2000
    post-up /sbin/ip link set dev vmbr0 txqueuelen 2000

source /etc/network/interfaces.d/*

qm config 102
Code:
agent: 1,fstrim_cloned_disks=1
boot: order=scsi0;ide2;net0
cores: 2
cpu: x86-64-v3
ide2: none,media=cdrom
memory: 4096
meta: creation-qemu=10.0.2,ctime=1755781648
name: VM-Test-3
net0: virtio=BC:24:11:A2:87:96,bridge=vmbr0,firewall=1,tag=120
numa: 0
ostype: l26
scsi0: Synology:102/vm-102-disk-0.qcow2,cache=writethrough,discard=on,iothread=1,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=b8f8b7d6-c9b2-4d2c-b1b8-3a20ad05a81b
sockets: 1
tags: TEST
vmgenid: 77754240-d325-46c2-b2a7-4c5da8ff0174


r750-02 pveversion:
Code:
pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.11-1-pve)
pve-manager: 9.0.6 (running version: 9.0.6/49c767b70aeb6648)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14: 6.14.11-1
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.1
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-4
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.19
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.4-pve1

r750-01 pveversion
Code:
pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.11-1-pve)
pve-manager: 9.0.6 (running version: 9.0.6/49c767b70aeb6648)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14: 6.14.11-1
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.1
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-4
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.19
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.4-pve1

I can still confirm that migration with Q35 works fine.
 