Live VM Migration fails

More info: pveversion output from the node where live migrations fail:
Code:
root@pve5:[~]:# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8: 6.8.12-13
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
ceph: 19.2.3-pve1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
dnsmasq: 2.91-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.7
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.16
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1
 
For comparison, pveversion output from a node that is okay to live-migrate to:

Code:
root@pve2:[~]:# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8: 6.8.12-13
ceph: 19.2.3-pve1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
dnsmasq: 2.91-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.7
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
pve-zsync: 2.4.0
qemu-server: 9.0.16
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1
 
Hello,
I have the same issue. Migration of a VM with Q35 works fine, but a VM with Default (i440fx) fails with errors. Below are the migration errors and my Proxmox versions.

VM with i440fx (v10) – migration error:
Code:
2025-08-23 11:37:02 use dedicated network address for sending migration traffic (10.10.1.61)
2025-08-23 11:37:02 starting migration of VM 101 to node 'dell-r750-01' (10.10.1.61)
2025-08-23 11:37:02 starting VM 101 on remote node 'dell-r750-01'
2025-08-23 11:37:04 start remote tunnel
2025-08-23 11:37:05 ssh tunnel ver 1
2025-08-23 11:37:05 starting online/live migration on unix:/run/qemu-server/101.migrate
2025-08-23 11:37:05 set migration capabilities
2025-08-23 11:37:05 migration downtime limit: 100 ms
2025-08-23 11:37:05 migration cachesize: 512.0 MiB
2025-08-23 11:37:05 set migration parameters
2025-08-23 11:37:05 start migrate command to unix:/run/qemu-server/101.migrate
2025-08-23 11:37:06 migration active, transferred 282.7 MiB of 4.0 GiB VM-state, 473.1 MiB/s
2025-08-23 11:37:07 average migration speed: 2.0 GiB/s - downtime 49 ms
2025-08-23 11:37:07 migration completed, transferred 588.8 MiB VM-state
2025-08-23 11:37:07 migration status: completed
2025-08-23 11:37:07 ERROR: tunnel replied 'ERR: resume failed - VM 101 qmp command 'query-status' failed - client closed connection' to command 'resume 101'
2025-08-23 11:37:07 stopping migration dbus-vmstate helpers
2025-08-23 11:37:07 migrated 0 conntrack state entries
400 Parameter verification failed.
node: VM 101 not running locally on node 'dell-r750-01'
proxy handler failed: pvesh create <api_path> --action <string> [OPTIONS] [FORMAT_OPTIONS]
2025-08-23 11:37:10 failed to stop dbus-vmstate on dell-r750-01: command 'pvesh create /nodes/dell-r750-01/qemu/101/dbus-vmstate --action stop' failed: exit code 2
2025-08-23 11:37:10 flushing conntrack state for guest on source node
2025-08-23 11:37:13 ERROR: migration finished with problems (duration 00:00:12)
TASK ERROR: migration problems

VM with Q35 – migration works fine:
Code:
2025-08-23 11:36:21 use dedicated network address for sending migration traffic (10.10.1.61)
2025-08-23 11:36:21 starting migration of VM 100 to node 'dell-r750-01' (10.10.1.61)
2025-08-23 11:36:21 starting VM 100 on remote node 'dell-r750-01'
2025-08-23 11:36:23 start remote tunnel
2025-08-23 11:36:24 ssh tunnel ver 1
2025-08-23 11:36:24 starting online/live migration on unix:/run/qemu-server/100.migrate
2025-08-23 11:36:24 set migration capabilities
2025-08-23 11:36:24 migration downtime limit: 100 ms
2025-08-23 11:36:24 migration cachesize: 512.0 MiB
2025-08-23 11:36:24 set migration parameters
2025-08-23 11:36:24 start migrate command to unix:/run/qemu-server/100.migrate
2025-08-23 11:36:25 migration active, transferred 307.2 MiB of 4.0 GiB VM-state, 419.1 MiB/s
2025-08-23 11:36:26 average migration speed: 2.0 GiB/s - downtime 88 ms
2025-08-23 11:36:26 migration completed, transferred 606.9 MiB VM-state
2025-08-23 11:36:26 migration status: completed
2025-08-23 11:36:28 stopping migration dbus-vmstate helpers
2025-08-23 11:36:28 migrated 0 conntrack state entries
2025-08-23 11:36:30 flushing conntrack state for guest on source node
2025-08-23 11:36:33 migration finished successfully (duration 00:00:13)
TASK OK

Proxmox version on source node:
Code:
root@dell-r740-03:~# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.18
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1

Proxmox version on target node:
Code:
root@dell-r750-01:~# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.18
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1

It seems like live migration fails only with i440fx VMs, while Q35 VMs migrate without issues.

Could this be a bug in the current Proxmox version?
 
I solved this. The node where migrations to and from would fail needed an adjustment in /etc/network/interfaces.

The other 4 nodes have vmbr3 set as:
Code:
auto vmbr3
iface vmbr3 inet static
        address 10.1.10.7/24
        gateway 10.1.10.1
        bridge-ports bond3
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-250
        mtu 9000

The node with migration issues had:
Code:
auto vmbr3
iface vmbr3 inet static
        address 10.1.10.15/24
        gateway 10.1.10.1
        bridge-ports bond3     
        bridge-stp off
        bridge-fd 0

Applying the same settings to the failing node and rebooting solved our migration issues.
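
In case it helps anyone else chasing the same problem: a quick way to spot this kind of mismatch is to compare the effective bridge settings and MTU on every node, and to confirm that jumbo frames actually pass over the migration network. This is only a sketch assuming the same vmbr3 / MTU 9000 layout shown above; adjust the bridge name and the peer address (10.1.10.7 is taken from the config above) to your setup.

Code:
# show the effective MTU and bridge options (vlan-aware etc.) for the bridge
ip -d link show vmbr3

# list the VLANs the bridge knows about (only meaningful on a vlan-aware bridge)
bridge vlan show dev vmbr3

# check that jumbo frames really fit end to end: 8972 = 9000 - 20 (IP) - 8 (ICMP)
ping -M do -s 8972 10.1.10.7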
 
Hi, I have the same error.

Migration of a VM with Q35 works fine, but a VM with Default (i440fx) fails with errors.

Failed migration:
Code:
()
2025-09-01 15:25:31 starting migration of VM 146245 to node 'xxxx' (xxxx)
2025-09-01 15:25:31 starting VM 146245 on remote node 'xxxxx'
2025-09-01 15:25:33 start remote tunnel
2025-09-01 15:25:34 ssh tunnel ver 1
2025-09-01 15:25:34 starting online/live migration on unix:/run/qemu-server/146245.migrate
2025-09-01 15:25:34 set migration capabilities
2025-09-01 15:25:34 migration downtime limit: 100 ms
2025-09-01 15:25:34 migration cachesize: 512.0 MiB
2025-09-01 15:25:34 set migration parameters
2025-09-01 15:25:34 start migrate command to unix:/run/qemu-server/146245.migrate
2025-09-01 15:25:35 migration active, transferred 307.7 MiB of 4.0 GiB VM-state, 652.5 MiB/s
2025-09-01 15:25:36 migration active, transferred 867.3 MiB of 4.0 GiB VM-state, 647.1 MiB/s
2025-09-01 15:25:37 average migration speed: 1.3 GiB/s - downtime 65 ms
2025-09-01 15:25:37 migration completed, transferred 1.6 GiB VM-state
2025-09-01 15:25:37 migration status: completed
2025-09-01 15:25:37 ERROR: tunnel replied 'ERR: resume failed - VM 146245 qmp command 'query-status' failed - client closed connection' to command 'resume 146245'
VM quit/powerdown failed - terminating now with SIGTERM
2025-09-01 15:25:45 ERROR: migration finished with problems (duration 00:00:14)
TASK ERROR: migration problems


Successful migration:
Code:
2025-09-01 15:24:39 conntrack state migration not supported or disabled, active connections might get dropped
2025-09-01 15:24:40 starting migration of VM 146245 to node 'xxx' ()
2025-09-01 15:24:40 starting VM 146245 on remote node 'xxx'
2025-09-01 15:24:41 start remote tunnel
2025-09-01 15:24:42 ssh tunnel ver 1
2025-09-01 15:24:42 starting online/live migration on unix:/run/qemu-server/146245.migrate
2025-09-01 15:24:42 set migration capabilities
2025-09-01 15:24:42 migration downtime limit: 100 ms
2025-09-01 15:24:42 migration cachesize: 512.0 MiB
2025-09-01 15:24:42 set migration parameters
2025-09-01 15:24:42 start migrate command to unix:/run/qemu-server/146245.migrate
2025-09-01 15:24:43 migration active, transferred 490.4 MiB of 4.0 GiB VM-state, 575.3 MiB/s
2025-09-01 15:24:44 migration active, transferred 1013.9 MiB of 4.0 GiB VM-state, 521.9 MiB/s
2025-09-01 15:24:45 migration active, transferred 1.6 GiB of 4.0 GiB VM-state, 752.5 MiB/s
2025-09-01 15:24:46 average migration speed: 1.0 GiB/s - downtime 26 ms
2025-09-01 15:24:46 migration completed, transferred 1.9 GiB VM-state
2025-09-01 15:24:46 migration status: completed
2025-09-01 15:24:50 migration finished successfully (duration 00:00:11)
TASK OK

Source node:
Code:
pveversion -v
proxmox-ve: 8.4.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.4.12 (running version: 8.4.12/c2ea8261d32a5020)
proxmox-kernel-helper: 8.1.4
proxmox-kernel-6.8.12-14-pve-signed: 6.8.12-14
proxmox-kernel-6.8: 6.8.12-14
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8.12-11-pve-signed: 6.8.12-11
proxmox-kernel-6.8.12-10-pve-signed: 6.8.12-10
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-6-pve-signed: 6.8.12-6
proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph: 19.2.2-pve1~bpo12+1
ceph-fuse: 19.2.2-pve1~bpo12+1
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.2
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.2
libpve-cluster-perl: 8.1.2
libpve-common-perl: 8.3.4
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.7
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
openvswitch-switch: 3.1.0-2+deb12u1
proxmox-backup-client: 3.4.6-1
proxmox-backup-file-restore: 3.4.6-1
proxmox-backup-restore-image: 0.7.0
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.4
proxmox-mail-forward: 0.3.3
proxmox-mini-journalreader: 1.5
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.13
pve-cluster: 8.1.2
pve-container: 5.3.0
pve-docs: 8.4.1
pve-edk2-firmware: 4.2025.02-4~bpo12+1
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.2
pve-firmware: 3.16-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.5
pve-qemu-kvm: 9.2.0-7
pve-xtermjs: 5.5.0-2
pve-zsync: 2.3.1
qemu-server: 8.4.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.8-pve1

Target node:
Code:
pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.11-1-pve)
pve-manager: 9.0.6 (running version: 9.0.6/49c767b70aeb6648)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14: 6.14.11-1
proxmox-kernel-6.8.12-14-pve-signed: 6.8.12-14
proxmox-kernel-6.8: 6.8.12-14
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8.12-11-pve-signed: 6.8.12-11
proxmox-kernel-6.8.12-10-pve-signed: 6.8.12-10
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-6-pve-signed: 6.8.12-6
proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
amd64-microcode: 3.20250311.1
ceph: 19.2.3-pve1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
openvswitch-switch: 3.5.0-1+b1
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.1
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-4
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
pve-zsync: 2.4.0
qemu-server: 9.0.19
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.4-pve1
 
Hi,

@RobFantini @dmembibre @jjadczak could you share the configuration of an affected VM for completeness, i.e. the output of qm config <ID>, replacing <ID> with the actual ID of the VM? What error messages appear in the target node's system journal at the time of the failed migrations?

@dmembibre @jjadczak Please also share the /etc/network/interfaces configuration from both the source and the target node.
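
For reference, that information can be collected roughly like this; the VM ID and timestamps below are only placeholders taken from the logs earlier in the thread, so substitute your own values:

Code:
# configuration of the affected VM (run on the source node)
qm config 101

# system journal on the target node around the time of the failed migration
journalctl --since "2025-08-23 11:36" --until "2025-08-23 11:38"

# network configuration, on both the source and the target node
cat /etc/network/interfaces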