Hello,
I've encountered an issue where conntrack state migration does not seem to work when the VM being migrated has HA enabled.
I have tested different HA shutdown policies, but none of them make a difference. I've also tested disabling the host firewall, enabling the firewall while allowing everything, and having the firewall enabled with our own ruleset.
The VM is in a 6-node cluster with a qdevice and uses shared NFS storage.
Successful migration occurs when HA is disabled on the VM:
2026-02-21 14:05:00 use dedicated network address for sending migration traffic (192.168.1.3)
2026-02-21 14:05:01 starting migration of VM 1338 to node 'PVE3' (192.168.1.3)
2026-02-21 14:05:01 starting VM 1338 on remote node 'PVE3'
2026-02-21 14:05:02 start remote tunnel
2026-02-21 14:05:02 ssh tunnel ver 1
2026-02-21 14:05:02 starting online/live migration on unix:/run/qemu-server/1338.migrate
2026-02-21 14:05:02 set migration capabilities
2026-02-21 14:05:02 migration downtime limit: 100 ms
2026-02-21 14:05:02 migration cachesize: 1.0 GiB
2026-02-21 14:05:02 set migration parameters
2026-02-21 14:05:02 start migrate command to unix:/run/qemu-server/1338.migrate
2026-02-21 14:05:03 migration active, transferred 1.4 GiB of 8.0 GiB VM-state, 25.0 GiB/s
2026-02-21 14:05:04 average migration speed: 4.0 GiB/s - downtime 15 ms
2026-02-21 14:05:04 migration completed, transferred 1.4 GiB VM-state
2026-02-21 14:05:04 migration status: completed
2026-02-21 14:05:04 stopping migration dbus-vmstate helpers
2026-02-21 14:05:04 migrated 0 conntrack state entries
2026-02-21 14:05:06 flushing conntrack state for guest on source node
2026-02-21 14:05:08 migration finished successfully (duration 00:00:08)
TASK OK
Migration still succeeds when HA is enabled, but conntrack state is not migrated:
task started by HA resource agent
2026-02-21 14:13:47 conntrack state migration not supported or disabled, active connections might get dropped
2026-02-21 14:13:47 use dedicated network address for sending migration traffic (192.168.1.3)
2026-02-21 14:13:47 starting migration of VM 1338 to node 'PVE3' (192.168.1.3)
2026-02-21 14:13:47 starting VM 1338 on remote node 'PVE3'
2026-02-21 14:13:48 start remote tunnel
2026-02-21 14:13:49 ssh tunnel ver 1
2026-02-21 14:13:49 starting online/live migration on unix:/run/qemu-server/1338.migrate
2026-02-21 14:13:49 set migration capabilities
2026-02-21 14:13:49 migration downtime limit: 100 ms
2026-02-21 14:13:49 migration cachesize: 1.0 GiB
2026-02-21 14:13:49 set migration parameters
2026-02-21 14:13:49 start migrate command to unix:/run/qemu-server/1338.migrate
2026-02-21 14:13:50 migration active, transferred 1.3 GiB of 8.0 GiB VM-state, 27.9 GiB/s
2026-02-21 14:13:51 average migration speed: 4.0 GiB/s - downtime 38 ms
2026-02-21 14:13:51 migration completed, transferred 1.4 GiB VM-state
2026-02-21 14:13:51 migration status: completed
2026-02-21 14:13:53 migration finished successfully (duration 00:00:06)
TASK OK
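For anyone comparing results: a quick way to check whether tracked connections for the guest actually exist on the target node after migration is to query the kernel's conntrack table directly. This is a minimal sketch using the conntrack-tools package; `GUEST_IP` is a placeholder for the VM's actual address, not something from the logs above, and the command needs root.

```shell
#!/bin/sh
# Placeholder guest address -- replace with the VM's real IP.
GUEST_IP="${GUEST_IP:-192.0.2.10}"

if command -v conntrack >/dev/null 2>&1; then
    # Count tracked connections originating from the guest on this node
    # (conntrack-tools package, requires root).
    conntrack -L --orig-src "$GUEST_IP" 2>/dev/null | wc -l
else
    echo "conntrack-tools not installed"
fi
```

Running this on the source node right before migration and on the target node right after should show whether the entries were carried over (as the "migrated 0 conntrack state entries" line suggests they are in the non-HA case) or simply dropped.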
pveversion -v
proxmox-ve: 9.1.0 (running kernel: 6.17.9-1-pve)
pve-manager: 9.1.5 (running version: 9.1.5/80cf92a64bef6889)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.9-1-pve-signed: 6.17.9-1
proxmox-kernel-6.17: 6.17.9-1
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
amd64-microcode: 3.20251202.1~bpo13+1
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx12
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.2
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.5
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.7
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.5
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-4
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.2-1
proxmox-backup-file-restore: 4.1.2-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.5
pve-cluster: 9.0.7
pve-container: 6.1.1
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.1.0
pve-i18n: 3.6.6
pve-qemu-kvm: 10.1.2-6
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.4
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.4.0-pve1
Anyone got any good ideas on what the issue could be?