Proxmox 9.1.2 HA migration issues when rebooting node.

randymartin

New Member
Jan 15, 2025
When rebooting a node to install updates, the node goes into maintenance mode and tries to live-migrate the VMs.
However, it hangs with the following error:

task started by HA resource agent

2025-12-15 23:53:05 conntrack state migration not supported or disabled, active connections might get dropped
2025-12-15 23:53:05 ERROR: migration aborted (duration 00:00:00): org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
TASK ERROR: migration aborted

I run a 3-node Proxmox HA Ceph cluster.
Manually live-migrating the VMs works without any issues.

Any ideas?
 
Maybe try updating your nodes; the release notes of today's qemu-server 9.1.2 update include:
* dbus-vmstate: fix method call on dbus object resolving to wrong instance.

* migrate: remove left-over dbus-vmstate instance when migrating without
conntrack state.

EDIT: This is available on the no-subscription repository: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_package_repositories
EDIT2: Maybe the update caused a regression? Try downgrading qemu-server to version 9.1.1?
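If you want to test the regression theory, a downgrade could look like the sketch below. The exact package version string (including the Debian revision) is an assumption; use whatever `apt policy` actually lists on your node.

```shell
# Check which qemu-server versions are available on this node
apt policy qemu-server

# Downgrade to the previous release
# (version string "9.1.1-1" is a placeholder; pick one listed above)
apt install qemu-server=9.1.1-1
```

Note that running VMs keep their current QEMU process until they are stopped/started or migrated, so a package downgrade alone does not change already-running guests.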
 
I have the same problem since I updated today. You need to set manual maintenance mode before rebooting; for some reason, migrating works fine with this. When a VM is stuck in the faulty state, it helps to shut it down (not in the Proxmox web GUI, but on the host itself) to get out of that state.
Just wanted to share my experience as a workaround until it's fixed.
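For reference, the manual maintenance-mode workaround can be driven from the CLI roughly like this. The node name `pve1` and VMID `100` are placeholders for your own setup.

```shell
# Put the node into maintenance mode before rebooting;
# the HA manager will migrate its HA-managed services away
ha-manager crm-command node-maintenance enable pve1

# ...install updates and reboot, then return the node to service
ha-manager crm-command node-maintenance disable pve1

# If a VM is stuck in the faulty migration state, stop it
# on the host itself rather than via the web GUI
qm stop 100
```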
 
I have exactly the same behavior myself.
 
qemu-server 9.1.3, available on pve-test now, should fix this issue. Affected VMs do need to be stopped and started (or live-migrated *outside* of a node reboot/shutdown!) for the fix to take effect.
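To check whether a node already has the fixed package installed, something like this should do:

```shell
# Show the installed qemu-server version on this node
pveversion -v | grep qemu-server
```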