Update hangs at "processing triggers for pve-ha-manager"


I manage a 4-node PVE 5.3 cluster, running corosync in unicast mode on OVH's vRack system.

This morning I decided to install the latest patches. Two of the nodes completed fine, but the other two hang at the step "Processing triggers for pve-ha-manager (2.0-6)" and have now been stuck there for the past 6 hours.
Restarting the pve-cluster service also hangs, so the two issues might be related.
I have checked /var/log/daemon.log and no errors are reported there.
Looking at /var/log/dpkg.log, the last line says "status half-configured pve-ha-manager:amd64 2.0-6"

Are there any other logfiles that I can look at to figure out why this last step is hanging?
* Please check the system's journal (`journalctl -r`) - usually that is where you get the most information
* Otherwise, the output of `ps auxwf` can give hints about where the dpkg process hangs (usually some child blocked on I/O)
* Is your cluster healthy (`pvecm status`, `ls /etc/pve/`, `touch /etc/pve/tmpfile`)?
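The checks above can be run in one pass on a stuck node. A minimal sketch - the `pvecm` and `/etc/pve` checks only make sense on a PVE host, so they are guarded here:

```shell
#!/bin/sh
# Most recent journal entries first -- usually the most informative place to look.
if command -v journalctl >/dev/null 2>&1; then
    journalctl -r -n 50 --no-pager
fi

# Process tree: look for a dpkg child stuck in 'D' state (uninterruptible I/O wait).
ps auxwf | grep -B2 -A2 '[d]pkg' || echo "no dpkg process found"

# Cluster health checks -- only meaningful on a PVE node, so guard them.
if command -v pvecm >/dev/null 2>&1; then
    pvecm status
    ls /etc/pve/
    # pmxcfs must be writable when the cluster is quorate
    touch /etc/pve/tmpfile && rm /etc/pve/tmpfile
else
    echo "pvecm not installed -- not a PVE host"
fi
```

The `[d]pkg` grep pattern is a common trick to keep the grep process itself out of its own results.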
Nothing seems to be obviously blocked on I/O, and `pvecm status` shows that everything is healthy.

What would happen if I forced a host reboot? Would that break the cluster?
- if everything is fine again, you could first try:
* `apt install -f`
* `dpkg --configure -a`

and check their output
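Assuming the node comes back healthy, the recovery steps above could be scripted as below (run as root; `apt install -f` repairs broken dependencies, and `dpkg --configure -a` finishes any packages left half-configured). The guard is only there so the sketch degrades gracefully on non-Debian systems:

```shell
#!/bin/sh
# Only meaningful on a Debian-based system, so guard the calls.
if command -v dpkg >/dev/null 2>&1; then
    # Finish configuring packages left in a "half-configured" state
    # (the state the stuck pve-ha-manager package is in per dpkg.log).
    dpkg --configure -a
    # Repair broken or missing dependencies, then retry pending installs.
    apt install -f
else
    echo "dpkg not available -- not a Debian-based system"
fi
```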

Rebooting itself should not harm the cluster - but if there is a problem with the cluster network, the cluster might not reach quorum and the guests cannot start.
Both nodes are still stuck at the same spot. Just today we added a new node to the cluster and it reached quorum fine, so the cluster seems to be OK.
I will attempt a forced reboot of one node tonight to see if it breaks completely or comes back up without problems. The nodes also accept cluster-level changes fine when I check the logs through a separate SSH connection.
I tried to manually shut down every running VM by connecting to them via SSH or RDP.
Then I initiated a soft reboot of the node over SSH. At some point it got stuck in a loop with sessions still running for root and some PVE sync service.
I forced a reset using IPMI and the server came up with no problems. I will attempt the same with the other host tomorrow.
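Before resorting to an IPMI reset, it can be worth listing which processes are in uninterruptible sleep ('D' state) - those cannot be killed and are typically what makes a soft reboot loop like this. A small sketch:

```shell
#!/bin/sh
# Print the header plus any process whose STAT field contains 'D'
# (uninterruptible sleep, usually blocked on I/O). The wchan column
# shows the kernel function the process is waiting in.
ps -eo pid,stat,wchan:32,cmd | awk 'NR==1 || $2 ~ /D/'
```

If a dpkg or pve-ha-manager child shows up here, a clean reboot will not complete until that I/O wait resolves, which explains why only a hard reset works.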

