After updating Proxmox 8.3.3, HA OS stopped booting

bulbazavr

New Member
Jan 30, 2025
Hello! Home Assistant OS has been installed for over a year. Today I updated Proxmox, and after rebooting the VM won't start; it stops immediately.

Code:
Requesting HA start for VM 100
Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/HA/Resources.pm line 40.
TASK OK

Code:
Jan 30 20:20:42 proxmox pvescheduler[966]: starting server
Jan 30 20:20:42 proxmox systemd[1]: Started pvescheduler.service - Proxmox VE scheduler.
Jan 30 20:20:42 proxmox systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 20:20:42 proxmox systemd[1]: Reached target graphical.target - Graphical Interface.
Jan 30 20:20:42 proxmox systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP...
Jan 30 20:20:42 proxmox systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 30 20:20:42 proxmox systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP.
Jan 30 20:20:42 proxmox systemd[1]: Startup finished in 3.969s (firmware) + 7.777s (loader) + 2.252s (kernel) + 11.601s (userspace) = 25.602s.
Jan 30 20:20:46 proxmox chronyd[750]: Selected source 94.100.180.133 (2.debian.pool.ntp.org)
Jan 30 20:20:46 proxmox chronyd[750]: System clock TAI offset set to 37 seconds
Jan 30 20:21:03 proxmox systemd[1]: systemd-fsckd.service: Deactivated successfully.
Jan 30 20:21:04 proxmox systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 30 20:21:10 proxmox pvedaemon[945]: Use of uninitialized value in string eq at /usr/share/perl5/PVE/API2/HA/Resources.pm line 40.
Jan 30 20:21:14 proxmox pvedaemon[947]: <root@pam> starting task UPID:proxmox:0000044C:000011BA:679BB50A:hastart:100:root@pam:
Jan 30 20:21:15 proxmox pvedaemon[947]: <root@pam> end task UPID:proxmox:0000044C:000011BA:679BB50A:hastart:100:root@pam: OK
Jan 30 20:21:59 proxmox pvedaemon[945]: <root@pam> starting task UPID:proxmox:00000531:0000232F:679BB537:hastart:100:root@pam:
Jan 30 20:22:00 proxmox pvedaemon[945]: <root@pam> end task UPID:proxmox:00000531:0000232F:679BB537:hastart:100:root@pam: OK

[Attachment: Screenshot 2025-01-30 203039.png] [Attachment: Screenshot 2025-01-30 203154.png]
 
The issue was here. But why didn’t they start automatically, and why is there no error message when HA OS starts?

[Attachment: Screenshot 2025-01-30 204348.png]
 
It appears you are not running a cluster but rather a single node. Those two services are therefore disabled, as you can see in the next column; this is how it should be. What are you doing with the HA (High Availability) start?:
Requesting HA start for VM 100

It would appear that either this node was once clustered or that you added this VM 100 to HA resources.
But I'm not sure why you would do that. AFAIK, there is no gain or difference on a single node for High Availability.
Maybe I'm missing something...
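
If you want to take VM 100 back out of HA management, something along these lines should do it from a root shell on the node (just a sketch - vm:100 is assumed from the task log above, so check the config listing before removing anything):

Code:
# check whether this node is actually part of a cluster
pvecm status

# list the HA resources that are still configured
ha-manager config

# if vm:100 is listed and you don't want it HA-managed, remove the resource
ha-manager remove vm:100

# afterwards a plain (non-HA) start should go through
qm start 100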
 
unexpected shutdown.
I'm not sure of your use-case for this. If the (single) host node got powered down or froze/became inaccessible, that HA resource is not going to help you at all, and when the node comes back up / gets rebooted, your VM is set to start on boot anyway. If you powered down just the VM yourself, you presumably had a reason for doing so (maintenance etc.) and would not want it to just start again. If the use-case is the VM freezing/locking up, again HA is not going to do anything, since as far as the host is concerned the VM is still running. The only scenario left would be the KVM process getting OOM-killed (out of memory) - a scenario that should NOT happen - in which case something is wrong with your host/guest setup and needs fixing.

In short, HA resources are not a replacement for a proper monitoring/watchdog system for your VM. I would not use them myself on a single node.
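
If you really do want the host to bring the VM back when its KVM process dies, a small cron job is usually enough. A minimal sketch, assuming VM ID 100 and a five-minute check interval (the script path and interval are just examples):

Code:
#!/bin/sh
# /usr/local/bin/vm100-watchdog.sh (example path)
# root crontab entry: */5 * * * * /usr/local/bin/vm100-watchdog.sh

VMID=100

# "qm status" reports "status: running" while the QEMU/KVM process is alive
if ! qm status "$VMID" | grep -q "status: running"; then
    logger -t vm-watchdog "VM $VMID is not running, starting it"
    qm start "$VMID"
fi

The logger call just leaves a note in the journal so you can see afterwards when and how often the watchdog had to kick in.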
 
The virtual machine running Home Assistant crashes ~every three months with something like a "Segmentation fault." This can happen at night, leaving the entire smart home offline until I manually restart it. That's why I was looking for ways to have Proxmox automatically "revive" the crashed VM.
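
If the crash turns out to be the guest kernel panicking rather than the QEMU process dying, a host-side status check would not notice it, because the VM still reports as running. One thing I may try is an emulated watchdog device that resets the VM when the guest stops feeding it - just a sketch, assuming VM 100, and I would still have to verify whether HA OS actually arms a watchdog out of the box:

Code:
# emulated i6300esb watchdog; resets the VM if the guest stops feeding it
# (only triggers once the guest OS has armed the watchdog)
qm set 100 --watchdog i6300esb,action=reset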