Hello all!
I have a 3-node Proxmox 9.1.5 cluster with Ceph RBD storage. I want to implement a sort of "shutdown HA" state for VMs in the event of a cluster node failure. Rather than using the HA feature to automatically restart VMs on other available nodes when the host running them fails, I would simply like the affected VMs to appear in a clean, available, shutdown state, with the ability to be manually started if needed.
Normally, without HA enabled, affected VMs become unreachable via the GUI and eventually show up in an unknown state marked with a '?'. I would like them to appear simply as shutdown / "migrated" to one of the other 2 nodes. I was hoping this could be achieved by applying an HA rule with the desired state set to "Shutdown," but that does not work. Interestingly, a similar approach seems to work for templates, since template VMs are not meant to run and are only used for cloning.
Any suggestions would be appreciated. I suppose I could implement some custom script or quirky hack that polls node availability and then moves the VM config files between node directories on the FUSE-shared /etc/pve mount, but I would like to avoid that, hoping this feature is available as a standard option.
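For reference, the "quirky hack" I have in mind would look roughly like the sketch below: since a VM's config file lives under /etc/pve/nodes/&lt;node&gt;/qemu-server/, moving it into a surviving node's directory makes the VM reappear there in a stopped state. This is only a hypothetical sketch, not something I've hardened; the `node_is_online` check (parsing `pvecm nodes` output) and the directory layout are assumptions to adapt, and `PVE_NODES` is a made-up override variable for dry-running outside the cluster.

```shell
#!/bin/bash
# Hypothetical sketch: "adopt" VM configs from a dead node so the VMs
# show up as stopped on a surviving node instead of with a '?' state.
# PVE_NODES defaults to the pmxcfs mount; override it for testing.
PVE_NODES="${PVE_NODES:-/etc/pve/nodes}"

# node_is_online NAME -- assumption: treat presence in `pvecm nodes`
# output as "online"; a real script would want a stricter quorum check.
node_is_online() {
    pvecm nodes 2>/dev/null | grep -qw "$1"
}

# adopt_configs DEAD LIVE -- move every VM config from the dead node's
# qemu-server directory to the live node's, one file at a time.
adopt_configs() {
    local dead="$1" live="$2" conf
    for conf in "$PVE_NODES/$dead/qemu-server/"*.conf; do
        [ -e "$conf" ] || continue
        mv "$conf" "$PVE_NODES/$live/qemu-server/"
    done
}
```

A cron job or systemd timer on each node could call `node_is_online` for its peers and run `adopt_configs` when one drops out, but that is exactly the kind of fragile polling loop I'd rather avoid if a built-in option exists.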
Thanks in advance.