Manual VM Migration in Proxmox HA Setup with Failed ZFS

kieselbert

Member
Jul 6, 2021
I have a setup with 3 nodes. Each node has, among other things, a ZFS pool with the same name. HA is enabled for the VMs, and replication jobs keep the contents identical across the pools. So far, so good.
Now, on one node (where the VM was also running), the ZFS pool has failed, unfortunately completely. I think the controller is done for. What does HA do then? It tries to migrate the VM to another node, and of course it fails.
Now I'm wondering what benefit HA even brings me. But that's not the question I want to ask. Instead, I would like to know how I can manually move the VM to another node. The same pool exists there, the replica is 2 hours old (completely sufficient), and I just want to start the VM up again.
What is the "normal" way to do this? It must be provided for in such a setup.
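For reference, the usual manual recovery path in Proxmox is to reassign the VM to a surviving node by moving its config file within the cluster filesystem (pmxcfs) and then starting it from the last replicated disk state. A hedged sketch; VMID 100 and the node names pve1 (failed) and pve3 (target) are placeholders, not taken from this thread:

```shell
# Run on a surviving node. Substitute your own VMID and node names.

# 0. Make absolutely sure the failed node is down/fenced before taking
#    over its VM -- a VM must never run on two nodes at once.

# 1. If HA still manages the resource, remove it from HA first so the
#    HA stack does not fight the manual move:
ha-manager remove vm:100

# 2. Move the VM config inside the cluster filesystem; this changes
#    which node "owns" the VM:
mv /etc/pve/nodes/pve1/qemu-server/100.conf /etc/pve/nodes/pve3/qemu-server/

# 3. Start the VM from the last replica (here ~2 hours old):
qm start 100
```

Afterwards the VM can be re-added to HA with `ha-manager add vm:100` once the failed node's storage is repaired.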

Proxmox 7.4-17
 
If you have a replication job up and running and HA enabled for this VM, then your VM should automatically start on the other node.
It's important that the pool names are identical on both systems.
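To verify that replication and HA are actually healthy, the standard CLI checks look like this (a sketch run on any cluster node):

```shell
# Show all replication jobs and when each VM was last synced:
pvesr status

# Show the HA manager's view of the cluster, nodes, and resources:
ha-manager status

# Show the HA configuration of all managed resources:
ha-manager config
```

If `pvesr status` shows a recent sync and `ha-manager status` lists the VM as an HA resource, the automatic failover should have been possible.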

There's something you have done wrong, as it should work as I described. If you want some help, post your pvereport (if it doesn't contain information that you don't want to share online with everybody).

Can you post the error that appears when the VM tries to start on a different node? There must be some error in the logs.
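The relevant errors usually show up in the journals of the two HA services on the node where the start was attempted, for example (assuming systemd journal access on a Proxmox VE node):

```shell
# Local resource manager -- executes starts/stops/migrations on this node:
journalctl -u pve-ha-lrm --since "2 hours ago"

# Cluster resource manager -- decides where HA resources should run:
journalctl -u pve-ha-crm --since "2 hours ago"
```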
 
