I normally run a cluster with three nodes (quorum is given). The VMs are synchronized with DRBD or sit on a NAS share (non-critical VMs). HA groups aren't in use at the moment (I'm still in a trial-and-learning phase with that functionality).
Yesterday node2 crashed because of a driver issue. It was clear that I would need some time to restore it, so I thought I should be able to migrate (offline) the affected VMs to node1 or node3. But it failed: PVE told me there was no connection to node2! That is understandable, but I know the path "/etc/pve" is available and kept synchronized on all nodes.
As a consequence I logged into the console of node3 and moved the VMs' conf files from node2 to node3 manually. Afterwards I was able to start the VMs on node3. Even after node2 was restored there were no issues, because node2 received the latest PVE configs via corosync, and the manually "migrated" VMs correctly no longer started there.
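For reference, the manual step boils down to relocating the config file inside the shared pmxcfs tree. A minimal sketch, assuming a QEMU VM (node names, VMID 100, and the helper name are examples, not my actual IDs; PVE_ROOT defaults to /etc/pve and exists only so the move can be dry-run elsewhere):

```shell
# move_vm_conf: hypothetical helper to relocate a VM config from a dead
# node's directory to a living node's directory in the pmxcfs tree.
# Usage: move_vm_conf <vmid> <src_node> <dst_node>
move_vm_conf() {
    vmid="$1"; src="$2"; dst="$3"
    root="${PVE_ROOT:-/etc/pve}"
    src_conf="$root/nodes/$src/qemu-server/$vmid.conf"
    dst_dir="$root/nodes/$dst/qemu-server"
    # Only move if the config exists on the source node and the target
    # node doesn't already own a VM with the same ID.
    if [ -f "$src_conf" ] && [ ! -e "$dst_dir/$vmid.conf" ]; then
        mv "$src_conf" "$dst_dir/"
    fi
}

# Example: claim VM 100 from crashed node2 for node3.
# move_vm_conf 100 node2 node3
```

After the move, the VM shows up under the target node in the GUI and can be started there, provided its disks are on shared (non-local) storage.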
Why is PVE 4.3 not able to "migrate" or move VMs from a "dead" node to another, living node? The only precondition is non-local storage.