Issue Statement:
I am trying to verify my Proxmox HA configuration by rebooting one of the cluster nodes and expecting the VM to fail over to the surviving node.
Outcome: the VM freezes on its home node during the reboot and only comes back online once its home node is up again.
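For context, a minimal sketch of how the VM is registered with and checked against the Proxmox-native HA stack (the HA group name 'ha-group1' is only a placeholder, other options left at defaults):
# add VM 104 as an HA resource and request it to be kept running
ha-manager add vm:104 --group ha-group1 --state started
# show the configured HA resources and their current state
ha-manager config
ha-manager status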
Error:
a) VM (ID = 104) fails to start on the failover node (HPE-DL365-14-180).
b) "can't activate LV '/dev/vm5_boot_vg/vm-104-disk-0': Cannot process volume group vm5_boot_vg"
root@HPE-DL365-14-176:~# pveversion
pve-manager/9.0.3/025864202ebb6109 (running kernel: 6.14.8-2-pve)
root@HPE-DL365-14-176:~# crm_node -l
1 HPE-DL365-14-176 member
2 HPE-DL365-14-180 member
root@HPE-DL365-14-176:~#
root@HPE-DL365-14-176:~# pvecm status
Cluster information
-------------------
Name: procluster
Config Version: 13
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Wed Oct 15 20:15:27 2025
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.1e7
Quorate: Yes
Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate Qdevice
Membership information
----------------------
Nodeid Votes Qdevice Name
0x00000001 1 A,V,NMW 10.141.14.176 (local)
0x00000002 1 A,V,NMW 10.141.14.180
0x00000000 1 Qdevice
root@HPE-DL365-14-176:~#
Before:
root@HPE-DL365-14-176:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- <446.07g 16.00g
vg_VM1_176_boot 1 1 0 wz--n- <160.00g <80.00g
vg_VM3_180_boot 1 1 0 wz--n- <160.00g <110.00g
vm5_boot_vg 1 1 0 wz--n- <160.00g <120.00g
root@HPE-DL365-14-176:~#
root@HPE-DL365-14-176:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-a-tz-- <319.55g 0.00 0.52
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 8.00g
vm-100-disk-0 vg_VM1_176_boot -wi------- 80.00g
vm-101-disk-0 vg_VM3_180_boot -wi------- 50.00g
vm-104-disk-0 vm5_boot_vg -wi-ao---- 40.00g
root@HPE-DL365-14-176:/var/log# reboot
root@HPE-DL365-14-176:/var/log# Connection to 10.141.14.176 closed by remote host.
Connection to 10.141.14.176 closed.
[root@scs000863528 roce_tcp_tg]# date
Wed Oct 15 18:43:31 IST 2025
[root@scs000863528 roce_tcp_tg]#
After:
root@HPE-DL365-14-180:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- <446.07g 16.00g
vg_VM1_176_boot 1 1 0 wz--n- <160.00g <80.00g
vg_VM3_180_boot 1 1 0 wz--n- <160.00g <110.00g
vm5_boot_vg 1 1 0 wz--n- <160.00g <120.00g
root@HPE-DL365-14-180:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-a-tz-- <319.55g 0.00 0.52
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 8.00g
vm-100-disk-0 vg_VM1_176_boot -wi------- 80.00g
vm-101-disk-0 vg_VM3_180_boot -wi------- 50.00g
vm-104-disk-0 vm5_boot_vg -wi------- 40.00g
root@HPE-DL365-14-180:~#
root@HPE-DL365-14-180:~# journalctl | grep 104
Oct 15 17:57:44 HPE-DL365-14-180 qm[574594]: start VM 104: UPID:HPE-DL365-14-180:0008C482:007BC0B7:68EF9340:qmstart:104:root@pam:
Oct 15 17:57:44 HPE-DL365-14-180 qm[574593]: <root@pam> starting task UPID:HPE-DL365-14-180:0008C482:007BC0B7:68EF9340:qmstart:104:root@pam:
Oct 15 17:57:44 HPE-DL365-14-180 qm[574594]: can't activate LV '/dev/vm5_boot_vg/vm-104-disk-0': Cannot process volume group vm5_boot_vg
Oct 15 17:57:44 HPE-DL365-14-180 qm[574593]: <root@pam> end task UPID:HPE-DL365-14-180:0008C482:007BC0B7:68EF9340:qmstart:104:root@pam: can't activate LV '/dev/vm5_boot_vg/vm-104-disk-0': Cannot process volume group vm5_boot_vg
Oct 15 17:57:45 HPE-DL365-14-180 qm[574607]: <root@pam> starting task UPID:HPE-DL365-14-180:0008C497:007BC0FC:68EF9341:qmstop:104:root@pam:
Oct 15 17:57:45 HPE-DL365-14-180 qm[574615]: stop VM 104: UPID:HPE-DL365-14-180:0008C497:007BC0FC:68EF9341:qmstop:104:root@pam:
Oct 15 17:57:45 HPE-DL365-14-180 qm[574607]: <root@pam> end task UPID:HPE-DL365-14-180:0008C497:007BC0FC:68EF9341:qmstop:104:root@pam: OK
Oct 15 18:01:25 HPE-DL365-14-180 qm[582915]: <root@pam> starting task UPID:HPE-DL365-14-180:0008E51F:007C1711:68EF941D:qmstart:104:root@pam:
Oct 15 18:01:25 HPE-DL365-14-180 qm[582943]: start VM 104: UPID:HPE-DL365-14-180:0008E51F:007C1711:68EF941D:qmstart:104:root@pam:
Oct 15 18:01:26 HPE-DL365-14-180 systemd[1]: Started 104.scope.
Oct 15 18:01:26 HPE-DL365-14-180 kernel: tap104i0: entered promiscuous mode
Oct 15 18:01:26 HPE-DL365-14-180 kernel: vmbr0: port 2(fwpr104p0) entered blocking state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: vmbr0: port 2(fwpr104p0) entered disabled state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwpr104p0: entered allmulticast mode
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwpr104p0: entered promiscuous mode
Oct 15 18:01:26 HPE-DL365-14-180 kernel: vmbr0: port 2(fwpr104p0) entered blocking state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: vmbr0: port 2(fwpr104p0) entered forwarding state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwbr104i0: port 1(fwln104i0) entered blocking state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwbr104i0: port 1(fwln104i0) entered disabled state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwln104i0: entered allmulticast mode
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwln104i0: entered promiscuous mode
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwbr104i0: port 1(fwln104i0) entered blocking state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwbr104i0: port 1(fwln104i0) entered forwarding state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwbr104i0: port 2(tap104i0) entered blocking state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwbr104i0: port 2(tap104i0) entered disabled state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: tap104i0: entered allmulticast mode
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwbr104i0: port 2(tap104i0) entered blocking state
Oct 15 18:01:26 HPE-DL365-14-180 kernel: fwbr104i0: port 2(tap104i0) entered forwarding state
Oct 15 18:01:26 HPE-DL365-14-180 qm[582943]: VM 104 started with PID 582974.
Oct 15 18:01:26 HPE-DL365-14-180 systemd[1]: Started pve-dbus-vmstate@104.service - PVE DBus VMState Helper (VM 104).
Oct 15 18:01:26 HPE-DL365-14-180 qm[582915]: <root@pam> end task UPID:HPE-DL365-14-180:0008E51F:007C1711:68EF941D:qmstart:104:root@pam: WARNINGS: 1
Oct 15 18:01:26 HPE-DL365-14-180 dbus-vmstate[583165]: pve-vmstate-104 listening on :1.79
Oct 15 18:01:39 HPE-DL365-14-180 pvesh[583278]: stopping dbus-vmstate helper for VM 104
Oct 15 18:01:39 HPE-DL365-14-180 systemd[1]: pve-dbus-vmstate@104.service: Deactivated successfully.
Oct 15 18:05:34 HPE-DL365-14-180 pvedaemon[576349]: <root@pam> starting task UPID:HPE-DL365-14-180:0008ED38:007C7840:68EF9516:qmigrate:104:root@pam:
Oct 15 18:05:34 HPE-DL365-14-180 systemd[1]: Started pve-dbus-vmstate@104.service - PVE DBus VMState Helper (VM 104).
Oct 15 18:05:34 HPE-DL365-14-180 dbus-vmstate[585018]: pve-vmstate-104 listening on :1.91
Oct 15 18:05:47 HPE-DL365-14-180 systemd[1]: pve-dbus-vmstate@104.service: Deactivated successfully.
Oct 15 18:05:49 HPE-DL365-14-180 kernel: tap104i0: left allmulticast mode
Oct 15 18:05:49 HPE-DL365-14-180 kernel: fwbr104i0: port 2(tap104i0) entered disabled state
Oct 15 18:05:49 HPE-DL365-14-180 kernel: fwbr104i0: port 1(fwln104i0) entered disabled state
Oct 15 18:05:49 HPE-DL365-14-180 kernel: vmbr0: port 2(fwpr104p0) entered disabled state
Oct 15 18:05:49 HPE-DL365-14-180 kernel: fwln104i0 (unregistering): left allmulticast mode
Oct 15 18:05:49 HPE-DL365-14-180 kernel: fwln104i0 (unregistering): left promiscuous mode
Oct 15 18:05:49 HPE-DL365-14-180 kernel: fwbr104i0: port 1(fwln104i0) entered disabled state
Oct 15 18:05:49 HPE-DL365-14-180 kernel: fwpr104p0 (unregistering): left allmulticast mode
Oct 15 18:05:49 HPE-DL365-14-180 kernel: fwpr104p0 (unregistering): left promiscuous mode
Oct 15 18:05:49 HPE-DL365-14-180 kernel: vmbr0: port 2(fwpr104p0) entered disabled state
Oct 15 18:05:50 HPE-DL365-14-180 systemd[1]: 104.scope: Deactivated successfully.
Oct 15 18:05:50 HPE-DL365-14-180 systemd[1]: 104.scope: Consumed 3.499s CPU time, 1.4G memory peak.
Oct 15 18:05:50 HPE-DL365-14-180 pvedaemon[576349]: <root@pam> end task UPID:HPE-DL365-14-180:0008ED38:007C7840:68EF9516:qmigrate:104:root@pam: OK
root@HPE-DL365-14-180:~#
root@HPE-DL365-14-180:~# crm_verify --live-check -V
error: Resource start-up disabled since no STONITH resources have been defined
error: Either configure some or disable STONITH with the stonith-enabled option
error: NOTE: Clusters with shared data need STONITH to ensure data integrity
warning: Node HPE-DL365-14-178 is unclean but cannot be fenced
error: CIB did not pass schema validation
Configuration invalid (with errors)
root@HPE-DL365-14-180:~#
root@HPE-DL365-14-176:~# crm_verify --live-check -V
error: Resource start-up disabled since no STONITH resources have been defined
error: Either configure some or disable STONITH with the stonith-enabled option
error: NOTE: Clusters with shared data need STONITH to ensure data integrity
error: CIB did not pass schema validation
Configuration invalid (with errors)
root@HPE-DL365-14-176:~#
Query:
Online VM migration works fine, but a node reboot fails to place the VM on the surviving node.
I am not sure if I missed anything in the HA configuration. One thing that looks strange to me is the Pacemaker STONITH references. Do we need to configure STONITH?
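In case it matters, these are the services and logs I would check for the Proxmox-native HA stack after the reboot (standard service names on a PVE install; the --since timestamp just matches the test window above):
systemctl status pve-ha-crm pve-ha-lrm watchdog-mux
journalctl -u pve-ha-crm -u pve-ha-lrm --since "2025-10-15 17:50"
ha-manager status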