Thank you!!
Stopping pve-ha-crm.service and editing the manager_status file solved the problem.
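For reference, these are roughly the steps (adapt them carefully and back up the file first; the manager status file lives in /etc/pve/ha/ on my nodes):

# stop the CRM so it releases control of the manager status
systemctl stop pve-ha-crm.service
# back up the file, then change srv1's entry under "node_status"
# from "fence" to "online"
cp /etc/pve/ha/manager_status /root/manager_status.bak
nano /etc/pve/ha/manager_status
# start the CRM again and verify the node state
systemctl start pve-ha-crm.service
ha-manager status -v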
The status changed from unknown to online once I created an HA group and added a service.
Thanks for your help. I hope this thread helps others too.
Regards,
johann
Thanks for your reply.
I want to try the approach of stopping pve-ha-crm.service first.
Removing all HA services and groups should be fine, and this has no effect on running VMs and CTs, right?
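If I read the documentation correctly, ha-manager remove only deletes the HA configuration entry and never touches the guest itself, e.g.:

# remove a VM from HA management; the VM itself keeps running
ha-manager remove vm:105
# later, add it back so HA manages it again
ha-manager add vm:105 --state started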
Hi,
I recently restarted the nodes for updates. srv1 has now been stuck in the "fence" state for about two weeks.
HA migration etc. works on srv2 and srv3, which is why all VMs are currently on those nodes.
Full output of ha-manager status -v:
root@srv2:~# ha-manager status -v
quorum OK
master srv2...
When I start a VM migration with HA, the log says:
Aug 24 20:41:06 srv2 pve-ha-crm[2634]: crm command error - node not online: migrate vm:105 srv1
I don't understand why. Quorum is okay and the servers can ping each other.
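For comparison, these are the checks I ran:

# quorum and cluster membership
pvecm status
# state of the local resource manager on the fenced node
systemctl status pve-ha-lrm.service
# recent CRM/LRM log entries
journalctl -u pve-ha-crm -u pve-ha-lrm --since "1 hour ago"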
Sorry for posting so much, but I don't know what to do.
I have found out some more things:
ha-manager status -v shows that node 1 is stuck permanently in "fence" mode:
"srv1" : {
"mode" : "active",
"results" : {
"25njyxxxxxxxDPIZFYQPw" : {
"exit_code" : 0,
"sid" : "vm:127"...
Hi there,
Since today I have had an issue with HA online migration. If I try to migrate a running VM, the following output appears:
Requesting HA migration for VM 127 to node srv1
TASK OK
After that, nothing happens.
If I shut down the VM, I can migrate it with HA. And if I remove the HA entry for...
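In case it helps someone debug the same issue, I am now watching both HA daemons while triggering the migration:

# does the CRM ever pick up the queued migration request?
journalctl -f -u pve-ha-crm
# and what does the LRM on the source node do with it?
journalctl -f -u pve-ha-lrm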
Hello dear community,
about 4 weeks ago I switched from iSCSI (on a NAS) to Ceph (RBD). The migration went without any problems.
My cluster:
3 nodes
2x 1TB SATA SSD per node
10 Gbit/s for the Ceph network
Today I moved a 300GB VM disk to the Ceph pool from iSCSI (on a NAS)...
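For context, such a disk move can be done with qm move_disk (the VM ID, disk name, and storage ID below are placeholders, not values from this setup):

# move disk scsi0 of VM 100 to the Ceph RBD storage and drop the old copy
# ("ceph-pool" stands for whatever storage ID is defined in storage.cfg)
qm move_disk 100 scsi0 ceph-pool --delete 1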