I migrated all guests to the remaining node and removed the empty node from the cluster. I then tried to add a new PVE server to the cluster. Unfortunately, I had not deleted the removed node's entry from the known_hosts file on the remaining node, and since the new node has the same hostname as the removed one, the join was not successful.
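(For anyone hitting the same thing: I believe the stale key could have been cleared with ssh-keygen -R; node2 and the IP below are just placeholders for the reused hostname/address. Proxmox also keeps a cluster-wide known_hosts under /etc/pve/priv/.)

# remove the stale host key for the reused name and IP from root's known_hosts
ssh-keygen -R node2
ssh-keygen -R 192.0.2.12
# the cluster-wide file can be cleaned the same way
ssh-keygen -f /etc/pve/priv/known_hosts -R node2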
The situation now is that:
- the datacenter view on the remaining node's web interface shows the new node name, but it is red
- the new node is no longer accessible via the web interface, but I can still log in via SSH
Is this a problem of an unsuccessful certificate exchange? Or did it fail because I should have issued pvecm expected 1 first? Although I am prepared to reinstall everything and restore a bunch of VMs, I would love to fix this instead, as a reinstall would take a lot more time.
Thanks for any help on how to restore this cluster. I will provide any information needed to help fix this. By the way, the two-node cluster has a third server with a QDevice installed as the third vote for quorum.
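For reference, the commands in question, as I understand them (pvecm status shows membership and quorum state; pvecm expected 1 temporarily lowers the expected vote count so a lone node becomes quorate and /etc/pve is writable again):

# show cluster membership, vote counts and quorum state
pvecm status
# temporarily expect only one vote (use with care, only on the surviving node)
pvecm expected 1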
Logging in via SSH and looking at pvecm status, I saw that the new node had somehow still created a cluster config of its own, so I went through a guide to remove the cluster config altogether and rebooted. Web interface access is now restored.
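The steps were roughly those from the "Separate a Node Without Reinstalling" section of the Proxmox docs (double-check paths before deleting anything):

# stop the cluster stack and start pmxcfs in local mode
systemctl stop pve-cluster corosync
pmxcfs -l
# remove the corosync configuration from pmxcfs and from disk
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*
# stop the local pmxcfs instance and restart the normal stack
killall pmxcfs
systemctl start pve-cluster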
I now want to remove the non-functional node2 from the web interface of the remaining node (the one with all the VMs on it), so that I can then add the new node to that cluster - the new node will have the same IP and name as the removed node.
Realized that the removed node1 was still showing in the web interface because it was still listed in /etc/corosync/corosync.conf. Removing it from there and restarting corosync and pvestatd made it disappear from the web interface.
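For completeness, what this amounted to (node1 is the removed node's name from above; note there is also a cluster-wide copy at /etc/pve/corosync.conf - I edited the local one, not 100% sure that is the canonical place):

# delete the removed node's entry from the nodelist section of the config, then:
systemctl restart corosync
systemctl restart pvestatd
# optionally drop the stale node directory so it also vanishes from the GUI
rm -rf /etc/pve/nodes/node1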