Hello,
I had switch trouble this weekend; after swapping out the network switch everything was fine again.
But currently one node is out of the cluster:
I tried to add it back to the cluster, but it has running VMs, so that's not possible.
Hope you guys can point me to the right solution.
Code:
Cluster information
-------------------
Name: prox-cluster01
Config Version: 14
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Mon Feb 15 16:49:58 2021
Quorum provider: corosync_votequorum
Nodes: 5
Node ID: 0x00000001
Ring ID: 1.1676
Quorate: Yes
Votequorum information
----------------------
Expected votes: 6
Highest expected: 6
Total votes: 5
Quorum: 4
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.15.91 (local)
0x00000002 1 192.168.15.92
0x00000003 1 192.168.15.93
0x00000004 1 192.168.15.94
0x00000007 1 192.168.15.96
I tried to add it back to the cluster after running:
Code:
systemctl stop pve-cluster      # stop the cluster filesystem service (pmxcfs)
systemctl stop corosync         # stop cluster communication
pmxcfs -l                       # start pmxcfs in local mode
rm /etc/pve/corosync.conf       # remove the cluster config from /etc/pve
rm -r /etc/corosync/*           # remove the local corosync configuration
killall pmxcfs                  # stop the local-mode pmxcfs again
systemctl start pve-cluster     # start pve-cluster again (node is now standalone)
But I got this error:
Code:
root@prox-s05:~# pvecm add 192.168.15.91
Please enter superuser (root) password for '192.168.15.91': ****************
detected the following error(s):
* this host already contains virtual guests
Check if node may join a cluster failed!
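I assume the join check is complaining about the guest configs that are still registered locally on this node; something like this should show them (just how I would check it, not necessarily exactly what pvecm looks at):
Code:
# list guest configs still registered locally on prox-s05
ls /etc/pve/nodes/prox-s05/qemu-server/
ls /etc/pve/nodes/prox-s05/lxc/
# or simply:
qm list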
I tried to shut down the VM and create its .conf file on another working node, but I could not create the conf file because it supposedly already exists, even though it isn't visible there.
So basically I want to move the VMs to another node so I can re-add this node.
Ceph is running and working on the lost node.
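Since the disks are on Ceph, my guess is that the cleanest way is to move the guest's config file to another node's directory from a node that is still in the quorate cluster, after making sure the guest is stopped on prox-s05. A rough sketch of what I mean (100 is a placeholder VM ID and prox-s01 a placeholder target node):
Code:
# run on a node that is still part of the quorate cluster
# 100 = placeholder VM ID, prox-s01 = placeholder target node
mv /etc/pve/nodes/prox-s05/qemu-server/100.conf \
   /etc/pve/nodes/prox-s01/qemu-server/100.conf
Does that sound like the right approach before I re-add the node?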