Removing Slave from a Cluster

Petrus4

Feb 18, 2009
I just removed a slave (node 2) from the cluster by running this command on the master:

Code:
pveca -d 2
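
On the master itself everything looks fine afterwards; as far as I can tell, listing the cluster confirms the node is gone (node 2 no longer shows up):

Code:
pveca -l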

When I now go to the slave's web GUI, it does not show any virtual machines and gives an error:

Unable to load local cluster table

Is there a command I must execute on the slave to make this work? I could not find info on this in the documentation.

thanks.
 
I guess you have to restart the 'pvedaemon':

Code:
/etc/init.d/pvedaemon restart

Does that help?

Thanks, Dietmar.

I got your response after I had already tried a few things (see below).

I did a:

Code:
rm /etc/pve/cluster.cfg

and stopped the ClusterSync and ClusterTunnel services.
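
In full, what I ran on the slave was roughly this (the init script names behind ClusterSync and ClusterTunnel are my guess, so check /etc/init.d on your box):

Code:
rm /etc/pve/cluster.cfg      # drop the stale cluster table
/etc/init.d/pvemirror stop   # ClusterSync (script name is my guess)
/etc/init.d/pvetunnel stop   # ClusterTunnel (script name is my guess)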

I probably only needed to stop these services.

Now all seems to work again.

It might be an idea to add this to the documentation, or to provide a command to run on the slave after it has been removed by the master.
 
Perhaps it is possible to add a button for each node in the cluster section of the web GUI, so that you can remove that node?

BTW: I just built my cluster from an AMD box and an Intel Atom... live migration rocks, in both directions, and online migration works too... great work!
 
Deleting a node is quite an unusual case; it is only there to remove a damaged node from the cluster. I know the current code makes it quite easy to add/remove nodes, but this will no longer be the case when we use corosync for cluster communication.

- Dietmar
 
Deleting a node is quite an unusual case; it is only there to remove a damaged node from the cluster. I know the current code makes it quite easy to add/remove nodes, but this will no longer be the case when we use corosync for cluster communication.

- Dietmar

I don't know corosync, but I will have a look at it. Still, what if a node is beyond repair? Then you always get a warning about this node, because syncing it is a problem. It seems to me that there should always be a way to remove a node from a cluster.

Thanks to the current possibilities I was able to test how live migration works; after this I had to free the second machine again for its "normal" use.

BTW: will this change to corosync give us a problem when upgrading to the next version of Proxmox?
 
But what if a node is beyond repair? Then you always get a warning about this node, because syncing it is a problem. It seems to me that there should always be a way to remove a node from a cluster.

That case already works. The problem is when you remove a functional node (and then do not remove that node from the network).
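
I.e. even if the node is dead or unreachable, running something like this on the master removes the entry:

Code:
pveca -d <nodeid>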

BTW: will this change to corosync give us a problem when upgrading to the next version of Proxmox?

We will provide an upgrade path.