Safely remove node from cluster without deleting containers?

Due to instability issues we're having with a Proxmox 3.1 cluster, we need to remove several servers from the cluster without migrating the OpenVZ containers off of them. This thread is not about the instability of the cluster; if you want to read about that, go here: http://forum.proxmox.com/threads/17663-Proxmox-3-1-kernel-crash-that-takes-other-servers-offline-too

I know the docs say that you have to migrate everything off the nodes first, but I can't think why that would be necessary since I'm only running OpenVZ containers with no HA features, etc. They really don't need to be part of a cluster, and I just want each of them to run as a standalone machine.

Will running "pvecm delnode [node]" attempt to delete the containers on the node? If so, can't I just cut off the master's access to the node by removing the master's key from /root/.ssh/authorized_keys to prevent that from happening?

I suppose after I've removed it from the master, there'd be some work to turn off clustering on the node... perhaps this:

/etc/init.d/pve-cluster stop
/etc/init.d/cman stop
umount /etc/pve

...and adjust startup scripts so these things are no longer running. Is there anything else that would need to be turned off?
 
A quick follow up here... I was successfully able to get this done, but there was definitely more to it than what I originally thought. Here's roughly what I did:

Run from Node:

Remove the master's access from the node:

mv /root/.ssh/authorized_keys /root/.ssh/authorized_keys_proxmox
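
If you want to confirm the lockout took effect, a quick test from the master should now prompt for a password or be refused (NodeName below is just a placeholder for the node's hostname or IP):

# run this on the master; it should no longer get in without a password
ssh root@NodeName hostname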

# save contents of openvz conf dir...
mkdir /etc/vz/newconf
cp -a /etc/vz/conf/* /etc/vz/newconf/
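
An optional sanity check before going any further, just to be sure the backup copy is complete (should produce no output):

# compare the live configs with the backup copy
diff -r /etc/vz/conf/ /etc/vz/newconf/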

# stop cluster services...
/etc/init.d/pve-cluster stop
/etc/init.d/cman stop

# disable cluster services at boot...
update-rc.d -f pve-cluster remove
update-rc.d -f cman remove

# drop fuse file system...
fusermount -u /etc/pve
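
At this point /etc/pve should be back to being a plain, empty directory. A quick way to confirm the fuse mount is really gone (should print nothing):

mount | grep /etc/pve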

# delete symlink that points to /etc/pve/openvz...
rm /etc/vz/conf

# restore openvz conf paths (lost when dropping /etc/pve)...
cp -a /etc/vz/newconf /etc/vz/conf
ln -s /etc/vz/conf /etc/pve/openvz
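
After this, the containers should still be visible and manageable with the plain OpenVZ tools. Something like the following is a reasonable check (CTID is a placeholder for one of your container IDs):

# list all containers and try starting one manually
vzlist -a
vzctl start CTID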

Run from Cluster Master:

# confirm node name you want to remove:
pvecm nodes

# remove node from cluster:
pvecm delnode NodeName

# remove from interface:
mkdir /root/removed_pve
mv /etc/pve/nodes/NodeName /root/removed_pve/
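
To double-check on the master that the node is really gone:

# the removed node should no longer be listed, and the member count should have dropped
pvecm nodes
pvecm status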

Warning: Don't do this unless you understand each step of the process and don't mind putting the node in a state where you won't easily be able to re-add it to a cluster. Also note that after I did this, the containers on the node do not start up automatically, even though their config files say they should.
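
As a possible workaround for the autostart issue, something along these lines in /etc/rc.local (above the "exit 0" line) should start every container whose config has onboot enabled. This is just a sketch I'd test carefully first: it assumes the stock OpenVZ config format (ONBOOT="yes") and that the vz service itself still comes up at boot.

# start all containers marked ONBOOT="yes"
for conf in /etc/vz/conf/*.conf; do
    ctid=$(basename "$conf" .conf)
    if grep -q '^ONBOOT="yes"' "$conf"; then
        vzctl start "$ctid"
    fi
done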
 
