Replace a node


New Member
Dec 3, 2018

I have made a PoC with 3 nodes, each with 3 OSD HDDs of 3 TB (so 9 OSDs, replication size 3 across hosts, min_size 1). I tested with Filestore (journal on the OSD), then Bluestore. Read performance is too poor with Bluestore (as mentioned in the documentation, the cluster is too small, especially with HDDs, for good performance).

So I will revert to Filestore and add an SSD that will hold the PVE host plus 3 partitions, one for the journal of each OSD (as explained here).

My question concerns the most efficient way to wipe a node:

A. Just a backup
As explained here:
- create the following backups:
  • /root/pve-cluster-backup.tar.gz
  • /root/ssh-backup.tar.gz
  • /root/corosync-backup.tar.gz
  • /root/hosts
  • /root/interfaces
- shut down the node
- reinstall PVE
- create all the partitions needed for journal
- reboot and copy the directories back as mentioned in the link above
- once the node is back, destroy all its OSDs, recreate them with Filestore and a journal, and wait for the cluster to reach a healthy state before doing the same on the next node.
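For reference, the backup step above could be sketched roughly as follows. The archive names match the list above; the source paths are the usual PVE locations as I understand them (pmxcfs database, SSH keys, corosync config), so treat this as an untested sketch rather than the official procedure:

```shell
# Rough sketch of the backup step, assuming standard PVE file locations.
# Run as root on the node that will be reinstalled.

# pmxcfs database (the cluster configuration filesystem backend)
tar -czf /root/pve-cluster-backup.tar.gz /var/lib/pve-cluster

# SSH host keys and root's keys (so the node keeps its identity)
tar -czf /root/ssh-backup.tar.gz /root/.ssh /etc/ssh

# corosync configuration
tar -czf /root/corosync-backup.tar.gz /etc/corosync

# plain copies of the hosts file and network configuration
cp /etc/hosts /root/hosts
cp /etc/network/interfaces /root/interfaces
```

The archives should of course be copied off the node before it is shut down and reinstalled.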

Questions :
1. Is this still accurate for version 5? (The procedure seems to have disappeared from the documentation for the new version.)
2. If yes, do I need to mark the OSDs out, or can I just stop them, destroy them, and recreate them with Filestore?
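To make question 2 concrete, the sequence I have in mind for one OSD looks something like this (OSD id 0 is just an example, and the flag names are as I recall them from the PVE 5 CLI; `pveceph help createosd` would confirm them):

```shell
# Example for a single OSD (id 0); repeat for each OSD on the node.
ceph osd out 0              # optionally mark it out and let data rebalance first
systemctl stop ceph-osd@0   # stop the OSD daemon
pveceph destroyosd 0        # remove the OSD from the cluster (PVE 5 CLI)

# Recreate with Filestore and a journal partition on the SSD, e.g.
# /dev/sdb as data disk and /dev/sda4 as the journal partition:
pveceph createosd /dev/sdb --journal_dev /dev/sda4 --bluestore 0
```

The open point is whether the `ceph osd out` step is needed at all, or whether stopping and destroying directly is safe given the other replicas.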

B. Or remove the node from the cluster
- destroy all the OSDs of the node (at least one copy is on another node, so it should not hurt)
- switch off the server
- remove the node from the PVE cluster (as mentioned here)
- reinstall it
- add the node back in the cluster
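The cluster-membership part of option B would, as far as I know, boil down to these commands (`node3` and the IP are placeholders):

```shell
# On one of the remaining cluster nodes, after the old node is powered off:
pvecm delnode node3

# After reinstalling PVE, run on the fresh node to rejoin the cluster,
# pointing at any existing cluster member:
pvecm add 192.0.2.10
```

This handles the PVE cluster side only; the Ceph mon question below is separate.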

If I do so, I assume the Ceph node/monitor entry for the reinstalled node still exists. I think it would be complicated to remove the Ceph node and run a two-node Ceph cluster (and it would be hard to keep the 3 copies on 2 nodes), so I would like to keep the monitor in the cluster configuration.

Questions :
3. If I do that, will I be able to reinstall the Ceph monitor on the reinstalled node? Since the old monitor still exists, I am not sure "pveceph createmon" will let me; it will probably throw an error saying that a mon with the same IP already exists.
4. If so, would it be possible to declare that the Ceph mon is back (and not a new mon)?
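My guess for questions 3 and 4, which I would appreciate confirmation of, is that the stale monitor entry has to be removed from the monmap before recreating it (the mon id is usually the node name; `node3` is a placeholder):

```shell
# If createmon refuses because a mon with the same IP already exists,
# remove the stale entry from the monmap first (run on a healthy node):
ceph mon remove node3

# Then recreate the monitor on the reinstalled node:
pveceph createmon
```

In other words, the mon would come back as a technically new monitor with the same name and IP, rather than being "declared back".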

And finally:
5. Which would be the more efficient way to do it, A or B?

Thanks for your hints :)

