I have a home lab cluster with 3 PVE nodes running Proxmox 8.3.3 with Ceph 19.2.0. One of the nodes blew out its OS drive and now it won't boot. I've ordered new drives (WD Red) and want to start by rebuilding the lost node. I'd like to build it back "in place" - i.e. same name, IP, etc. - instead of calling it pve-04 and having a permanent skip in my naming scheme.
Before I rebuild, is there anything I should do on the surviving systems? Of course, Ceph is angry and shows the Monitor, Manager, and Metadata Server services down - there's no quorum. I'm OK with wiping the Ceph drive in the orphaned system as part of the rebuild process and letting Ceph re-replicate the missing blocks.
Is there anything to do on the Proxmox side? Is anything special required to remove the dead node from the cluster?
Once I get the orphaned system rebuilt, what would be the process for rebuilding the other nodes onto the WD Red drives? Is there something I should do to pull them from the cluster more gracefully before shutting them down?
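For the removal part, here's my rough understanding of the steps (from a surviving node) - please correct me if I've got this wrong or am missing something; the node name and OSD ID below are just placeholders for my setup:

```shell
# Drop the dead node from the Proxmox cluster (run on a healthy node)
pvecm delnode pve-02

# Remove its Ceph monitor and let the remaining mons re-form quorum
ceph mon remove pve-02

# Mark the dead node's OSD out and purge it so Ceph rebalances
ceph osd out osd.1
ceph osd purge osd.1 --yes-i-really-mean-it
```

Is that roughly right, or is there a cleaner way to do this through the GUI?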
TIA