Ceph after node reinstall.

Geonon

New Member
Apr 8, 2024
I had a cluster node fail due to a hard drive failure. I replaced the failed HDD and reinstalled the node with a different hostname and IP.

My question is: how do I clean up my Ceph installation? The OSD still shows as linked to the old node when I look at the OSDs under Ceph. Also, when I look at the disks of the reinstalled node, the dedicated Ceph disk shows its usage as LVM, Ceph (OSD.0).
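
In case it helps anyone answering: I assume the usual way to confirm the stale entry is to run something like the following from a node that still has a working Ceph monitor (just the standard CLI, no output pasted here):

    ceph osd tree       # CRUSH hierarchy; the old hostname still appears as a host bucket with osd.0 under it
    ceph osd df tree    # same hierarchy with usage, to confirm osd.0 is down/out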

Do I just need to wipe the disk and migrate the OSD to the reinstalled node?
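
For reference, this is the rough sequence I was planning to try. osd.0 is from my setup above, but the old hostname (pve-old) and the device path (/dev/sdb) are just placeholders for my actual values, so please correct me if any step here is wrong:

    # remove the dead OSD from the cluster entirely (CRUSH map, auth key, OSD entry)
    ceph osd out osd.0
    ceph osd purge 0 --yes-i-really-mean-it

    # remove the now-empty host bucket for the old hostname from the CRUSH map
    ceph osd crush rm pve-old

    # on the reinstalled node: wipe the old LVM/Ceph metadata from the dedicated disk
    ceph-volume lvm zap /dev/sdb --destroy

    # recreate the OSD on the reinstalled node using the Proxmox tooling
    pveceph osd create /dev/sdb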

Thanks.