Hello,
I want to remove an old, worn-out SSD (osd.48) from our 3-node Ceph Reef cluster.
Node hostnames: proxstore11 / proxstore12 / proxstore13
Via the CLI I reweighted osd.48 to 0.0.
Once the PG count on it reached 0, I used the Proxmox GUI to mark the OSD out and stop the ceph-osd@48 service.
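For reference, the CLI side looked roughly like this (reproduced from memory, so the exact syntax may differ slightly):

    ceph osd crush reweight osd.48 0.0    # drop the CRUSH weight so PGs drain off osd.48
    ceph osd df tree | grep osd.48        # watch until the PGS column for osd.48 shows 0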
osd.48 is now shown as down/out in the GUI (overall Ceph health is still OK) and the "Destroy" button is no longer greyed out. I click it and keep the "Cleanup Disks" option enabled, but clicking "Remove" immediately fails with error 500: "internal error: duplicate hostname found: proxstore11".
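In case it matters, I assume the GUI's Destroy button maps to roughly this on the CLI (I haven't actually tried it yet, so treat the exact invocation as a guess):

    pveceph osd destroy 48 --cleanup    # should be the equivalent of GUI Destroy with "Cleanup Disks" ticked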
Why does pveceph think there is a duplicate hostname? Which file should I check for duplicates?
Thanks,
Patrick