After my 3-node test Ceph cluster refused to come back up following a lengthy power outage, I reinstalled Proxmox on the hosts.
After recreating the Ceph monitors on each host, I ran 'ceph-volume lvm activate --all' to bring up the existing OSDs, and it completed without errors.
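For reference, this is roughly what I did on each node (a sketch from memory, not a verbatim transcript):

    # confirm ceph-volume still sees the old OSD logical volumes
    ceph-volume lvm list
    # activate every OSD it finds
    ceph-volume lvm activate --all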
However, the cluster still shows no OSDs. I'm guessing the freshly created monitors simply don't know anything about them.
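For what it's worth, this is how I'm checking; the summaries in the comments are paraphrased from memory:

    ceph -s           # health summary reports no OSDs
    ceph osd tree     # empty apart from the default root, no hosts or OSDs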
I took a look at the CRUSH map on a different cluster for comparison, and there the HDDs are listed as expected.
I tried 'ceph osd crush add osd.0 1.0 host=ceph1', but it says I need to create the OSD first. Well, the OSD already exists on disk.
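The exact attempt looked like this (error paraphrased; I don't have the verbatim message handy):

    ceph osd crush add osd.0 1.0 host=ceph1
    # -> fails, saying the OSD needs to be created first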
What are the additional steps to get all the existing OSDs re-imported?
Thanks for the assist!