Ceph Failure

picchiosat

May 17, 2024
Good morning. I have a three-node cluster on which I had created a Ceph datastore consisting of 16 disks. After a power failure, Ceph stopped working. I managed to reinstall Ceph on the nodes, but now I can't add the HDDs back because they are still marked as OSD.0 through OSD.15 and are reported as not clean. Is it possible to recreate the volume without losing the data? Thank you
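To illustrate the situation, this is roughly the kind of command sequence involved — a hedged sketch only, assuming the OSDs were originally deployed with ceph-volume and their LVM volumes survived the reinstall (device names are placeholders):

```shell
# Check whether the disks still carry recognizable Ceph LVM metadata.
# If the 16 OSDs show up here, the data may still be recoverable.
ceph-volume lvm list

# Data-preserving path: re-activate every OSD found on the disks
# instead of recreating them from scratch.
ceph-volume lvm activate --all

# Verify the cluster sees the re-activated OSDs.
ceph osd tree

# DESTRUCTIVE last resort, only for an OSD that is truly unrecoverable:
# wiping a disk erases its data. /dev/sdX is a placeholder.
# ceph-volume lvm zap /dev/sdX --destroy
```

The key point is that "not clean" disks usually still hold their OSD metadata, so re-activation should be attempted before any zap/wipe step.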