Hey guys,
I have a Ceph cluster with 3 OSDs across 3 nodes, one OSD per node. Two of the OSDs went offline and won't come back (I'm fairly sure the disks died). One OSD is still alive, along with the monitor. I can see the cluster state from ceph -s:
id: c42a9057-9b43-4e68-afe8-d2cac60a8a6c
health: HEALTH_WARN...