active+recovery_wait+degraded

  1.

    Ceph 75% degraded with only one host down of 4

    Hi all, I am struggling to find the reason why my Ceph cluster goes 75% degraded (as seen in the screenshot above) when I reboot just one node. The 4-node cluster is new, with no VM or container, so the used space is 0. Each node contains an equal number of SSD OSDs (6 x 465 GB)...
  2.

    [SOLVED] Proxmox Ceph - After power failure

    Hi, today there was an unexpected power outage where my servers are co-located; the entire datacenter went dark. Luckily I had fresh backups to restore for the most part. However, I have an issue with one OSD on one server: the OSD is stuck in "active+recovery_wait+degraded". I have...