ceph crash recovery disaster

  1. [SOLVED] ceph problem - Reduced data availability: 15 pgs inactive

    Proxmox 7.1-8. Yesterday I executed a large delete operation on the ceph-fs pool (around 2 TB of data). The operation finished successfully within a few seconds (without any noticeable errors), and then the following problem occurred: 7 out of 32 OSDs went down and out. Trying to set them in and... (a PG/OSD triage sketch follows this list)
  2. Ceph PG Data Recovery / PG Down

    I have a 2-node Ceph cluster for a pool. The data is in 2-replica mode. I took one of the nodes down for maintenance while the other node kept working. Later there was a power outage that caused the second node to restart, but the problem is it had RAID card caching enabled and the battery... (a PG-query sketch follows this list)
  3. Add OSDs to new Cluster

    Hi guys, I had a power outage that caused me to re-image one of the two servers in my Proxmox cluster. After re-imaging I wasn't able to get the monitors or managers to come up, so I ended up wiping the monitors/managers. I now have a new Ceph cluster with new monitors/managers but no pool(s)... (a ceph-volume inventory sketch follows this list)
  4. Ceph Crash recovery

    Hi everyone, I am currently testing Proxmox 4.4 HA with Ceph Hammer, all in a 3-node cluster. Right now I am running crash tests. Since I did not find a dedicated Ceph forum, I am trying it here. The RBD settings are 3/2 and 3/1, respectively. One thing stood out: when 2 of the 3 nodes are down... (a size/min_size sketch follows this list)
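
For scenarios like thread 1 (OSDs suddenly down/out and PGs inactive), a minimal triage sketch in Python is shown below. It assumes a node with the standard `ceph` CLI and an admin keyring; the wrapping JSON key for stuck PGs differs between Ceph releases, so the script falls back to treating the output as a plain list.

```python
#!/usr/bin/env python3
"""Hypothetical triage helper: list PGs stuck inactive and OSDs that are
down or out, and show (commented out) how an OSD could be marked back in.
Assumes the standard ceph CLI and an admin keyring on this node."""
import json
import subprocess


def ceph_json(*args):
    """Run a ceph subcommand and return its parsed JSON output."""
    out = subprocess.check_output(["ceph", *args, "--format", "json"])
    return json.loads(out)


# PGs stuck inactive serve no client I/O; the wrapping key differs by release.
stuck = ceph_json("pg", "dump_stuck", "inactive")
pgs = stuck.get("stuck_pg_stats", []) if isinstance(stuck, dict) else stuck
for pg in pgs:
    print(f"inactive pg {pg['pgid']}: {pg['state']}")

# Walk the CRUSH tree and report OSDs that are down or weighted out.
for node in ceph_json("osd", "tree")["nodes"]:
    if node.get("type") != "osd":
        continue
    if node.get("status") == "down" or node.get("reweight", 1.0) == 0:
        print(f"{node['name']}: status={node.get('status')}, "
              f"reweight={node.get('reweight')}")
        # Only mark an OSD back in once its daemon is up and its data intact:
        # subprocess.check_call(["ceph", "osd", "in", node["name"]])
```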
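
For a "PG down" situation like thread 2, the usual first step is to ask the cluster why peering is blocked. The sketch below queries a single PG; the PG ID `2.1a` is a placeholder, and the exact layout of the `recovery_state` section varies between Ceph releases.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: query one placement group and print its state plus
the peering/recovery notes, which usually name the OSDs the PG is waiting
for. The PG ID is a placeholder; pass a real one as the first argument."""
import json
import subprocess
import sys

pgid = sys.argv[1] if len(sys.argv) > 1 else "2.1a"   # placeholder PG id
out = subprocess.check_output(["ceph", "pg", pgid, "query", "--format", "json"])
info = json.loads(out)

print("state: ", info.get("state"))
print("up:    ", info.get("up"))
print("acting:", info.get("acting"))

# Each recovery_state entry has a "name"; some also carry a human-readable
# comment such as "not enough up instances of this PG to go active".
for entry in info.get("recovery_state", []):
    print("-", entry.get("name"), entry.get("comment", ""))
```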
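
Thread 3 is about surviving OSDs that belong to a wiped cluster. Before anything else it helps to see which cluster fsid each on-disk OSD was created under; the sketch below reads that from `ceph-volume lvm list`. The JSON layout (the `tags` keys in particular) can differ slightly between releases.

```python
#!/usr/bin/env python3
"""Hypothetical inventory sketch: list the OSD logical volumes ceph-volume
still sees on local disks, together with the cluster fsid they were created
under, so old-cluster and new-cluster OSDs can be told apart."""
import json
import subprocess

out = subprocess.check_output(["ceph-volume", "lvm", "list", "--format", "json"])
for osd_id, volumes in json.loads(out).items():
    for vol in volumes:
        tags = vol.get("tags", {})
        print(f"osd.{osd_id}: devices={vol.get('devices')} "
              f"cluster_fsid={tags.get('ceph.cluster_fsid')} "
              f"osd_fsid={tags.get('ceph.osd_fsid')}")

# OSDs whose cluster_fsid matches the running cluster can be started with
# `ceph-volume lvm activate --all`; OSDs created under the old (wiped)
# cluster keep the old fsid and will not simply join the new one.
```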
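
The observation in thread 4 comes down to replicated-pool arithmetic: with one replica per node, a pool keeps serving I/O only while at least `min_size` replicas survive, so size 3 / min_size 2 blocks when 2 of 3 nodes are down, while 3/1 keeps going in a degraded state. Below is a small sketch of that check; pool names are read from the live cluster, and `nodes_down` is just the simulated failure.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of the size/min_size check: for each pool, how many
replicas survive when `nodes_down` nodes fail (one replica per node assumed),
and whether that still satisfies min_size."""
import json
import subprocess


def pool_get(pool, prop):
    out = subprocess.check_output(
        ["ceph", "osd", "pool", "get", pool, prop, "--format", "json"])
    return json.loads(out)[prop]


nodes_down = 2   # the crash test above: 2 of 3 nodes failed
pools = json.loads(
    subprocess.check_output(["ceph", "osd", "pool", "ls", "--format", "json"]))

for pool in pools:
    size, min_size = pool_get(pool, "size"), pool_get(pool, "min_size")
    surviving = max(size - nodes_down, 0)          # replicas left after failure
    ok = surviving >= min_size
    print(f"{pool}: size={size} min_size={min_size} surviving={surviving} "
          f"-> {'I/O continues (degraded)' if ok else 'I/O blocked, PGs inactive'}")
```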