Search results

  1. Understanding Ceph

    If you're also using the default pool then your issue may well be the same.
  2. Understanding Ceph

    Didn't realise there were two separate things going on and missed the OP. However, to the OP: as just suggested, the default pool is created with far too small a PG count; you want this around 200 per OSD if you're not looking to expand further shortly. If you're not using the storage in production yet you...
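    (For reference, raising the PG count on an existing pool is typically done along these lines; the pool name "rbd" and the value 256 are only illustrative:

        ceph osd pool set rbd pg_num 256
        ceph osd pool set rbd pgp_num 256

    pgp_num is raised to match pg_num so the new PGs actually start rebalancing; note that on older Ceph releases pg_num can be increased but not decreased.)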
  3. Understanding Ceph

    As there are no OSDs, you can just remove the entry from your crush map. Do you have the output of ceph -w now, in the healthy state?
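    (A minimal sketch of the removal suggested above, assuming the stale entry is called osd.2; the name is illustrative, and the same command also removes an empty host bucket by name:

        ceph osd crush remove osd.2)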
  4. Understanding Ceph

    What did ceph -w say when the 2 OSDs were down, if you have a copy? I would also remove the empty host from the crush map if you have no plans to use it going forward.
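    (If no copy of the ceph -w output was kept, the current state can still be inspected with standard Ceph commands:

        ceph -s         # one-shot health and status summary
        ceph -w         # the same summary, then a live stream of cluster events
        ceph osd tree   # hosts and OSDs as placed in the crush map, with up/down state

    ceph osd tree is the quickest way to spot the empty host mentioned above.)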
  5. Understanding Ceph

    What does your crush map look like?
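    (A human-readable dump of the crush map can be produced with the standard tools; the file names are arbitrary:

        ceph osd getcrushmap -o crush.bin
        crushtool -d crush.bin -o crush.txt)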