#rbd

  1. S

    How to allocate OSDs to a specific Ceph pool

    I have 20 OSDs (HDDs) in my Ceph cluster and want to allocate 15 OSDs to one pool and 5 OSDs to another pool without causing downtime, as this is a production environment. Can someone provide guidance on how to achieve this safely and efficiently, considering my limited experience with Ceph? (One possible approach is sketched after this list.)
  2. J

    Ceph RBD Storage Shrinking Over Time – From 10TB Down to 8.59TB

    I have a cluster with three Proxmox servers connected via Ceph. The effective storage was 10TB at the start, but over time it has decreased to 8.59TB, and I don't know why. The storage is RBD. Why is my Ceph RBD storage shrinking? How can I reclaim lost space? (See the second sketch after this list.)
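
For question 1, one common way to pin pools to disjoint sets of OSDs is CRUSH device classes: tag each group of OSDs with its own class, create one replicated CRUSH rule per class, and point each pool at its rule. The sketch below shows that approach via the `ceph` CLI; the pool names, class names, and the 15/5 OSD split are hypothetical placeholders, and reassigning device classes triggers background rebalancing, so in production this should be done gradually while watching `ceph -s`.

```python
#!/usr/bin/env python3
"""Sketch: dedicate two groups of OSDs to separate pools via CRUSH device classes.

Assumptions (not from the original post): pool names "pool-a"/"pool-b",
device-class names "class-a"/"class-b", and the OSD id split are placeholders.
Requires admin access to the `ceph` CLI on the node where this runs.
"""
import subprocess


def ceph(*args: str) -> None:
    """Run a ceph CLI command and raise if it returns a non-zero exit code."""
    subprocess.run(["ceph", *args], check=True)


# Hypothetical split: OSDs 0-14 serve pool-a, OSDs 15-19 serve pool-b.
GROUPS = {
    "class-a": range(0, 15),
    "class-b": range(15, 20),
}

for device_class, osd_ids in GROUPS.items():
    for osd_id in osd_ids:
        # An OSD carries exactly one device class, so clear the old one first.
        ceph("osd", "crush", "rm-device-class", f"osd.{osd_id}")
        ceph("osd", "crush", "set-device-class", device_class, f"osd.{osd_id}")

# One replicated CRUSH rule per class (root "default", failure domain "host").
ceph("osd", "crush", "rule", "create-replicated", "rule-a", "default", "host", "class-a")
ceph("osd", "crush", "rule", "create-replicated", "rule-b", "default", "host", "class-b")

# Point each pool at its rule; Ceph migrates the data in the background.
ceph("osd", "pool", "set", "pool-a", "crush_rule", "rule-a")
ceph("osd", "pool", "set", "pool-b", "crush_rule", "rule-b")
```

Changing a pool's `crush_rule` does not cause downtime by itself, but it does move data, so the usual advice is to stage the class changes a few OSDs at a time and let recovery settle between steps.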
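
For question 2, the "shrinking" size reported for an RBD storage is usually the pool's used space plus its MAX AVAIL, and MAX AVAIL drops as the cluster fills up or becomes unbalanced, so the total can appear to shrink without anything being lost. The sketch below reads `ceph df --format json` and prints per-pool usage; the JSON field names (`stored`, `max_avail`) match recent Ceph releases and are an assumption for older ones.

```python
#!/usr/bin/env python3
"""Sketch: inspect `ceph df` to see whether MAX AVAIL (not data) is what shrank.

Assumption: JSON keys ("stored", "max_avail", "total_bytes", "total_avail_bytes")
as emitted by Nautilus-and-later releases; older releases differ slightly.
"""
import json
import subprocess

raw = subprocess.run(
    ["ceph", "df", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
report = json.loads(raw)


def tib(n_bytes: int) -> float:
    """Convert bytes to TiB for readability."""
    return n_bytes / 2**40


print(f"cluster raw: {tib(report['stats']['total_avail_bytes']):.2f} TiB available "
      f"of {tib(report['stats']['total_bytes']):.2f} TiB")

for pool in report["pools"]:
    stats = pool["stats"]
    print(f"pool {pool['name']:<20} stored={tib(stats['stored']):6.2f} TiB  "
          f"max_avail={tib(stats['max_avail']):6.2f} TiB")
```

If the goal is to reclaim space freed inside VM disks, that space only returns to the pool when the guests issue discards: enable discard on the virtual disks and run fstrim inside the guests, or sparsify individual images with `rbd sparsify`.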