Ceph usable space reporting

kellogs

I have a Ceph cluster with 5 nodes. Each node has 10 OSDs, each OSD is 3.84 TB, and replication (size) is set to 3. Based on this I should have 64 TB of usable space:
1736808875962.png
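Quick sanity check of that maths (a sketch only; it assumes the 3.84 TB is the drive vendor's decimal figure, i.e. 10^12 bytes, and notes that Ceph/Proxmox report in binary TiB, which alone makes 64 TB show up as roughly 58 TiB):

```python
# Back-of-the-envelope check of the capacity figures above.
nodes = 5
osds_per_node = 10
osd_size_tb = 3.84          # decimal terabytes (10**12 bytes), as sold by the vendor
replication = 3

raw_tb = nodes * osds_per_node * osd_size_tb      # 192.0 TB raw
usable_tb = raw_tb / replication                  # 64.0 TB usable (decimal)

# Convert decimal TB to binary TiB, which is what the Ceph/Proxmox GUI displays.
TB = 10**12
TiB = 2**40
raw_tib = raw_tb * TB / TiB                       # ~174.6 TiB raw
usable_tib = usable_tb * TB / TiB                 # ~58.2 TiB usable

print(f"raw:    {raw_tb:.1f} TB = {raw_tib:.1f} TiB")
print(f"usable: {usable_tb:.1f} TB = {usable_tib:.1f} TiB (size=3)")
```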

But when I looked at the Ceph reporting in the Proxmox GUI, it said this:

1736808907867.png

Why the difference?
 
You should also keep in mind that you should never have more than 80% used in a Ceph pool, AND that you should leave enough free space on each node so that the data from a failed OSD can spill over onto the other OSDs; otherwise you risk ending up with a full (and effectively failed) node.
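A rough illustration of that guidance (a sketch only; the 0.8 threshold and the one-spare-OSD-per-node headroom are assumed rules of thumb, not values read from your cluster):

```python
# Rough estimate of the capacity you can actually plan to use once you keep
# headroom for OSD failure recovery and stay under the ~80% full guideline.
nodes = 5
osds_per_node = 10
osd_size_tb = 3.84
replication = 3
full_guideline = 0.80        # keep pools below ~80% used

# Leave roughly one OSD's worth of free space per node so a failed OSD can be
# re-replicated onto its neighbours without filling them up.
usable_osds_per_node = osds_per_node - 1

raw_tb = nodes * usable_osds_per_node * osd_size_tb
practical_tb = raw_tb / replication * full_guideline

print(f"practical capacity to plan for: {practical_tb:.1f} TB")
# -> roughly 46 TB instead of the theoretical 64 TB
```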

To get more out of the system you currently have, consider enabling compression (e.g. lz4) on your pools. The performance impact is negligible, yet it will increase the amount of data you can store on the system.
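If you do go the compression route, something along these lines should enable lz4 on an existing pool (a sketch only, wrapped in Python just to keep the examples in one language; "mypool" is a placeholder for your actual pool name):

```python
# Enable BlueStore inline compression on a pool via the ceph CLI.
import subprocess

pool = "mypool"  # placeholder: substitute your pool name
for key, value in [("compression_algorithm", "lz4"),
                   ("compression_mode", "aggressive")]:
    subprocess.run(["ceph", "osd", "pool", "set", pool, key, value], check=True)
```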
 
Yes, I read about the 80% rule, and I am planning to add 2 more Ceph nodes with the exact same hardware spec. But I still don't know why the reported number is different.