Ceph pool does not use all available storage

Jul 4, 2018
Hi there.

I have the following issue and need to know if this is normal behaviour.

My Ceph raw storage size is 142TiB. I have set up a pool using all OSDs with replication 3, but the available space on the pool is 36TiB. Around 30TiB appear to be missing. Is this space needed for the service to function?
 
What does 'ceph df' show?
 
GLOBAL:
    SIZE      AVAIL     RAW USED   %RAW USED
    142TiB    52.7TiB   89.3TiB    62.90
POOLS:
    NAME       ID   USED      %USED   MAX AVAIL   OBJECTS
    rbd        0    24.2GiB   0.30    7.89TiB     6186
    rbd-sata   1    28.9TiB   78.55   7.89TiB     8240600


I am referring to rbd-sata, and if it is of any help, all disks are SSDs despite the pool name.
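
For what it's worth, the raw and pool figures above line up once 3x replication is taken into account; a quick sanity check using only the numbers shown (a sketch, nothing here is measured from the cluster itself):

# Sanity check on the 'ceph df' output above, assuming 3x replication.
# All figures are taken from the posted output.

replication = 3

raw_used_tib = 89.3                   # GLOBAL RAW USED
pool_used_tib = 28.9 + 24.2 / 1024    # rbd-sata USED + rbd USED (24.2 GiB in TiB)

# Raw usage divided by the replication factor should roughly match
# the combined logical usage of the pools.
print(round(raw_used_tib / replication, 1))   # ~29.8 TiB
print(round(pool_used_tib, 1))                # ~28.9 TiB (the gap is overhead/imbalance)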
 
What does 'ceph osd df tree' show?
 
Ceph works out a pool's free space from its most full OSDs.

As you can see from your OSD tree, some of your OSDs are in the high 70%/80% range, so 'ceph df' showing roughly 70% used is correct.

If you're able to rebalance your OSDs better, you will see slightly more available space. However, Ceph has its overhead and you should never really aim to go above 60% utilisation: if you lose a host right now, there is a good chance you would end up with full OSDs and a pause on all I/O.

However, you're only ever going to get a maximum pool size of 47.3TiB (142TiB / 3), so you're not too far off perfect (losing about 10TiB currently).
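
Roughly, the estimate works like this (a simplified sketch of the idea, not Ceph's exact code; the per-OSD numbers below are made up for illustration):

# Simplified sketch of how a replica-3 pool's MAX AVAIL is limited by
# the fullest OSD. This approximates the idea, not Ceph's exact logic.

replication = 3
full_ratio = 0.95          # default mon full ratio (assumption)
total_raw_tib = 142.0      # GLOBAL SIZE from 'ceph df'

# Hypothetical (capacity_tib, used_tib) per OSD -- not this cluster's real values.
osds = [
    (3.6, 2.9),            # ~80% full: this OSD is the limiting one
    (3.6, 2.3),
    (3.6, 2.1),
    # ... remaining OSDs omitted
]

# New writes land on OSDs roughly in proportion to their capacity, so the
# pool can only grow until the fullest OSD reaches the full ratio.
headroom = min(full_ratio - used / cap for cap, used in osds)

max_avail_tib = total_raw_tib * headroom / replication
print(round(max_avail_tib, 1))   # ~6.8 TiB with these made-up numbers,
                                 # the same ballpark as the 7.89 TiB shown above

Which is also why bringing the fullest OSDs down through better balance raises MAX AVAIL even though the raw capacity hasn't changed.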
 
Hi,

I have added another node with 6 × 4TiB disks. Before adding the OSDs the pool showed ~39.40TiB available. After adding the new node it has gone down to 38.74TiB.

How is that even possible?

Thanks in advance.
 

Target PG per OSD should be around 100.
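
For a rough check of where a pool sits against that target (the pg_num and OSD count below are illustrative assumptions, not this cluster's actual values):

# Quick check of PGs per OSD against the ~100 target mentioned above.
# pg_num and num_osds are illustrative, not taken from this cluster.

pg_num = 1024        # PGs in the pool
replication = 3      # pool size
num_osds = 42        # e.g. the existing OSDs plus the 6 new disks

# Each PG is stored on 'replication' OSDs, so on average:
pgs_per_osd = pg_num * replication / num_osds
print(round(pgs_per_osd))   # ~73 here; raising pg_num moves this toward ~100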