Understanding Ceph free space

Gert

Member
Jul 27, 2015
Centurion, South Africa
www.huge.co.za
I have a 3-node cluster using Ceph with the following OSD configuration:

node1:
3x 500GB SSD (465 GiB usable)
2x 4TB HDD (3.64 TiB usable)

node2:
3x 500GB SSD (465 GiB usable)
2x 4TB HDD (3.64 TiB usable)

node3:
3x 500GB SSD (465 GiB usable)
2x 3TB HDD (2.73 TiB usable)

I have created two pools based on the ssd and hdd device classes, both with size/min_size 3/2. ceph df shows the following:

RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED    %RAW USED
    hdd       20 TiB      14 TiB      5.8 TiB     5.8 TiB         28.83
    ssd       4.1 TiB     2.3 TiB     1.8 TiB     1.8 TiB         43.94
    TOTAL     24 TiB      17 TiB      7.5 TiB     7.6 TiB         31.40

POOLS:
    POOL        ID    STORED      OBJECTS     USED        %USED    MAX AVAIL
    ceph-hdd     3    1.9 TiB     566.34k     5.8 TiB     32.89      3.9 TiB
    ceph-ssd     4    624 GiB     236.34k     1.8 TiB     50.86      589 GiB

Looking at just the SSD drives: 465 GiB x 9 gives 4185 GiB, which explains the SSD total SIZE of 4.1 TiB, and the %RAW USED is 43.94%. However, the ceph-ssd pool shows 50.86% used, and the Proxmox GUI shows 75.61% used for that storage (1.79 TiB of 2.36 TiB).

Why is there a difference in the % used figures?

Also, where does the 2.36 TiB total in the GUI come from?
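
For context, the two device-class pools were created roughly like this (a sketch: the pool names are the real ones, but the CRUSH rule names and PG counts are only placeholders, not necessarily what I used):

root@node1:~# ceph osd crush rule create-replicated replicated_ssd default host ssd
root@node1:~# ceph osd crush rule create-replicated replicated_hdd default host hdd
root@node1:~# ceph osd pool create ceph-ssd 128 128 replicated replicated_ssd
root@node1:~# ceph osd pool create ceph-hdd 256 256 replicated replicated_hdd
root@node1:~# ceph osd pool set ceph-ssd size 3
root@node1:~# ceph osd pool set ceph-ssd min_size 2
(and the same size/min_size settings for ceph-hdd)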
 

Attachments

  • Capture.PNG (11.9 KB)
  • Capture2.PNG (18.5 KB)
  • Capture3.PNG (21.6 KB)
OK, so the GUI now agrees with the POOLS usage from ceph df: "66.50% (1.05 TiB of 1.59 TiB)". However, I still don't understand why the total capacity of the ceph-ssd pool is only 1.59 TiB. Shouldn't it be 1.8 TiB, the same as in RAW STORAGE? (5.4 / 3 = 1.8)

I now have 12x 500GB SSDs (465 GiB usable) across the 3 nodes.

RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED    %RAW USED
    hdd       22 TiB      15 TiB      7.2 TiB     7.2 TiB         32.86
    ssd       5.4 TiB     2.8 TiB     2.7 TiB     2.7 TiB         49.31
    TOTAL     27 TiB      17 TiB      9.8 TiB     9.9 TiB         36.15

POOLS:
    POOL        ID    STORED      OBJECTS     USED        %USED    MAX AVAIL
    ceph-hdd     3    2.4 TiB     684.74k     7.2 TiB     35.31      4.4 TiB
    ceph-ssd     4    1.0 TiB     350.02k     2.7 TiB     62.46      548 GiB
 
Is it not supposed to be 1.8TB, the same as in RAW STORAGE? (5.4 / 3 = 1.8)
Not exactly. The distribution of data across the individual OSDs needs to be taken into account as well. The pool will stop accepting writes once one OSD reaches the 'osd full' state, so the first OSD to reach that fill level determines how much can actually be stored in the pool.
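
To put rough numbers on it (an approximation from the figures above, not Ceph's exact formula): MAX AVAIL is projected from the headroom on the fullest OSD, and the Proxmox GUI total appears to be simply USED + MAX AVAIL. With the first ceph df output: 1.79 TiB + 589 GiB ≈ 2.36 TiB, and 1.79 TiB / 2.36 TiB ≈ 75.8%, which matches the GUI's 75.61% within rounding. The pool's %USED in ceph df, on the other hand, is consistent with USED / (USED + MAX AVAIL × 3 replicas) = 1.79 / (1.79 + 3 × 0.575) ≈ 51%, hence the lower 50.86%.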
 
I see, thank you for the clarification.

root@node1:~# ceph osd df | grep ssd
0 ssd 0.45409 1.00000 465 GiB 243 GiB 242 GiB 4.1 MiB 1.3 GiB 222 GiB 52.22 1.44 68 up
1 ssd 0.45409 1.00000 465 GiB 191 GiB 189 GiB 3.2 MiB 1.1 GiB 274 GiB 40.99 1.13 53 up
2 ssd 0.45409 1.00000 465 GiB 218 GiB 217 GiB 3.7 MiB 1.2 GiB 247 GiB 46.87 1.29 61 up
6 ssd 0.45409 1.00000 465 GiB 279 GiB 278 GiB 4.6 MiB 1.1 GiB 186 GiB 60.00 1.66 74 up
7 ssd 0.45409 1.00000 465 GiB 216 GiB 215 GiB 3.4 MiB 1021 MiB 249 GiB 46.44 1.28 57 up
15 ssd 0.45409 1.00000 465 GiB 227 GiB 226 GiB 4.1 MiB 1.3 GiB 238 GiB 48.92 1.35 69 up
16 ssd 0.45409 1.00000 465 GiB 226 GiB 225 GiB 4.1 MiB 1.3 GiB 239 GiB 48.57 1.34 69 up
17 ssd 0.45409 1.00000 465 GiB 200 GiB 199 GiB 3.7 MiB 1.2 GiB 265 GiB 42.98 1.19 61 up
5 ssd 0.45409 1.00000 465 GiB 294 GiB 293 GiB 4.7 MiB 1.2 GiB 171 GiB 63.26 1.75 78 up
8 ssd 0.45409 1.00000 465 GiB 178 GiB 177 GiB 3.0 MiB 1021 MiB 287 GiB 38.22 1.05 47 up
9 ssd 0.45409 1.00000 465 GiB 306 GiB 305 GiB 5.0 MiB 1.3 GiB 159 GiB 65.88 1.82 81 up
10 ssd 0.45409 1.00000 465 GiB 189 GiB 188 GiB 3.1 MiB 1021 MiB 276 GiB 40.72 1.12 50 up
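
If I read this right, osd.9 is currently the fullest at 65.88% (306 GiB used of 465 GiB). Assuming Ceph projects MAX AVAIL from that OSD with the default full ratio of 0.95, the headroom is roughly 0.95 × 465 GiB − 306 GiB ≈ 136 GiB; scaled over 12 equally weighted OSDs and divided by 3 replicas that gives ≈ 543 GiB, which is close to the 548 GiB MAX AVAIL reported for ceph-ssd.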
 
I believe my PG count is fine. I ran the balancer in upmap mode and the capacity increased from 1.59 TiB to 1.8 TiB.
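
For reference, switching to the upmap balancer looks roughly like this (a sketch; the exact steps can differ slightly between Ceph releases):

root@node1:~# ceph osd set-require-min-compat-client luminous   # upmap needs luminous or newer clients
root@node1:~# ceph balancer mode upmap
root@node1:~# ceph balancer on
root@node1:~# ceph balancer status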

It is still busy but almost done, and it looks much better now:

root@node1:~# ceph osd df | grep ssd
0 ssd 0.45409 1.00000 465 GiB 229 GiB 228 GiB 4.1 MiB 1.3 GiB 236 GiB 49.23 1.35 64 up
1 ssd 0.45409 1.00000 465 GiB 238 GiB 237 GiB 4.0 MiB 1.2 GiB 227 GiB 51.17 1.40 64 up
2 ssd 0.45409 1.00000 465 GiB 231 GiB 230 GiB 3.9 MiB 1.2 GiB 234 GiB 49.67 1.36 64 up
6 ssd 0.45409 1.00000 465 GiB 242 GiB 241 GiB 4.6 MiB 1.4 GiB 223 GiB 52.15 1.43 64 up
7 ssd 0.45409 1.00000 465 GiB 246 GiB 245 GiB 3.9 MiB 1.1 GiB 219 GiB 52.98 1.45 64 up
15 ssd 0.45409 1.00000 465 GiB 212 GiB 211 GiB 4.3 MiB 1.2 GiB 253 GiB 45.58 1.25 64 up
16 ssd 0.45409 1.00000 465 GiB 210 GiB 209 GiB 4.3 MiB 1.6 GiB 255 GiB 45.22 1.24 64 up
17 ssd 0.45409 1.00000 465 GiB 213 GiB 212 GiB 3.9 MiB 1.1 GiB 252 GiB 45.83 1.26 64 up
5 ssd 0.45409 1.00000 465 GiB 241 GiB 240 GiB 4.5 MiB 1.1 GiB 224 GiB 51.85 1.42 64 up
8 ssd 0.45409 1.00000 465 GiB 251 GiB 250 GiB 3.9 MiB 1020 MiB 214 GiB 54.05 1.48 64 up
9 ssd 0.45409 1.00000 465 GiB 242 GiB 241 GiB 4.5 MiB 1.1 GiB 223 GiB 52.05 1.43 64 up
10 ssd 0.45409 1.00000 465 GiB 250 GiB 249 GiB 3.9 MiB 1.0 GiB 215 GiB 53.82 1.48 64 up
 
