Ceph Available Storage Issue

Bahjons

Aug 8, 2018
I have two Ceph pools: one for SATA drives and one for SSDs.

I was getting to an uncomfortable usage level, so I started to replace my smaller 500GB SSDs with 1TB SSDs. I've replaced more than 8 SSDs so far, and only have 4 left to replace.

However, the usage percentage and available space have not changed. What am I doing wrong?

Here's the output of ceph df and ceph osd df tree:

Code:
root@cn201:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    20617G     13208G        7408G         35.93
POOLS:
    NAME          ID     USED      %USED     MAX AVAIL     OBJECTS
    ssdpool1      1      1535G     70.12          654G      393286
    satapool1     2       928G     56.03          728G      238195
root@cn201:~# ceph osd df tree
ID  CLASS WEIGHT   REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS TYPE NAME           
-20        7.00000        -      0B      0B      0B     0    0   - root sata           
-17        1.00000        -  931GiB  391GiB  540GiB 42.03 1.17   -     host cn201-sata
 17   hdd  1.00000  1.00000  931GiB  391GiB  540GiB 42.03 1.17  54         osd.17     
-18              0        -      0B      0B      0B     0    0   -     host cn202-sata
-19        3.00000        - 2.73TiB  809GiB 1.94TiB 28.95 0.81   -     host cn203-sata
  3   hdd  1.00000  1.00000  931GiB  313GiB  619GiB 33.59 0.93  43         osd.3       
  4   hdd  1.00000  1.00000  931GiB  220GiB  711GiB 23.63 0.66  30         osd.4       
  5   hdd  1.00000  1.00000  931GiB  276GiB  656GiB 29.61 0.82  38         osd.5       
-41        1.00000        -  931GiB  523GiB  408GiB 56.18 1.56   -     host cn205-sata
 12   hdd  1.00000  1.00000  931GiB  523GiB  408GiB 56.18 1.56  72         osd.12     
-47        1.00000        -  931GiB  495GiB  437GiB 53.12 1.48   -     host cn206-sata
 14   hdd  1.00000  1.00000  931GiB  495GiB  437GiB 53.12 1.48  68         osd.14     
-53        1.00000        -  931GiB  573GiB  359GiB 61.48 1.71   -     host cn207-sata
 16   hdd  1.00000  1.00000  931GiB  573GiB  359GiB 61.48 1.71  79         osd.16     
-59              0        -      0B      0B      0B     0    0   -     host cn208-sata
 -9       25.50000        - 1.40TiB  638GiB  792GiB     0    0   - root ssd           
-10        1.50000        -  931GiB  313GiB  619GiB 33.55 0.93   -     host cn201-ssd 
 11   ssd  1.50000  1.00000  931GiB  313GiB  619GiB 33.55 0.93  26         osd.11     
-11        4.50000        - 2.77TiB  758GiB 2.03TiB 26.71 0.74   -     host cn202-ssd 
  0   ssd  1.50000  1.00000  954GiB  253GiB  701GiB 26.55 0.74  21         osd.0       
 15   ssd  1.50000  1.00000  931GiB  266GiB  666GiB 28.52 0.79  22         osd.15     
 18   ssd  1.50000  1.00000  954GiB  240GiB  714GiB 25.12 0.70  20         osd.18     
-12        3.00000        - 1.40TiB  613GiB  818GiB 42.84 1.19   -     host cn203-ssd 
  2   ssd  1.50000  1.00000  954GiB  275GiB  678GiB 28.87 0.80  23         osd.2       
 10   ssd  1.50000  0.95001  477GiB  338GiB  139GiB 70.78 1.97  28         osd.10     
-37        4.50000        - 2.26TiB  746GiB 1.53TiB 32.31 0.90   -     host cn204-ssd 
  1   ssd  1.50000  1.00000  931GiB  216GiB  716GiB 23.16 0.64  18         osd.1       
  9   ssd  1.50000  1.00000  447GiB  193GiB  254GiB 43.17 1.20  16         osd.9       
 21   ssd  1.50000  1.00000  931GiB  338GiB  594GiB 36.25 1.01  28         osd.21     
-40        1.50000        -  931GiB  264GiB  668GiB 28.33 0.79   -     host cn205-ssd 
 20   ssd  1.50000  1.00000  931GiB  264GiB  668GiB 28.33 0.79  22         osd.20     
-46        1.50000        -  931GiB  334GiB  597GiB 35.87 1.00   -     host cn206-ssd 
  7   ssd  1.50000  1.00000  931GiB  334GiB  597GiB 35.87 1.00  28         osd.7       
-52        6.00000        - 3.22TiB  952GiB 2.29TiB 28.89 0.80   -     host cn207-ssd 
  6   ssd  1.50000  1.00000  931GiB  338GiB  594GiB 36.26 1.01  28         osd.6       
  8   ssd  1.50000  1.00000  931GiB  170GiB  761GiB 18.28 0.51  14         osd.8       
 19   ssd  1.50000  1.00000  954GiB  251GiB  703GiB 26.31 0.73  21         osd.19     
 22   ssd  1.50000  1.00000  477GiB  193GiB  284GiB 40.39 1.12  16         osd.22     
-58              0        -      0B      0B      0B     0    0   -     host cn208-ssd 
-64        3.00000        - 1.40TiB  638GiB  792GiB 44.61 1.24   -     host cn209-ssd 
 23   ssd  1.50000  1.00000  954GiB  324GiB  630GiB 33.97 0.95  27         osd.23     
 24   ssd  1.50000  1.00000  477GiB  314GiB  163GiB 65.89 1.83  26         osd.24     
 -1              0        -      0B      0B      0B     0    0   - root default       
 -5              0        -      0B      0B      0B     0    0   -     host cn201     
 -3              0        -      0B      0B      0B     0    0   -     host cn202     
 -7              0        -      0B      0B      0B     0    0   -     host cn203     
                      TOTAL 20.1TiB 7.24TiB 12.9TiB 35.93                             
MIN/MAX VAR: 0.51/1.97  STDDEV: 13.95
 
Try adding more PGs now that you have larger drives. That is what I had to do when my Ceph pools were filling up...
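A minimal sketch of what that looks like, assuming a target of 256 PGs for ssdpool1; the right number depends on your OSD count and replica size, so run it through a PG calculator first. Note that on Luminous pg_num can be increased but never decreased, and pgp_num has to be raised separately to actually trigger the data movement:

Code:
root@cn201:~# ceph osd pool set ssdpool1 pg_num 256
root@cn201:~# ceph osd pool set ssdpool1 pgp_num 256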
 
You should upgrade your nodes so that all nodes carrying SATA and SSD OSDs have the same number of drives. Together with the replica count of your pools, that is how you get the best distribution of data across all drives.
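If you want to check what replica count your pools are actually using, you can query them directly (pool names taken from the ceph df output above):

Code:
root@cn201:~# ceph osd pool get ssdpool1 size
root@cn201:~# ceph osd pool get satapool1 size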

Check the weight of your OSDs: you have 500GB SSDs with a weight of 1.5, but normally you should not set the weight higher than the drive's actual capacity. For example, a 1TB SSD gets a weight of about 0.909988 when you add it through PVE, and that's correct.
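A sketch of how to correct that, assuming the 447-477GiB entries in your ceph osd df tree (osd.9, osd.10, osd.22, osd.24) are the remaining 500GB SSDs. The CRUSH weight should match the capacity in TiB, so roughly 0.46 for a 500GB drive and 0.91 for a 1TB drive; each change triggers rebalancing, so do them one at a time:

Code:
root@cn201:~# ceph osd crush reweight osd.10 0.46
root@cn201:~# ceph osd crush reweight osd.24 0.46

MAX AVAIL is calculated from the fullest OSD relative to its weight, so as long as the small drives still claim a weight of 1.5 they will keep filling up first and cap the pool's available space, no matter how many 1TB drives you add.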
 
