My pool's MAX AVAIL is not the full capacity in version 5.2

adamjosh
Hello,

I just completed a new setup of Proxmox VE 5.2 with 3 hosts and 18 OSDs. Unlike my previous installations, where I set up the cluster manually from the command line, this time I used the GUI to complete the cluster setup. Awesome :)


I finished the Ceph pool setup with the following settings:
Size/min: 3/2
pg_num: 512

However, my rbd pool does not show the full capacity of the 18 OSDs. The global AVAIL is 5008G, but the pool's MAX AVAIL is only 1585G.

Here is the output of "ceph df" and "ceph osd df tree". Did I configure the pool wrong, or is something else going on?


root@px13:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
5026G 5008G 18693M 0.36
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 6 0 0 1585G 0
root@px13:~# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME
-1 4.90842 - 5026G 18693M 5008G 0.36 1.00 - root default
-3 1.63614 - 1675G 6231M 1669G 0.36 1.00 - host px13
0 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 89 osd.0
1 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 81 osd.1
2 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 77 osd.2
3 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 79 osd.3
4 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 91 osd.4
5 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 95 osd.5
-5 1.63614 - 1675G 6231M 1669G 0.36 1.00 - host px14
6 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 108 osd.6
7 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 77 osd.7
8 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 80 osd.8
9 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 82 osd.9
10 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 80 osd.10
11 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 85 osd.11
-7 1.63614 - 1675G 6231M 1669G 0.36 1.00 - host px15
12 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 84 osd.12
13 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 82 osd.13
14 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 70 osd.14
15 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 100 osd.15
16 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 75 osd.16
17 hdd 0.27269 1.00000 279G 1038M 278G 0.36 1.00 101 osd.17
TOTAL 5026G 18693M 5008G 0.36
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
root@px13:~#
 

Attachments

  • Screen Shot 2018-05-30 at 7.29.34 PM.png
  • Screen Shot 2018-05-30 at 7.29.50 PM.png
279G * 18 / 3 = 1674G, minus (used + overhead) = ~1600G. I do not see the problem?
 
You use a replication of 3; this means your available space is divided by 3, and that is your usable space for data. Every OSD, by default, has an overhead of 1.5 GB, as the DB+WAL are counted too.
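As a rough sanity check of those numbers, here is a minimal sketch in Python. It assumes the pool's size=3 from the output above and that the default full ratio of 0.95 is kept as headroom; the exact Luminous calculation also considers how full the fullest OSD is projected to get, so treat this as an estimate only.

# Rough estimate of the rbd pool's MAX AVAIL; not the exact Ceph calculation.
raw_avail_gb = 5008      # AVAIL from the GLOBAL section of 'ceph df'
replica_size = 3         # the pool's 'size' setting
full_ratio = 0.95        # assumed default mon_osd_full_ratio headroom

max_avail_estimate = raw_avail_gb / replica_size * full_ratio
print(f"estimated MAX AVAIL: {max_avail_estimate:.0f} GB")  # ~1586 GB, close to the reported 1585G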
 
Dear dcsapak and Alwin,

Thank you for your explanation. My confusion is that my other cluster, running version 5.1 with the same spec and resources, shows a pool capacity of 4.91 TiB with 18 OSDs. Please see the following output and screenshots. Is my 5.2 pool setting correct?


root@px16:~# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
5026G 3888G 1137G 22.64
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 1 372G 9.70 1157G 96535
root@px16:~# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME
-1 4.90842 - 5026G 1137G 3888G 22.64 1.00 - root default
-3 1.63614 - 1675G 379G 1296G 22.63 1.00 - host px16
0 hdd 0.27269 1.00000 279G 68076M 212G 23.81 1.05 90 osd.0
1 hdd 0.27269 1.00000 279G 67310M 213G 23.54 1.04 89 osd.1
2 hdd 0.27269 1.00000 279G 65616M 215G 22.95 1.01 86 osd.2
4 hdd 0.27269 1.00000 279G 70296M 210G 24.58 1.09 93 osd.4
6 hdd 0.27269 1.00000 279G 61159M 219G 21.39 0.94 81 osd.6
17 hdd 0.27269 1.00000 279G 55775M 224G 19.50 0.86 73 osd.17
-5 1.63614 - 1675G 379G 1296G 22.65 1.00 - host px17
3 hdd 0.27269 1.00000 279G 60736M 219G 21.24 0.94 81 osd.3
5 hdd 0.27269 1.00000 279G 74208M 206G 25.95 1.15 98 osd.5
7 hdd 0.27269 1.00000 279G 64574M 216G 22.58 1.00 85 osd.7
9 hdd 0.27269 1.00000 279G 70214M 210G 24.55 1.08 92 osd.9
10 hdd 0.27269 1.00000 279G 59575M 221G 20.83 0.92 78 osd.10
11 hdd 0.27269 1.00000 279G 59388M 221G 20.77 0.92 78 osd.11
-7 1.63614 - 1675G 379G 1296G 22.63 1.00 - host px18
8 hdd 0.27269 1.00000 279G 61103M 219G 21.37 0.94 81 osd.8
12 hdd 0.27269 1.00000 279G 67382M 213G 23.56 1.04 89 osd.12
13 hdd 0.27269 1.00000 279G 62528M 218G 21.87 0.97 83 osd.13
14 hdd 0.27269 1.00000 279G 69854M 211G 24.43 1.08 92 osd.14
15 hdd 0.27269 1.00000 279G 64715M 216G 22.63 1.00 85 osd.15
16 hdd 0.27269 1.00000 279G 62636M 218G 21.90 0.97 82 osd.16
TOTAL 5026G 1137G 3888G 22.64
MIN/MAX VAR: 0.86/1.15 STDDEV: 1.63
root@px16:~#
 

Attachments

  • Screen Shot 2018-05-31 at 12.42.18 AM.png
  • Screen Shot 2018-05-31 at 12.42.36 AM.png
In each host's storage view there is 1.55 TiB of usage shown, but in total my VMs can consume the RBD maximum capacity of 1.55 TiB x 3, am I right?
You have replica 3, which means that if you write 1 GB of data, it gets replicated across the cluster 3 times and uses 3 GB.
MAX AVAIL is the maximum usable storage, not the raw one.
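To put numbers on that using the figures from the 5.1 cluster above (a minimal sketch; the gap to RAW USED is the per-OSD DB+WAL and metadata overhead mentioned earlier):

# Replica-3 write amplification: every GB stored in the pool consumes ~3 GB of raw capacity.
pool_used_gb = 372       # USED of the rbd pool in 'ceph df' on the 5.1 cluster
replica_size = 3

raw_consumed_gb = pool_used_gb * replica_size
print(f"{pool_used_gb} GB in the pool -> ~{raw_consumed_gb} GB raw")  # 1116 GB, close to the 1137G RAW USED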
 
