MAX AVAIL in ceph df command is incorrect

kiszero

New Member
Dec 13, 2018
Dear All,
I'm a Ceph newbie.
Currently, I have a problem with the maximum available capacity of Ceph.

Information for the Ceph pools:

- pool size: 3

- full_ratio 0.95
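
For reference, these values can be re-checked on the cluster with something like the following (using the RGW data pool as the example pool):

Code:
ceph osd pool get default.rgw.buckets.data size
ceph osd dump | grep full_ratio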

Code:
GLOBAL:
    SIZE     AVAIL    RAW USED  %RAW USED  OBJECTS
    322 TiB  224 TiB  98 TiB    30.58      24.58 M

POOLS:
    NAME                        ID  QUOTA OBJECTS  QUOTA BYTES  USED     %USED  MAX AVAIL  OBJECTS   DIRTY    READ     WRITE    RAW USED
    .rgw.root                   3   N/A            N/A          6.5 KiB  0      59 TiB     19        19       21 KiB   79 B     19 KiB
    default.rgw.control         4   N/A            N/A          0 B      0      59 TiB     8         8        0 B      0 B      0 B
    default.rgw.data.root       5   N/A            N/A          0 B      0      59 TiB     0         0        0 B      0 B      0 B
    default.rgw.gc              6   N/A            N/A          0 B      0      59 TiB     0         0        0 B      0 B      0 B
    default.rgw.log             7   N/A            N/A          151 B    0      59 TiB     293       293      19 MiB   14 MiB   453 B
    default.rgw.intent-log      8   N/A            N/A          0 B      0      59 TiB     0         0        0 B      0 B      0 B
    default.rgw.meta            9   N/A            N/A          64 KiB   0      59 TiB     352       352      2.4 MiB  68 KiB   193 KiB
    default.rgw.usage           10  N/A            N/A          0 B      0      59 TiB     0         0        0 B      0 B      0 B
    default.rgw.users.keys      11  N/A            N/A          0 B      0      59 TiB     0         0        0 B      0 B      0 B
    default.rgw.users.email     12  N/A            N/A          0 B      0      59 TiB     0         0        0 B      0 B      0 B
    default.rgw.users.swift     13  N/A            N/A          0 B      0      59 TiB     0         0        0 B      0 B      0 B
    default.rgw.users.uid       14  N/A            N/A          0 B      0      59 TiB     0         0        0 B      0 B      0 B
    default.rgw.buckets.extra   15  N/A            N/A          0 B      0      59 TiB     0         0        0 B      0 B      0 B
    default.rgw.buckets.index   16  N/A            N/A          0 B      0      59 TiB     7842      7.84 k   282 MiB  360 MiB  0 B
    default.rgw.buckets.data    17  N/A            N/A          32 TiB   35.01  59 TiB     24567831  24.57 M  227 MiB  483 MiB  96 TiB
    default.rgw.buckets.non-ec  18  N/A            N/A          0 B      0      59 TiB     95        95       4.4 MiB  2.6 MiB  0 B


Problem: My Ceph cluster has 224 TiB available, but MAX AVAIL is only 59 TiB. This looks wrong to me; the correct MAX AVAIL in this case should be about ~70 TiB. Please help me with this case!
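
For reference, this is how I arrive at ~70 TiB: raw AVAIL divided by the pool size (3 replicas), multiplied by the full_ratio. A quick back-of-the-envelope check on the shell (ignoring imbalance and any overhead):

Code:
echo "224 / 3 * 0.95" | bc -l    # ~70.93 (TiB)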
 
Does your cluster have even sized disks?
 
Does your cluster have even sized disks?
These are the disk sizes in my cluster:
Code:
[root@mon1 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.8 TiB 30.09 0.98 35
1 hdd 5.45799 1.00000 5.5 TiB 1.2 TiB 4.3 TiB 21.75 0.71 28
2 hdd 5.45799 1.00000 5.5 TiB 1.3 TiB 4.2 TiB 23.96 0.78 31
3 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.5 TiB 35.80 1.16 37
4 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.5 TiB 35.93 1.17 43
5 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.8 TiB 30.02 0.98 32
6 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.4 TiB 37.08 1.21 37
21 hdd 5.45799 1.00000 5.5 TiB 1.5 TiB 3.9 TiB 27.82 0.91 32
24 hdd 5.45799 1.00000 5.5 TiB 1.4 TiB 4.1 TiB 25.32 0.82 32
27 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.8 TiB 30.19 0.98 40
7 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.5 TiB 35.73 1.16 40
8 hdd 5.45799 1.00000 5.5 TiB 1.4 TiB 4.1 TiB 25.16 0.82 34
9 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.9 TiB 28.95 0.94 34
10 hdd 5.45799 1.00000 5.5 TiB 1.4 TiB 4.0 TiB 26.45 0.86 29
11 hdd 5.45799 1.00000 5.5 TiB 1.7 TiB 3.8 TiB 31.18 1.01 37
12 hdd 5.45799 1.00000 5.5 TiB 1.5 TiB 3.9 TiB 28.06 0.91 46
13 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.8 TiB 30.06 0.98 36
22 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.9 TiB 28.79 0.94 33
25 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.5 TiB 36.00 1.17 37
28 hdd 5.45799 1.00000 5.5 TiB 1.9 TiB 3.6 TiB 34.65 1.13 32
14 hdd 5.45799 1.00000 5.5 TiB 1.4 TiB 4.0 TiB 26.28 0.86 25
15 hdd 5.45799 1.00000 5.5 TiB 1.8 TiB 3.6 TiB 33.63 1.09 43
16 hdd 5.45799 1.00000 5.5 TiB 1.3 TiB 4.2 TiB 22.91 0.75 34
17 hdd 5.45799 1.00000 5.5 TiB 1.4 TiB 4.0 TiB 26.39 0.86 38
18 hdd 5.45799 1.00000 5.5 TiB 2.2 TiB 3.3 TiB 39.45 1.28 41
19 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.4 TiB 37.07 1.21 37
20 hdd 5.45799 1.00000 5.5 TiB 1.7 TiB 3.8 TiB 31.09 1.01 31
23 hdd 5.45799 1.00000 5.5 TiB 2.2 TiB 3.3 TiB 39.53 1.29 44
26 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.8 TiB 30.16 0.98 32
29 hdd 5.45799 1.00000 5.5 TiB 2.1 TiB 3.4 TiB 38.58 1.26 44
30 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.5 TiB 35.85 1.17 38
31 hdd 5.45799 1.00000 5.5 TiB 2.1 TiB 3.4 TiB 38.15 1.24 40
32 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.5 TiB 35.96 1.17 41
33 hdd 5.45799 1.00000 5.5 TiB 1.7 TiB 3.7 TiB 31.33 1.02 36
34 hdd 5.45799 1.00000 5.5 TiB 1.8 TiB 3.6 TiB 33.48 1.09 36
35 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.9 TiB 28.78 0.94 30
36 hdd 5.45799 1.00000 5.5 TiB 1.3 TiB 4.1 TiB 23.99 0.78 28
37 hdd 5.45799 1.00000 5.5 TiB 1.4 TiB 4.1 TiB 25.20 0.82 24
38 hdd 5.45799 1.00000 5.5 TiB 1.8 TiB 3.7 TiB 32.59 1.06 36
41 hdd 5.45799 1.00000 5.5 TiB 1.2 TiB 4.3 TiB 21.95 0.71 31
39 hdd 5.45799 1.00000 5.5 TiB 1.2 TiB 4.2 TiB 22.82 0.74 28
40 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.4 TiB 37.25 1.21 40
42 hdd 5.45799 1.00000 5.5 TiB 1.8 TiB 3.7 TiB 32.51 1.06 38
43 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.9 TiB 28.74 0.94 29
44 hdd 5.45799 1.00000 5.5 TiB 1.9 TiB 3.6 TiB 34.79 1.13 39
45 hdd 5.45799 1.00000 5.5 TiB 1.5 TiB 4.0 TiB 27.61 0.90 32
46 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.4 TiB 37.22 1.21 37
47 hdd 5.45799 1.00000 5.5 TiB 1.7 TiB 3.8 TiB 31.24 1.02 34
48 hdd 5.45799 1.00000 5.5 TiB 1.9 TiB 3.6 TiB 34.73 1.13 38
49 hdd 5.45799 1.00000 5.5 TiB 1.4 TiB 4.1 TiB 25.10 0.82 25
50 hdd 5.45799 1.00000 5.5 TiB 1.4 TiB 4.0 TiB 26.26 0.85 29
51 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.4 TiB 37.51 1.22 39
52 hdd 5.45799 1.00000 5.5 TiB 1.1 TiB 4.3 TiB 20.38 0.66 26
53 hdd 5.45799 1.00000 5.5 TiB 1.9 TiB 3.6 TiB 34.54 1.12 38
54 hdd 5.45799 1.00000 5.5 TiB 1.6 TiB 3.8 TiB 30.07 0.98 32
55 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.5 TiB 35.91 1.17 43
56 hdd 5.45799 1.00000 5.5 TiB 1.4 TiB 4.1 TiB 25.05 0.82 25
57 hdd 5.45799 1.00000 5.5 TiB 1.1 TiB 4.3 TiB 20.41 0.66 25
58 hdd 5.45799 1.00000 5.5 TiB 2.0 TiB 3.5 TiB 35.87 1.17 40
59 hdd 5.45799 1.00000 5.5 TiB 1.7 TiB 3.8 TiB 30.31 0.99 37
TOTAL 327 TiB 101 TiB 227 TiB 30.73
MIN/MAX VAR: 0.66/1.29 STDDEV: 5.23
=> Given the information above, I wonder whether the "MAX AVAIL" in ceph df is incorrect. Please help me understand this case.
 
=> Given the information above, I wonder whether the "MAX AVAIL" in ceph df is incorrect. Please help me understand this case.
Well, that's what I am trying to do. Aside from that, we don't support RGW on our stack. And please post command line output with CODE tags (they can be found under the little plus).

You have 60 OSDs. With the overhead for the WAL/DB (assuming the OSDs are BlueStore; ~4% of the OSD size for block.db as a guideline), that is ~187 GB per OSD in your case. Those GB are not subtracted from the global RAW value, and they may grow in size too.

Code:
ceph daemon osd.0 perf dump | grep db
You can check the DB size per OSD with this command.
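
If you want just the relevant BlueFS counters for every OSD on a host, a small loop over the admin sockets also works (a sketch, assuming the default socket path and that jq is installed; db_total_bytes/db_used_bytes are the BlueFS DB counters):

Code:
# run on an OSD host; prints DB size/usage for each local OSD
for sock in /var/run/ceph/ceph-osd.*.asok; do
    echo "${sock}"
    ceph daemon "${sock}" perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes}'
done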

Your data distribution ranges between 1.2 and 2.2 TiB per OSD; with a standard deviation of 5.23 (a high value), the cluster is not evenly balanced. This leaves me with the question: do you have enough PGs? The PG calculation gives me 2432 PGs for the whole cluster.
https://ceph.com/pgcalc/
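
To see what is currently configured on the big data pool (and how the PGs and data spread over hosts and OSDs), something like this is a reasonable starting point:

Code:
ceph osd pool get default.rgw.buckets.data pg_num
ceph osd pool get default.rgw.buckets.data pgp_num
ceph osd df tree

Once the PG count is in a sane range, the mgr balancer module or ceph osd reweight-by-utilization can help even out the distribution.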
 
