[SOLVED] ceph osd df not matching ceph df

jsterr

Well-Known Member
Jul 24, 2020
Hi, on a test cluster I noticed that ceph osd df is showing a higher %USE than ceph df shows for the pool.
We ran lots of rados bench jobs (I also deleted the test data with rados cleanup -p vm_nvme) and I also played around with Terraform a little, but the cluster is nearly empty while the OSDs are showing 75-85% usage. Is this a bug?

Code:
root@PMX5:~# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR   PGS  STATUS
 0   nvme  0.87329   1.00000  894 GiB  7.1 GiB  6.3 GiB   42 KiB  811 MiB  887 GiB  0.80  1.10  118      up
 1   nvme  0.87329   1.00000  894 GiB  7.6 GiB  7.0 GiB   79 KiB  603 MiB  887 GiB  0.85  1.17  120      up
 2   nvme  0.87329   1.00000  894 GiB  4.1 GiB  3.3 GiB   68 KiB  784 MiB  890 GiB  0.46  0.63  111      up
 3   nvme  0.87329   1.00000  894 GiB  6.6 GiB  5.4 GiB   35 KiB  1.1 GiB  888 GiB  0.73  1.01  120      up
13   nvme  0.72769   1.00000  745 GiB  6.2 GiB  5.6 GiB   67 KiB  629 MiB  739 GiB  0.83  1.14  108      up
 4   nvme  0.87329   1.00000  894 GiB  8.1 GiB  7.4 GiB   63 KiB  663 MiB  886 GiB  0.90  1.24  122      up
 5   nvme  0.87329   1.00000  894 GiB  6.4 GiB  5.7 GiB   78 KiB  697 MiB  888 GiB  0.71  0.98  120      up
 6   nvme  0.87329   1.00000  894 GiB  5.3 GiB  4.7 GiB   56 KiB  604 MiB  889 GiB  0.59  0.82  111      up
 7   nvme  0.87329   1.00000  894 GiB  6.2 GiB  5.5 GiB   43 KiB  719 MiB  888 GiB  0.70  0.96  126      up
14   nvme  0.72769   1.00000  745 GiB  4.9 GiB  4.3 GiB   32 KiB  599 MiB  740 GiB  0.66  0.91   98      up
 8   nvme  0.87329   1.00000  894 GiB  5.4 GiB  4.7 GiB   57 KiB  654 MiB  889 GiB  0.60  0.83  112      up
 9   nvme  0.87329   1.00000  894 GiB  6.4 GiB  5.6 GiB   47 KiB  819 MiB  888 GiB  0.71  0.98  125      up
10   nvme  0.87329   1.00000  894 GiB  5.9 GiB  5.1 GiB   39 KiB  862 MiB  888 GiB  0.66  0.91  119      up
11   nvme  0.87329   1.00000  894 GiB  7.8 GiB  7.0 GiB   42 KiB  826 MiB  886 GiB  0.87  1.20  119      up
12   nvme  0.72769   1.00000  745 GiB  6.3 GiB  5.4 GiB   32 KiB  957 MiB  739 GiB  0.84  1.16  102      up
                       TOTAL   13 TiB   94 GiB   83 GiB  788 KiB   11 GiB   13 TiB  0.73
MIN/MAX VAR: 0.63/1.24  STDDEV: 0.12


root@PMX5:~# ceph df
--- RAW STORAGE ---
CLASS    SIZE   AVAIL    USED  RAW USED  %RAW USED
nvme   13 TiB  13 TiB  94 GiB    94 GiB       0.73
TOTAL  13 TiB  13 TiB  94 GiB    94 GiB       0.73

--- POOLS ---
POOL             ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr              1    1  3.6 MiB        2   11 MiB      0    4.0 TiB
vm_nvme           2  512   11 GiB    3.04k   31 GiB   0.26    4.0 TiB
cephfs_data       3   32   17 GiB    4.41k   52 GiB   0.42    4.0 TiB
cephfs_metadata   4   32  763 KiB       23  2.3 MiB      0    4.0 TiB
 
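As a side note, the pool numbers in the ceph df output are internally consistent: for a replicated pool, USED is roughly STORED times the replication factor. A quick sketch, assuming size=3 pools (the pool size is not shown in the output, so 3 is an assumption):

```python
# Rough sanity check: with size=3 replicated pools, USED ≈ STORED × 3.
# Values (GiB) are taken from the ceph df output above; size=3 is assumed.
pools = {
    "vm_nvme":     {"stored": 11, "used": 31},
    "cephfs_data": {"stored": 17, "used": 52},
}

for name, p in pools.items():
    ratio = p["used"] / p["stored"]
    print(f"{name}: USED/STORED = {ratio:.2f}  (expect ~3 for size=3)")
```

The ratios come out near 3 (2.82 and 3.06; small deviations are expected from allocation overhead and rounding), so pool-level usage matches the raw usage once replication is accounted for.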
I think you read it wrong. If it really was 80 % full it would be "80.0" and not "0.80".
It is below 1 % right now.
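This is easy to verify from the table itself: %USE is simply RAW USE divided by SIZE. For osd.0, 7.1 GiB out of 894 GiB is about 0.79%, which matches the 0.80 shown. A quick check using the values from the output above:

```python
# %USE = RAW USE / SIZE * 100, using osd.0's values from the ceph osd df output.
raw_use_gib = 7.1
size_gib = 894

pct_use = raw_use_gib / size_gib * 100
print(f"{pct_use:.2f}%")  # well under 1%, not 79%
```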
 
Oh damn! You're right! I need more coffee ;-)
 
