[SOLVED] Ceph showing 50% usage but all pools are empty

G0ldmember

I played around with a nested Proxmox instance and set up a Ceph cluster there with 3 nodes and 3 OSDs.

ceph df shows 50% usage although all the pools are empty.

Can I clean that up somehow?

Code:
# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    600 GiB  300 GiB  300 GiB   300 GiB      50.05
TOTAL  600 GiB  300 GiB  300 GiB   300 GiB      50.05
 
--- POOLS ---
POOL                  ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr                   1    1  577 KiB        2  1.7 MiB      0     90 GiB
cephfs_data_data       2   32      0 B        0      0 B      0     90 GiB
cephfs_data_metadata   3   32   36 KiB       22  194 KiB      0     90 GiB
volumes                5   32      0 B        0      0 B      0     90 GiB

Code:
# rados df
POOL_NAME                USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS       RD  WR_OPS       WR  USED COMPR  UNDER COMPR
.mgr                  1.7 MiB        2       0       6                   0        0         0     539  1.0 MiB     419  6.4 MiB         0 B          0 B
cephfs_data_data          0 B        0       0       0                   0        0         0       0      0 B       0      0 B         0 B          0 B
cephfs_data_metadata  194 KiB       22       0      66                   0        0         0      73  117 KiB     110   81 KiB         0 B          0 B
volumes                   0 B        0       0       0                   0        0         0       0      0 B       0      0 B         0 B          0 B

total_objects    24
total_used       300 GiB
total_avail      300 GiB
total_space      600 GiB
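
The pool-level USED figures only reflect object data stored in the pools, while RAW USED is what the OSDs (BlueStore) report as allocated on their devices, so the discrepancy has to come from the OSD side. A rough way to narrow that down is sketched below; osd.0 is just an example ID, and the same metadata query would be run for each OSD:

Code:
# Per-OSD breakdown of the raw usage:
ceph osd df tree

# Size BlueStore believes the backing device has, per OSD:
ceph osd metadata 0 | grep bluestore_bdev_size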
 
Hello, could you provide us with the output of ceph osd df tree and ceph -s?

Did the OSDs contain any data before? In that case they could still be catching up.
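
If the OSDs were still catching up, that would show up as recovery or backfill activity. A quick way to check, sketched here assuming the Ceph CLI is available on one of the nodes:

Code:
# Cluster status and PG summary; recovery/backfill would be listed here:
ceph -s
ceph pg stat

# Any PG lines not containing active+clean (plus the dump header) remain:
ceph pg dump pgs_brief 2>/dev/null | grep -v 'active+clean'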
 
I resized the OSDs from 100G to 200G each. Maybe it has to do with that? The cluster state actually looks healthy.


Code:
# ceph osd df tree
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME     
-1         0.29306         -  600 GiB  300 GiB   25 MiB  51 KiB  266 MiB  300 GiB  50.05  1.00    -          root default   
-5         0.09769         -  200 GiB  100 GiB  8.3 MiB  17 KiB  140 MiB  100 GiB  50.07  1.00    -              host pvetest1
 1    hdd  0.09769   1.00000  200 GiB  100 GiB  8.3 MiB  17 KiB  140 MiB  100 GiB  50.07  1.00   97      up          osd.1 
-3         0.09769         -  200 GiB  100 GiB  8.3 MiB  17 KiB   63 MiB  100 GiB  50.04  1.00    -              host pvetest2
 0    hdd  0.09769   1.00000  200 GiB  100 GiB  8.3 MiB  17 KiB   63 MiB  100 GiB  50.04  1.00   97      up          osd.0 
-7         0.09769         -  200 GiB  100 GiB  8.3 MiB  17 KiB   64 MiB  100 GiB  50.04  1.00    -              host pvetest3
 2    hdd  0.09769   1.00000  200 GiB  100 GiB  8.3 MiB  17 KiB   64 MiB  100 GiB  50.04  1.00   97      up          osd.2 
                       TOTAL  600 GiB  300 GiB   25 MiB  52 KiB  266 MiB  300 GiB  50.05                                   
MIN/MAX VAR: 1.00/1.00  STDDEV: 0.02

Code:
# ceph -s
  cluster:
    id:     906f161d-bb2e-4fc9-b23f-8def930e3e49
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum pvetest1,pvetest2,pvetest3 (age 24m)
    mgr: pvetest3(active, since 24m), standbys: pvetest1, pvetest2
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 24m), 3 in (since 3d)
 
  data:
    volumes: 1/1 healthy
    pools:   4 pools, 97 pgs
    objects: 24 objects, 600 KiB
    usage:   300 GiB used, 300 GiB / 600 GiB avail
    pgs:     97 active+clean

Maybe I just have to remove the OSDs and recreate them?
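
If the disk resize is really the cause, there are two ways this is usually handled; both are only sketches, not a confirmed fix for this cluster. Option 1 lets BlueStore grow into the enlarged disk in place (OSD IDs 0-2 and the mount path follow the output above); option 2 is the destroy-and-recreate route suggested above, where /dev/sdb is just a placeholder for the actual disk backing the OSD on each node. Either way, handle one OSD at a time and wait for HEALTH_OK in between.

Code:
# Option 1 (sketch): expand BlueStore into the grown device, one OSD at a time.
systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0

# Option 2 (sketch): destroy and recreate the OSD, as suggested above.
# /dev/sdb is a placeholder for the real OSD disk on this node.
ceph osd out 0
systemctl stop ceph-osd@0
pveceph osd destroy 0 --cleanup
pveceph osd create /dev/sdb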
 
