Ceph external pool wrong used space

ddkargas

Renowned Member
Dec 13, 2014
Greece
www.uowm.gr
Hi, I have installed two clusters with Proxmox: one for VMs only and one only as a Ceph cluster.
The Ceph cluster is connected as external RBD storage to the cluster running the VMs.
After creating a new pool on the Ceph cluster and moving everything to it, I connected this pool to PVE, but
the used space it reports is wrong.

With ceph df on the Proxmox VM cluster I get this:

Code:
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED
    hdd       96 TiB     61 TiB     35 TiB       35 TiB         36.84
    TOTAL     96 TiB     61 TiB     35 TiB       35 TiB         36.84

POOLS:
    POOL         ID     STORED     OBJECTS     USED       %USED     MAX AVAIL
    ceph_vms     41     11 TiB       2.97M     34 TiB     38.79        18 TiB

but in the GUI I get this for the pool:

Usage -> 65.53% (33.71 TiB of 51.44 TiB)

Am I doing something wrong? Can someone explain this to me?

Thanks
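
For reference, the GUI percentage looks like it is derived from the pool-level USED and MAX AVAIL values (which already include replication), while %RAW USED is raw used divided by raw capacity. Below is a quick arithmetic check with the rounded numbers from the ceph df output above; the formula attributed to the GUI here is only a guess from the numbers, not taken from the Proxmox code:

Code:
# Rough check of where the two percentages could come from, using the
# (rounded) values reported by `ceph df` above. The "GUI-style" formula
# below is an assumption based on the numbers, not the actual Proxmox code.

TIB = 1024 ** 4

used      = 34 * TIB   # pool USED (after replication)
max_avail = 18 * TIB   # pool MAX AVAIL
raw_used  = 35 * TIB   # cluster RAW USED
raw_total = 96 * TIB   # cluster SIZE

# Pool-level view: used / (used + max_avail) -> roughly 65%, like the GUI
gui_like = used / (used + max_avail) * 100

# Cluster-level view: raw used / total size -> roughly 37%, like %RAW USED
raw_pct = raw_used / raw_total * 100

print(f"pool-level usage:    {gui_like:.1f}%")   # ~65.4 (GUI shows 65.53)
print(f"cluster-level usage: {raw_pct:.1f}%")    # ~36.5 (ceph df shows 36.84)

The 51.44 TiB total shown in the GUI also roughly matches USED + MAX AVAIL (34 + 18 = 52 TiB), which fits the same reading.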
 
You may be right, but I think there is an error. Take a look, I think this is abnormal.

I think the correct usage should be 37%, not 65%.

I have the same error with the reported storage usage.

The ceph df detail output from the Proxmox Ceph cluster is:

Code:
# ceph df detail

RAW STORAGE:
CLASS   SIZE   AVAIL    USED   RAW USED  %RAW USED
hdd   96 TiB  61 TiB  35 TiB     35 TiB      36.84
TOTAL 96 TiB  61 TiB  35 TiB     35 TiB      36.84

POOLS:
POOL       ID    STORED    OBJECTS    USED      %USED    MAX AVAIL    QUOTA OBJECTS    QUOTA BYTES    DIRTY    USED COMPR    UNDER COMPR
ceph_vms   41    11 TiB    2.97M      34 TiB    38.80    18 TiB       N/A              N/A            2.97M    0 B           0 B

On the Proxmox VM cluster, rados df reports:

Code:
# rados df

POOL_NAME   USED     OBJECTS   CLONES   COPIES    MISSING_ON_PRIMARY   UNFOUND   DEGRADED   RD_OPS      RD       WR_OPS     WR       USED COMPR   UNDER COMPR
ceph_vms    34 TiB   2974289   2838     8922867   0                    0         0          156075123   17 TiB   26474392   12 TiB   0 B          0 B

total_objects 2974289
total_used 35 TiB
total_avail 61 TiB
total_space 96 TiB
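
Side note: the jump from 11 TiB STORED to roughly 34 TiB USED is what a replicated pool with size 3 would produce, and the COPIES column in rados df points the same way. A quick sanity check with the numbers copied from the outputs above:

Code:
# Sanity check: does 3x replication explain STORED (11 TiB) vs USED (34 TiB)?
# All numbers are copied from the `ceph df detail` / `rados df` outputs above.

objects = 2_974_289      # OBJECTS in rados df
copies  = 8_922_867      # COPIES  in rados df

replication = copies / objects
print(f"copies per object: {replication:.2f}")        # 3.00 -> size=3 pool

stored_tib = 11          # STORED in ceph df detail (rounded)
print(f"expected USED: ~{stored_tib * replication:.0f} TiB")
# ~33 TiB; the 34 TiB reported is consistent once the rounding of 11 TiB is considered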


But in the GUI the pool details show:

Καταγραφή.PNG (attached screenshot of the pool usage in the GUI)
 
Which Ceph version are you running? And is your cluster healthy?
 
Which Ceph version are you running? And is your cluster healthy?

I think it is all OK: the version is 14.2.2 and yes, the cluster is HEALTH_OK.


Code:
  cluster:
    id:
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cmon1,cmon2,cmon3 (age 4d)
    mgr: cmon1(active, since 4d), standbys: cmon2, cmon3
    osd: 56 osds: 56 up (since 4d), 56 in

  data:
    pools:   1 pools, 2048 pgs
    objects: 2.97M objects, 11 TiB
    usage:   35 TiB used, 61 TiB / 96 TiB avail
    pgs:     2048 active+clean




Code:
{
    "mon": {
        "ceph version 14.2.2 (a887fe9a5d3d97fe349065d3c1c9dbd7b8870855) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.2 (a887fe9a5d3d97fe349065d3c1c9dbd7b8870855) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.2 (a887fe9a5d3d97fe349065d3c1c9dbd7b8870855) nautilus (stable)": 56
    },
    "mds": {},
    "overall": {
        "ceph version 14.2.2 (a887fe9a5d3d97fe349065d3c1c9dbd7b8870855) nautilus (stable)": 62
    }
}
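
If you want to script that check, ceph versions prints JSON (as shown above), so it is easy to confirm that every daemon is on the same release before trusting the per-pool statistics. A minimal sketch, assuming the ceph CLI and an admin keyring are available on the node where it runs:

Code:
#!/usr/bin/env python3
# Sketch: confirm all Ceph daemons report the same version. Assumes the
# `ceph` CLI is installed and can reach the cluster (as on a Proxmox/Ceph node).

import json
import subprocess

out = subprocess.check_output(["ceph", "versions"], text=True)
versions = json.loads(out)   # the command prints JSON like the output above

releases = set()
for daemon_type, counts in versions.items():
    if daemon_type == "overall" or not counts:
        continue
    for version_string, count in counts.items():
        print(f"{daemon_type}: {count} daemon(s) on {version_string}")
        releases.add(version_string)

if len(releases) == 1:
    print("All daemons run the same Ceph version.")
else:
    print("WARNING: mixed versions detected, finish the upgrade first.")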
 
A patch is available and an updated package should follow soon.
 
The patch is packaged in 'libpve-storage-perl: 6.0-8'.

I have the same problem. Do I need to reboot after updating the package?
The new storage calculation will only work correctly if the Ceph cluster is on version 14.2.2 and all OSDs use the new on-disk format. If that is not the case, the cluster needs to be updated and the OSDs need to be either a) recreated or b) repaired with the ceph-bluestore-tool.
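
If anyone wants to verify the package on their nodes before or after the update, here is a small sketch that checks whether libpve-storage-perl is at least 6.0-8 (the version mentioned above), using dpkg's own version comparison:

Code:
#!/usr/bin/env python3
# Sketch: check whether libpve-storage-perl on this node is at least 6.0-8,
# the version said above to contain the patch. Run on the PVE node itself.

import subprocess

PKG = "libpve-storage-perl"
WANTED = "6.0-8"

installed = subprocess.check_output(
    ["dpkg-query", "-W", "-f=${Version}", PKG], text=True
).strip()

# `dpkg --compare-versions a ge b` exits 0 when a >= b
patched = subprocess.run(
    ["dpkg", "--compare-versions", installed, "ge", WANTED]
).returncode == 0

print(f"{PKG} {installed}: {'contains the patch' if patched else 'needs an update'}")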
 
