Wrong display of Ceph space utilization

Ayush

Member
Oct 27, 2023
Hello Team,

I have a 3-node cluster with Ceph as shared storage, and only 2 pools in that cluster. The GUI "Ceph" panel shows that 8.96 TB of the Ceph storage is used, while the CLI commands for the individual pools show a total utilisation of only about 4.3 TB.

So my question is: why is there a difference between the GUI and the CLI Ceph storage utilization figures?

ds 171:~# rbd -p Prod-VM du
NAME PROVISIONED USED
vm-100-disk-0 50 GiB 23 GiB
vm-101-disk-0 200 GiB 42 GiB
vm-103-disk-0 200 GiB 107 GiB
vm-105-disk-0 200 GiB 194 GiB
vm-106-disk-0 50 GiB 50 GiB
vm-107-disk-0 150 GiB 46 GiB
vm-108-disk-0 200 GiB 199 GiB
vm-109-disk-0 260 GiB 216 GiB
vm-110-disk-0 300 GiB 300 GiB
vm-111-disk-0 200 GiB 197 GiB
vm-112-disk-0 250 GiB 27 GiB
vm-115-disk-0 282 GiB 277 GiB
<TOTAL> 2.3 TiB 1.6 TiB
ds 171:~# rbd -p Big-VM du
NAME PROVISIONED USED
vm-102-disk-0 1000 GiB 451 GiB
vm-113-disk-0 1 TiB 1022 GiB
<TOTAL> 2.0 TiB 1.4 TiB
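
Summing the two pool totals gives the ~4.3 TB figure (a quick check with bc; it is the sum of the PROVISIONED columns):

echo "2.3 + 2.0" | bc   # 4.3 TiB provisioned across both pools
echo "1.6 + 1.4" | bc   # 3.0 TiB of data actually written by the images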
 
My Ceph version is:

ceph versions
{
    "mon": {
        "ceph version 17.2.7 (2dd3854d5b35a35486e86e2616727168e244f470) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.7 (2dd3854d5b35a35486e86e2616727168e244f470) quincy (stable)": 3
    },
    "osd": {
        "ceph version 17.2.7 (2dd3854d5b35a35486e86e2616727168e244f470) quincy (stable)": 9
    },
    "overall": {
        "ceph version 17.2.7 (2dd3854d5b35a35486e86e2616727168e244f470) quincy (stable)": 15
    }
}
 
Hello Team,

Can you please clarify why there is a difference between what is actually used and the Ceph usage shown in the GUI?
 
Use the command ceph df to check cluster data usage and data distribution among pools, then see this for the details explained.

Also, here is an excellent post in this forum giving a lot of insight/explanation.
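
For example (a minimal sketch; the pool name is taken from your output above):

ceph df                          # cluster-wide RAW usage plus per-pool STORED/USED
rbd -p Prod-VM du                # thin-provisioned usage per image in one pool
ceph osd pool get Prod-VM size   # the pool's replication factor ("size")

Comparing a pool's STORED value with its USED value (and with the RAW STORAGE line) makes the replication multiplier visible.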
 
@gfngfn256,

ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 27 TiB 18 TiB 8.9 TiB 8.9 TiB 32.78
TOTAL 27 TiB 18 TiB 8.9 TiB 8.9 TiB 32.78

--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 60 MiB 16 179 MiB 0 4.8 TiB
Prod-VM 11 128 1.5 TiB 422.90k 4.6 TiB 24.12 4.8 TiB
Big-VM 14 32 1.4 TiB 376.79k 4.2 TiB 22.51 4.8 TiB

So my question is: the total utilization for all VMs (per the rbd du output in my first post) is approx 4.3 TB, but ceph df shows 8.9 TB used. Why is there this difference in the numbers?
 
@gfngfn256, how can I correlate the following?


Math

For replicated pools it works like this:
example 4/2 (size/min_size) --> each gigabyte of actual data you put into the pool gets multiplied by "size", so by 4.
 
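Applying that rule to the ceph df output above: assuming both pools use the default 3/2 replication (size=3; ceph osd pool get Prod-VM size would confirm it), every gigabyte STORED is written three times across the OSDs, which reconciles the two sets of numbers (a quick check with bc):

echo "1.5 * 3" | bc           # Prod-VM: 1.5 TiB STORED x 3 = 4.5 TiB, ~ the 4.6 TiB USED
echo "1.4 * 3" | bc           # Big-VM:  1.4 TiB STORED x 3 = 4.2 TiB, = the 4.2 TiB USED
echo "(1.5 + 1.4) * 3" | bc   # 8.7 TiB, close to the 8.9 TiB RAW USED the GUI reports

The small remainder is rounding in the TiB figures plus the .mgr pool and allocation overhead. The ~4.3 TB from rbd du, by contrast, is the summed PROVISIONED size, not data on disk.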
