Ceph - why does the total amount increase when deleting VMs?

grefabu

Hi,

I deleted some VMs and their images from our Ceph pool. In the summary I see the used space fall, but the total rises?
[Screenshot: Ceph pool summary showing used space falling while the total rises]
The environment:

Three nodes with 5 OSDs each; each OSD has 1.75 TB of space.
One productive Ceph pool across all OSDs: size 3/2

So the overall amount of raw space is roughly 26 TB, and the pool should have around 8 TB (a little bit less, because I have a second, non-productive pool, but with nearly no data on it).

Why does the total amount rise? I don't understand it.
And I want to move a 2 TB VM to this pool, is this possible?
 
There are a few things that can cause this behavior. The available space calculation is a bit more complicated than on a simple file system.

IIRC, Ceph also takes into account whether any OSDs are about to become full when it calculates the available space.

You might have a bit of an imbalance in how much data is stored on each OSD. Could you please post the output of ceph osd df in [code][/code] tags?
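
To illustrate why the displayed total can move around, here is a rough sketch (not Ceph's actual code, and the function name is mine) of how the MAX AVAIL of a replicated pool is usually estimated: the fullest OSD the pool maps to is the limiting factor, projected through its weight share and divided by the replica count.
Code:
# Rough approximation of how Ceph estimates MAX AVAIL for a replicated pool.
# Simplifying assumption: the pool is capped by its fullest OSD, projected
# through that OSD's share of the CRUSH weight, divided by the replica count.
def estimated_max_avail(osds, replicas=3):
    """osds: list of (crush_weight, avail_bytes) for every OSD the pool uses."""
    total_weight = sum(weight for weight, _ in osds)
    # Each OSD stores roughly weight/total_weight of the pool's data, so the
    # pool can only grow by avail / (weight/total_weight) before it fills up.
    projected = [avail / (weight / total_weight) for weight, avail in osds]
    # The most constrained OSD caps the pool; replication divides the result.
    return min(projected) / replicas

Since (as far as I know) the summary shows total = used + available, anything that frees space on the fullest OSD (deleting images, rebalancing, PG remaps) raises MAX AVAIL and therefore the displayed total, even though the raw capacity never changed.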
 
Hi,

thank you for the reply. Yes, Ceph is a little bit more complicated than a usual filesystem.

ceph osd df:
Code:
root@ph-pve005:~# ceph osd df 
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR  PGS STATUS 
 0   ssd 1.74609  1.00000 1.7 TiB 552 GiB 550 GiB 3.5 MiB 2.0 GiB 1.2 TiB 30.85 1.33  90     up 
 1   ssd 1.74609  1.00000 1.7 TiB 384 GiB 382 GiB 7.6 MiB 1.4 GiB 1.4 TiB 21.45 0.93  75     up 
 2   ssd 1.74609  1.00000 1.7 TiB 419 GiB 417 GiB 8.8 MiB 1.5 GiB 1.3 TiB 23.43 1.01  52     up 
 3   ssd 1.74609  1.00000 1.7 TiB 390 GiB 388 GiB 7.5 MiB 1.4 GiB 1.4 TiB 21.79 0.94  73     up 
14   ssd 1.74609  1.00000 1.7 TiB 323 GiB 321 GiB 6.4 MiB 1.5 GiB 1.4 TiB 18.05 0.78  74     up 
 5   ssd 1.74609  1.00000 1.7 TiB 437 GiB 435 GiB 8.1 MiB 1.5 GiB 1.3 TiB 24.42 1.06  69     up 
 6   ssd 1.74609  1.00000 1.7 TiB 419 GiB 417 GiB 1.9 MiB 1.5 GiB 1.3 TiB 23.41 1.01  74     up 
 7   ssd 1.74609  1.00000 1.7 TiB 422 GiB 420 GiB 2.6 MiB 1.5 GiB 1.3 TiB 23.58 1.02  85     up 
11   ssd 1.74609  1.00000 1.7 TiB 437 GiB 436 GiB 8.7 MiB 1.5 GiB 1.3 TiB 24.45 1.06  80     up 
12   ssd 1.74609  1.00000 1.7 TiB 355 GiB 354 GiB 2.2 MiB 1.2 GiB 1.4 TiB 19.86 0.86  72     up 
 4   ssd 1.74609  1.00000 1.7 TiB 582 GiB 580 GiB 5.8 MiB 2.0 GiB 1.2 TiB 32.54 1.41  87     up 
 8   ssd 1.74609  1.00000 1.7 TiB 467 GiB 465 GiB 9.2 MiB 1.5 GiB 1.3 TiB 26.10 1.13  72     up 
 9   ssd 1.74609  1.00000 1.7 TiB 341 GiB 338 GiB 1.5 MiB 2.2 GiB 1.4 TiB 19.05 0.82  68     up 
10   ssd 1.74609  1.00000 1.7 TiB 262 GiB 261 GiB 4.6 MiB 1.6 GiB 1.5 TiB 14.67 0.63  67     up 
13   ssd 1.74609  1.00000 1.7 TiB 419 GiB 417 GiB 8.7 MiB 1.6 GiB 1.3 TiB 23.41 1.01  82     up 
                    TOTAL  26 TiB 6.1 TiB 6.0 TiB  87 MiB  24 GiB  20 TiB 23.14                 
MIN/MAX VAR: 0.63/1.41  STDDEV: 4.40
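
For what it's worth, the VAR column above is just each OSD's %USE divided by the cluster-average %USE from the TOTAL row (23.14), so the 0.63/1.41 spread means OSD 4 holds more than twice as much data as OSD 10. A quick check of the two extremes, assuming that straightforward ratio:
Code:
# Sanity check of the MIN/MAX VAR values in the output above:
# VAR = OSD %USE / average %USE (23.14 from the TOTAL row).
avg_use = 23.14
print(round(32.54 / avg_use, 2))  # OSD 4  -> 1.41, the fullest OSD
print(round(14.67 / avg_use, 2))  # OSD 10 -> 0.63, the emptiest OSD

OSD 4 at 32.54% used (about 1.2 TiB free) is what caps the pool's available space; letting the balancer module even things out should push the usable space (and the displayed total) up a bit.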
 
