Strange Ceph Storage behavior

JeroenLWD

Renowned Member
Aug 24, 2016
Netherlands
Hello,

I can't get my head around this.

We forgot to remove an unattached disk from a VM, created a new one, and did the math; we should have had enough space even with the forgotten disk.
On the Windows VM we needed to extract multiple archives, so the new disk was growing fast.

Then suddenly I got an error from Ceph: OSD near full (95%), and almost directly after that the pool was full.

So I tried to remove the forgotten unattached disk, but it wouldn't remove because of the near-full OSD.
I ran "ceph osd reweight-by-utilization", which helped rebalance the OSDs, and I was able to remove the unattached disk.
I was very happy that the cluster got back into a healthy state.
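
For anyone who runs into the same situation, this is roughly the sequence of commands involved (the pool and image names below are just placeholders, not the actual ones from this cluster):

ceph health detail                  # shows which OSDs are nearfull/full
ceph osd df                         # per-OSD usage and PG count
ceph osd reweight-by-utilization    # lowers the reweight of the most-utilized OSDs (default threshold 120%)
rbd rm <pool>/vm-<vmid>-disk-<n>    # or remove the unused disk via the Proxmox GUI once the OSDs are below the full ratio again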

The part I don't understand is what I see in the storage metrics.
At the point the storage almost ran out of disk space, the pool size went from 1 TB down to 900 GB.
Then, after the rebalance and the cleanup, the pool size went up from 900 GB to 1.1 TB.

Is this normal? I have never seen this before and it took me by surprise.

[Attachment: Ceph-Upload.PNG]
This can happen if the PGs on the OSDs are not evenly distributed. Ceph stops accepting writes if an OSD gets full, since it cannot place any more data on that OSD (but might need to).
Can you show us the OSD overview, or the output of 'ceph osd df'?
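
As a rough side note (assuming a replicated pool): the free space Ceph reports for a pool ("MAX AVAIL") is derived from the OSD that would fill up first, not from the sum of all free space. That is why the pool appeared to shrink while one OSD ran full, and grew again after the rebalance. Both views can be compared with:

ceph df        # per-pool usage and MAX AVAIL (capped by the fullest OSD)
ceph osd df    # per-OSD utilization, to spot the OSD that is capping MAX AVAIL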
 
Hello DCsapak,

Thanks for your answer, and sorry for my late reaction.

The output of 'ceph osd df':

ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META      AVAIL    %USE   VAR   PGS  STATUS
 0  hdd    0.30009   0.95001  307 GiB  164 GiB  135 GiB  3.1 MiB  1021 MiB  143 GiB  53.36  1.17   31  up
 1  hdd    0.30009   1.00000  307 GiB  103 GiB   74 GiB  2.9 MiB  1021 MiB  204 GiB  33.51  0.73   17  up
 2  hdd    0.30009   1.00000  307 GiB  143 GiB  114 GiB  1.9 MiB  1022 MiB  164 GiB  46.49  1.02   26  up
 3  hdd    0.30009   1.00000  307 GiB  137 GiB  108 GiB   21 MiB  1003 MiB  170 GiB  44.57  0.97   26  up
 4  hdd    0.30009   1.00000  307 GiB  156 GiB  127 GiB  2.2 MiB  1022 MiB  152 GiB  50.63  1.11   29  up
 5  hdd    0.30009   1.00000  307 GiB  155 GiB  126 GiB  3.6 MiB  1020 MiB  152 GiB  50.56  1.11   29  up
 6  hdd    0.30009   1.00000  307 GiB  125 GiB   96 GiB  4.6 MiB  1019 MiB  183 GiB  40.53  0.89   22  up
 7  hdd    0.30009   1.00000  307 GiB  142 GiB  113 GiB  1.7 MiB  1022 MiB  165 GiB  46.30  1.01   26  up
 8  hdd    0.30009   0.95001  307 GiB  164 GiB  135 GiB   24 MiB  1000 MiB  143 GiB  53.41  1.17   32  up
 9  hdd    0.30009   1.00000  307 GiB  116 GiB   87 GiB  4.6 MiB  1019 MiB  192 GiB  37.69  0.82   20  up
10  hdd    0.30009   1.00000  307 GiB  108 GiB   79 GiB   23 MiB  1001 MiB  199 GiB  35.11  0.77   19  up
11  hdd    0.30009   1.00000  307 GiB  120 GiB   91 GiB  3.3 MiB  1021 MiB  187 GiB  39.10  0.86   21  up
12  hdd    0.30009   0.95001  307 GiB  196 GiB  167 GiB  3.6 MiB  1020 MiB  112 GiB  63.63  1.39   38  up
13  hdd    0.30009   1.00000  307 GiB  146 GiB  117 GiB  2.6 MiB  1021 MiB  161 GiB  47.54  1.04   27  up
14  hdd    0.30009   1.00000  307 GiB  133 GiB  104 GiB  1.8 MiB  1022 MiB  174 GiB  43.26  0.95   24  up
                       TOTAL  4.5 TiB  2.1 TiB  1.6 TiB  104 MiB    15 GiB  2.4 TiB  45.71
MIN/MAX VAR: 0.73/1.39  STDDEV: 7.66
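
The spread in this output (%USE from about 33% to 64%, MIN/MAX VAR 0.73/1.39) matches the uneven PG distribution mentioned above. One possible way to even this out automatically, assuming the cluster and all clients are on Luminous or newer, is the upmap balancer:

ceph balancer mode upmap    # move individual PGs with pg-upmap-items
ceph balancer on            # enable automatic balancing
ceph balancer status        # check the current plan and progress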
 
