Hello,
I can't get my head around this.
We forgot to remove an unattached disk from a VM, created a new one, and did the math; we should have had enough space even with the forgotten disk still there.
On the Windows VM we needed to extract multiple archives, so the new disk was growing fast.
Then suddenly I got an error from Ceph: an OSD near-full warning at 95%, and almost immediately after that the pool was full.
So I tried to remove the forgotten unattached disk, but it wouldn't remove because of the near-full OSD.
I ran "ceph osd reweight-by-utilization", which helped rebalance the OSDs, and after that I was able to remove the unattached disk.
I was very happy when the cluster got back to a healthy state.
The only part I don't understand is what I see in the storage metrics.
At the point the storage almost ran out of disk space, the pool size went from 1 TB down to 900 GB.
Then, after the rebalance and the cleanup, the pool size went back up from 900 GB to 1.1 TB.
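To be clear about which figure I mean: as far as I can tell it is the per-pool size/available number, like what ceph df reports:

    ceph df detail   # POOLS section: per-pool STORED, USED, %USED, MAX AVAIL
    # the "pool size" I describe above is the figure that dropped to ~900 GB while the
    # OSD was near full and came back to ~1.1 TB after the rebalance and cleanup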
Is this normal? I have never seen this before and it took me by surprise.