Hey,
I don't think it's explicitly documented anywhere. It's more in the nature of how Ceph allocates space to objects in placement groups. You can have a cluster where most OSDs are only at 80% but one or a few OSDs are at 95% or so; the cluster is then reported as "full" and free space drops to 0.
And if you rebalance...
I think the "total space" is a sum of Ceph's available space. As I understand it, that value is not entirely fixed, because it depends on the fill grade of each individual OSD and on where the CRUSH algorithm can place new data.
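To see this per-OSD imbalance yourself, something like the following should work (a rough sketch; the exact output columns can differ between Ceph releases):

# per-OSD utilization; one nearly full OSD can mark the whole cluster as full
ceph osd df tree
# overall cluster capacity and usage summary
ceph df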
Hey,
maybe you missed the "wiping your disk" part of Dominic's answer. I have never had problems with disks showing up.
How to wipe the disk (search for "zap") depends on your actual version, i.e. whether you already use ceph-volume or the "old" ceph-disk tools. Be careful not to use the wrong device strings.
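If you are on a version with ceph-volume, a wipe could look roughly like this (just a sketch; /dev/sdX is a placeholder, double-check the device string before running anything):

# newer tooling (ceph-volume): remove LVM metadata and zap the device
ceph-volume lvm zap /dev/sdX --destroy
# older tooling (ceph-disk):
# ceph-disk zap /dev/sdX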
Hey,
I have seen this before in connection with ZFS and swap. In our case, the cause of a hanging df -h / qm list etc. has always been a process in D state.
Sometimes a simple
"systemctl restart pve-cluster" helps.
Whether in this case that is a) safe to do and b) actually helps, I can...
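To check for such stuck processes before restarting anything (a sketch only; adapt it to your setup):

# list processes in uninterruptible sleep (D state)
ps -eo pid,stat,comm | awk '$2 ~ /^D/'
# if pmxcfs/pve-cluster itself is affected, a restart sometimes clears it
systemctl restart pve-cluster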