@AlexLup You are a life-saver! "ceph osd reweight-by-utilization" did exactly what I needed.
Follow-up question: Does Ceph consider 85% of raw capacity to be Full? Seems like that's how it's calculating the available space for the pools (factoring in 3/2 replication as well).
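For what it's worth, the 0.85 you're seeing is Ceph's default *nearfull* ratio; the default *full* ratio is 0.95 (check `ceph osd dump | grep ratio` on your cluster). The `MAX AVAIL` a pool reports is roughly: the headroom of the fullest OSD caps the whole pool, and the replica count divides the result. Here's a simplified sketch of that idea — the OSD sizes, usage numbers, and ratio below are illustrative, not Ceph's exact internal algorithm:

```shell
#!/bin/sh
# Rough sketch of how a pool's MAX AVAIL is bounded (illustrative only).
# Per-OSD headroom = full_ratio * size - used; because Ceph distributes
# writes across all OSDs, the OSD with the least headroom limits everyone,
# and replication divides the usable result.

FULL_RATIO=0.85   # using the nearfull default here; the full default is 0.95
REPLICAS=3        # pool size 3 (3/2 replication)

# Hypothetical OSDs, one per line: "<size_tb> <used_tb>"
RESULT=$(awk -v fr="$FULL_RATIO" -v rep="$REPLICAS" '
{
    headroom = fr * $1 - $2                  # raw TB this OSD can still take
    if (NR == 1 || headroom < min) min = headroom
    n++
}
END {
    raw_avail = min * n                      # fullest OSD caps the cluster
    printf "approx pool MAX AVAIL: %.1f TB", raw_avail / rep
}' <<EOF
4.0 2.0
4.0 3.2
4.0 2.5
EOF
)
echo "$RESULT"
```

Note how the second OSD (3.2 TB used) drags the whole pool down even though the others have plenty of room — which is exactly why `reweight-by-utilization` helped you.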
I'm stumped: I have 34TB of raw storage left, but the Ceph pools are reporting as full. They've hit some sort of arbitrary limit, and I can't see anything in the pool configuration that would lead to this state. One of my 21 OSDs has hit the full ratio — could that alone cause this?
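Yes — a single OSD hitting the full ratio will do exactly that: Ceph sets the cluster full flag and blocks writes to any pool that maps PGs onto that OSD, regardless of how much free space the rest of the cluster has. A typical way to confirm and recover (the `0.96` value is just an example of a temporary bump, not a recommendation):

```shell
# Confirm which OSD is full and what the ratios are set to
ceph health detail
ceph osd df
ceph osd dump | grep -i ratio

# Optionally raise the full ratio a little to unblock I/O temporarily
ceph osd set-full-ratio 0.96

# Then rebalance so the overloaded OSD drains below the ratio again
ceph osd reweight-by-utilization

# Once balanced, restore the default
ceph osd set-full-ratio 0.95
```

Raising the full ratio is only a stopgap to get writes flowing while the rebalance runs; the real fix is evening out the utilization (or adding capacity).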