PGs Ceph

alanspa

Member
Apr 4, 2022
Hello

On a Ceph pool, the optimal number of PGs is reported as 512 (currently set to 256). See the screenshot.

The free space on that pool is about 40%.

If I set it to 512, is there a risk that the pool will fill up?
Is it recommended to do this over the weekend?

Thank you
 

Attachments

  • Screenshot 2024-02-02 090024.png
Hello,

Changing the PG number results in a considerable change in Ceph's topology. At the end you should be left with roughly the same used raw space, but during the transition it might go up slightly. At 40% you shouldn't worry about filling it, but I would advise doing this at a moment when the Ceph network is under as little stress as possible.
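For reference, a minimal sketch of the change itself; the pool name vm-pool below is just a placeholder for your actual pool:

```shell
# Check the current PG count (vm-pool is a placeholder pool name):
ceph osd pool get vm-pool pg_num

# Raise the PG count; recent Ceph releases (Nautilus and later)
# adjust pgp_num and rebalance gradually on their own:
ceph osd pool set vm-pool pg_num 512

# Watch the rebalance progress:
ceph -s
```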
 
Thank you for the reply.
In the unlikely event that the pool fills up, what would happen? Would all VMs be blocked? How can I stop the process or roll it back?

Although it is a remote possibility, I would not like to find myself unprepared if it were to occur.
 
I would not worry about filling the pool due to this operation. It simply will not happen if you are using ~40%.

Ceph will give you a warning (via email, if you have this set up) at around 85%. At around 92% all IO in the pool will be blocked to ensure data redundancy and integrity.
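If you want to check your current usage and the exact thresholds configured on your cluster (they are tunable, so your values may differ from the defaults):

```shell
# Per-pool and cluster-wide usage:
ceph df

# The configured capacity thresholds
# (nearfull_ratio / backfillfull_ratio / full_ratio):
ceph osd dump | grep ratio
```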

The long-term solution is to add more OSDs, but in an emergency you can temporarily set the size/min_size of the pool to 2/2, which will free around 1/3 of your used space. This won't lower the data redundancy, as you still keep two replicas of each object, but it will reduce the operational redundancy: Ceph will have fewer opportunities to self-heal in the event an OSD or node fails, and if Ceph cannot self-heal, IO will be blocked. This is a reasonable tradeoff in an emergency; the worst situation you can run into with Ceph is running out of space.
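In case it helps, the emergency change sketched above would look like this (vm-pool is again a placeholder):

```shell
# EMERGENCY ONLY: drop to two replicas to free space.
# Never set min_size below 2.
ceph osd pool set vm-pool size 2
ceph osd pool set vm-pool min_size 2

# Once the space pressure is resolved, restore the usual 3/2:
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2
```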
 
OK thank you.
The space now used is 60% and not 40% as you wrote.

Does anything change or do I have to worry about filling the pool?


As a possible approach, could I do as you say? Temporarily lower the redundancy to 2/2 to free up space, then increase the PGs to 512, and once the process completes, restore the redundancy to 3/2.

What do you say?
 
My bad. But yes, it won't fill. And no, don't switch to 2/2 unless there is an emergency, and under no circumstances set min_size to 1. Note that using more PGs results in more even usage of the storage across OSDs, which is a good thing, but it comes at the price of using more resources. In general, just follow the recommendations made by Ceph's autoscaler.
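To see what the autoscaler recommends, and optionally hand PG management over to it entirely (vm-pool is a placeholder pool name):

```shell
# Show the autoscaler's per-pool recommendations:
ceph osd pool autoscale-status

# Optionally let Ceph manage pg_num for the pool automatically:
ceph osd pool set vm-pool pg_autoscale_mode on
```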
 
