CEPH: Increasing the PG (Placement Group) Count from 128 to 512

Zubin Singh Parihar

Hi folks,

Thanks for everybody's contributions on CEPH and Proxmox so far.

I'm looking for some instructions:
  • I'm running Proxmox 8.1 with Ceph 17.2.7 (Quincy) on a 3-node cluster.
  • Each node has 6 x 1TB Samsung 870 EVOs.
  • Each Proxmox node has a 1TB Samsung 980 Pro NVMe with 6 x 40GB WAL/DB LVM partitions.
  • I have a CEPH Public Network (for VMs to use) and a CEPH Sync Network (data replication), both running on 10GbE NICs.

I ran through a simple setup guide that showed how to create this setup; however, I don't remember it mentioning how to configure placement groups, and as a result I ended up with 128 PGs.

I'm filling up this cluster and it's now giving me WARNINGS that 128 PGs is too low.

I've got 18 x 1TB SSDs (OSDs) in total, and I plan on adding 4 more 1TB SSDs to each node in the future, bringing the total number of OSDs to 30. As a result, I was thinking that 512 PGs would be appropriate.
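
For reference, the old rule of thumb from the Ceph docs (pre-autoscaler) was roughly (OSD count x 100 target PGs per OSD) / replica count, rounded to a power of two. Assuming a replicated pool with size 3, that gives me:

Code:
18 OSDs x 100 / 3 =  600  -> nearest power of two: 512
30 OSDs x 100 / 3 = 1000  -> nearest power of two: 1024

So 512 at least looks like a sane ballpark for today; please correct me if I'm off here.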

Is it as simple as
Bash:
ceph osd pool set <pool_name> pg_num 512
ceph osd pool set <pool_name> pgp_num 512
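
From what I've read, on Quincy setting pg_num should be enough, since the cluster then raises pgp_num gradually in the background (please correct me if that's wrong). I was planning to watch the progress with something like:

Bash:
watch ceph -s
ceph osd pool get <pool_name> pg_num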

Does anybody have 'Step-by-Step' instructions on how to do this?
  • Do I do this one node at a time and then wait?
    • If it is one node at a time, do I migrate all VMs off the node?
  • I've also read somewhere about enabling the 'autoscaler'?
    • If so, how? (I've put the commands I found for this and the balancer in a sketch after this list.)
  • I also read somewhere else about enabling the 'balancer'? It looks to already be on...
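
In case it helps, here's what I've pieced together so far for the autoscaler and balancer (happy to be corrected); <pool_name> is just a placeholder:

Bash:
# see what the autoscaler thinks the optimal PG count is
ceph osd pool autoscale-status

# enable the autoscaler per pool ("warn" only reports, "on" also acts)
ceph osd pool set <pool_name> pg_autoscale_mode on

# check whether the balancer is already running
ceph balancer status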

Your help and guidance would be greatly appreciated!
 
I just noticed this feature in Proxmox 8.1

Proxmox Node --> Ceph --> Pools
Select a Pool and click --> Edit

[Screenshot: the pool's Edit dialog]

I'm guessing I could just do it from here on each node.
 
I'm guessing I could just do it from here on each node.
This is a pool wide setting. So doing it once is enough. Also, since this is the only pool (.mgr can be ignored), set the target_ratio. It is a weight and with only one pool having it set, it will always result in 100%. This tells the autoscaler that you expect this pool to take up all the available space in the cluster. Otherwise the autoscaler will take the current space usage of the pool to calculate the optimal PG_num, and that might change again in the future.
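
If you prefer the CLI over the GUI, the equivalent should be along these lines (the GUI field "Target Ratio" should map to the pool's target_size_ratio, and <pool_name> is a placeholder):

Bash:
ceph osd pool set <pool_name> target_size_ratio 1.0
# verify what the autoscaler now plans to do
ceph osd pool autoscale-status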
 
Hi @aaron
Thanks for your input. I have 3 questions for you though:
1. What do you mean "(.mgr can be ignored)"
2. Shouldn't the "target_ratio" be "1.0" given that it's exactly the same hardware?
3. Because it's all the same hardware, it looks like I don't need to adjust the 'Autoscaler', but I don't see an option here, and if I needed to, where would I adjust it?

Thanks in Advance!
 
1. What do you mean "(.mgr can be ignored)"
The .mgr pool is used by Ceph internally and will not use up a lot of space, so there is no need to include it in the autoscaler's decision process.
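
If you are curious, a quick look at the per-pool usage shows how little it takes:

Bash:
ceph df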

2. Shouldn't the "target_ratio" be "1.0" given that its exactly all the same hardware?
The autoscaler helps in deciding which pg_num a pool should get. In the end it depends on the number of OSDs you have in the cluster and the expected space usage of each individual pool.
Alternatively, there is the target_size, which you can use if you expect a pool to not grow beyond a certain size. I prefer the target_ratio though, as it is a weight. With only one pool to consider, any target_ratio will result in 100% for that pool.
If you have two pools and want to tell the autoscaler that both will use approximately 50%, you can assign the same target_ratio to both pools. From a human point of view, choosing values between 0.0 and 1.0 or 0 to 100 is a good idea, so that it is easier to think in percentages.
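
As a sketch for the two-pool case (the pool names here are just examples), giving both pools the same weight tells the autoscaler to plan for a 50/50 split:

Bash:
ceph osd pool set pool_a target_size_ratio 1
ceph osd pool set pool_b target_size_ratio 1
# the EFFECTIVE RATIO column should then show about 0.5 for each pool
ceph osd pool autoscale-status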

3. Because its all the same hardware, it looks like I don't need to adjust the 'Autoscaler', but I don't see an option here, and if i needed to, where would I adjust it?
I think there is a misunderstanding; this has nothing to do with the hardware being the same or not between the nodes. The autoscaler is really only there to keep an eye on the PGs configured for a pool and to react more or less automatically.
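
To check or change it for an existing pool on the CLI, something like this should do:

Bash:
ceph osd pool get <pool_name> pg_autoscale_mode
ceph osd pool set <pool_name> pg_autoscale_mode on   # or "warn" / "off"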

Edit: fixing sentence to make sense
 