too few PGs per OSD (21 < min 30)

ChrisJM

Hello,

I have added an extra disk to each node and added them as OSDs; I now have a total of 3 disks per node with 3 nodes.

I am now getting the following warning in Ceph: "too few PGs per OSD (21 < min 30)".

Is there a way to resolve this?
 
Hi,

You can increase the number of placement groups (PGs) per pool (this should be possible to do online), but keep in mind that this results in a rebalance: the data will get shifted around, which increases the load on the network.

Choosing the appropriate number of PGs per pool can be quite tricky and depends on a few factors: the number of pools, and whether all OSDs are equal (if you mix SSDs and HDDs in a cluster and distribute the pools per device class, you need to calculate the PGs per device class).

The Ceph documentation provides quite a thorough discussion, and you can use the PG calculator to get a good overview of a sensible target value (see the quick calculation sketch after the links):

http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/
https://ceph.com/pgcalc/
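
For reference, the calculator's rule of thumb is roughly (number of OSDs × 100) / replica size, rounded to the nearest power of two. A quick sketch of that calculation (the 9 OSDs and size 3 below are just example values for a small single-pool cluster):

# Rule of thumb behind the PG calculator (single pool, one device class):
#   target_pg_num = (num_osds * 100) / pool_size, rounded to a power of two
num_osds=9      # example: 3 nodes with 3 OSDs each
pool_size=3     # replica count
echo $(( num_osds * 100 / pool_size ))   # prints 300 -> nearest power of two is 256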

Hope this helps
 
Thank you,

This is my current setup

3 OSDs per node and 3 nodes, so 9 OSDs in total, all SSDs and all 1 TB each.

[attached screenshot of the setup]


So what would you recommend I set it to?
 
OK, just to confirm: if I run this command

ceph osd pool create STORAGE 256

this will not break the current storage pool or stop any VMs from working?
 
This will not break the current storage pool or stop any VMs from working?
It depends on your current pools; use the Ceph PG calculator and check whether the PG count is enough. You can increase the PGs without any problems, but not decrease them. If you want to decrease the PGs, you need to create a new pool with the correct PG count and move all of your data into the new pool.
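
As a rough illustration of that "new pool" route (the pool names and PG count below are made up, and the exact migration path is an assumption; on Proxmox you would typically add the new pool as a separate RBD storage and then move each VM disk over rather than copying at the RADOS level):

# Hypothetical example: create a second pool with the desired (lower) PG count
ceph osd pool create STORAGE_NEW 128 128
ceph osd pool set STORAGE_NEW size 3
ceph osd pool application enable STORAGE_NEW rbd
# Add STORAGE_NEW as an RBD storage in Proxmox, then move each disk, e.g.:
#   qm move_disk <vmid> <disk> <new-storage-id>
# Once all VMs/CTs are migrated and verified, the old pool can be deleted.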
 
Hello,

Thanks for your help so far. I am still a little concerned, because looking at the script that the calculator generates, it creates a new pool:

ceph osd pool create STORAGE 256
ceph osd pool set STORAGE size 3
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done

It doesn't adjust the current one. Wouldn't this break it, so that I lose our entire production of VMs?
 
Thanks for your help so far. I am still a little concerned, because looking at the script that the calculator generates, it creates a new pool.
This script is meant for new installations (no pools exist).

Please check that you always use the docs for the installed version. ;)
I assume Ceph Luminous (12.2.10).
http://docs.ceph.com/docs/luminous/...nt-groups/#set-the-number-of-placement-groups

And I also have to adjust the pgp_num as well.
Yes, both need to be set. An increase in PGs will split the current PGs to produce new ones. As an alternative, you could create a new pool and migrate your VMs/CTs to the new one (one by one).
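
For the pool in this thread that would look something like the commands below (STORAGE and 256 are the pool name and the example target value discussed above; always double-check the target with the calculator first):

# Raise pg_num first, then pgp_num to the same value; the rebalance starts
# once pgp_num is increased, so expect data movement and network load.
ceph osd pool set STORAGE pg_num 256
ceph osd pool set STORAGE pgp_num 256
ceph -s    # watch the cluster while the new PGs are created and backfilled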
 
@Alwin
Do you plan to use/implement the pg_autoscaler?
Or the other way round ;-), why is it not used in the current version of PVE (no criticism, just want to know)?
 
It depends on your cluster setup, e.g. resource consumption (data will be moved).
Thanks for the link ... I "tried" it out and, because of my "very small" system, immediately ran into the same "warning" bug (i.e. "overcommitted..."), which will be fixed in 14.2.5. However, because of my very small system and already fitting PGs, "nothing really" happened.
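
For anyone reading along: on Ceph Nautilus (14.x) the autoscaler is an optional mgr module, so enabling it by hand looks roughly like this (the pool name is just an example, and "warn" can be used instead of "on" if you only want recommendations):

# Enable the autoscaler module cluster-wide (Nautilus and later)
ceph mgr module enable pg_autoscaler
# Let it manage (or only warn about) the PG count of a given pool
ceph osd pool set STORAGE pg_autoscale_mode on
ceph osd pool autoscale-status    # show current vs. suggested pg_num per pool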
 
