too many PGs per OSD (256 > max 200)

silvered.dragon

Renowned Member
Nov 4, 2015
I don't know exactly when it started, but I suspect it was with the updates of the last two weeks. In Ceph I now have this warning message:
too many PGs per OSD (256 > max 200)

So I searched and found that it is a problem related to the latest Ceph updates and a wrong PG count, as described in this post: https://forum.proxmox.com/threads/a...i-get-too-many-pgs-per-osd-256-max-200.38636/

So this is not a critical problem, but I really want to fix it. I have a 3-node cluster with 35 VMs and CTs, all in a production environment, so it is really difficult for me to back up everything, destroy the pool, create a new one and then restore all the VMs. My question is: can I create a new pool with the correct number of PGs, move the VM disks one by one via "VM name --> Hardware --> Hard Disk --> Move disk --> new pool", and then destroy the old one? And what about the CTs? There is no option for moving their disks. Do I have to remove HA before doing this?
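For reference, from what I read the usual rule of thumb for sizing a new pool is:
total PGs = (number of OSDs x 100) / pool size, rounded up to the nearest power of two
Just as an example with made-up numbers (I still have to check my real OSD count): 12 OSDs with size 3 would give 12 x 100 / 3 = 400, which rounds up to pg_num = 512.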
I hope someone can give me some suggestions. Many thanks.
 
Search the forum, there are similar posts (search for "HEALTH_WARN too many PGs per OSD")
 
Yes, I have already searched, and it seems the solution is to back up everything and destroy the pool. But that is a problem because this is a production environment and it would require a lot of downtime. Is the approach I described in my first post possible?
 
Can someone please tell me the command for increasing
mon_max_pg_per_osd from 200 to 300? I will add more OSDs in the future to fix this properly, but for now I have good processors and a lot of memory in my servers, and the PGs are around 215 on each OSD, so I simply want to get rid of this warning.
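(For anyone who wants to check their own numbers: I am reading the per-OSD count from the PGS column of "ceph osd df" on one of the nodes.)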
thanks
 
In ceph.conf you can set this in the [global] section to change the warning threshold:

mon pg warn max per osd = 300
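
For example, the end of my [global] section ends up looking roughly like this (the first lines just stand for whatever your installation already has in there, only the last line is new):

[global]
    ... your existing auth/network settings ...
    mon pg warn max per osd = 300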
 
OK, is it
mon_max_pg_per_osd = 300
or
mon pg warn max per osd = 300
??
And which service do I have to restart to apply this?
many thanks
 
The correct procedure is:
nano /etc/ceph/ceph.conf
and add in [global]:
mon_max_pg_per_osd = 300 (this applies from Ceph 12.2.2; in Ceph 12.2.1 the option is called mon_pg_warn_max_per_osd = 300)
then restart the first node (I tried restarting only the mons, but that did not apply the config).
Be aware that 300 is still a reasonable value; do not exceed it, and do not add pools if you are already over 200 PGs per OSD.
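If you want to double-check that it took effect after the restart, something like the following should work (the mon id is normally the node name on Proxmox, so adjust <node-name> to yours):

ceph -s    # the HEALTH_WARN line should be gone
ceph daemon mon.<node-name> config show | grep pg_per_osd    # should show the new value 300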
bye