Adding nodes and PG report question

nttec

We are in the process of adding 3 more nodes: 2 for Ceph and 1 for compute. The 2 new Ceph nodes have a combined total of 10 new OSDs. Is it OK to add all 10 at the same time, then increase the PG count after adding all of them?


And if that is possible, what do we do if it reports too few PGs during the process? Say we added 4 and it reported too few PGs, what should we do?
 
IMO, I would set the proper PG count for the planned expansion before adding the OSDs -- see PGCalc. Wait for all PGs to be active+clean, then add the OSDs one at a time (if in production) or all together (if you really want to see a backfill storm on your network). YMMV.
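A quick way to confirm everything has settled before each step (just a status check, it changes nothing):

Code:
# all PGs should report active+clean before you continue
ceph pg stat
# expect something like: 512 pgs: 512 active+clean; ...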

EDIT: I should also clarify that if in production, 'pg_num' and 'pgp_num' should be increased in small increments. For instance, if going from 2048 --> 4096, do so in increments of 64 until you reach the desired count. Again, if you prefer a storm in your cluster, you may go directly to the desired count.
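As a rough sketch of that incremental approach (the pool name "rbd" and the 2048 starting point are assumptions -- substitute your own pool and counts):

Code:
# assumes a replicated pool named "rbd" currently at pg_num 2048
for pg in $(seq 2112 64 4096); do
    ceph osd pool set rbd pg_num $pg
    ceph osd pool set rbd pgp_num $pg
    # let the new PGs peer and settle before the next increment
    while ! ceph health | grep -q HEALTH_OK; do sleep 30; done
done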
 
nttec said:
And if that is possible, what do we do if it reports too few PGs during the process? Say we added 4 and it reported too few PGs, what should we do?

The solution is the same -- increase the PG count (pg_num and pgp_num) to the appropriate number for the planned OSD count.
 
@RokaKen -
I don't know if Ceph would even let me.

In the beginning, when I set 1024 for 15 OSDs, it wouldn't let me; the error said there were too many PGs. That is why it is currently set to 512. I don't think it will allow me to expand to 1024 before adding those OSDs.
This was with Proxmox 5.2. We have since upgraded to 5.4, so maybe it's different now?
 
Yes, Ceph Luminous tries to prevent you from applying a bad configuration, but in this case we understand that we must temporarily exceed some limits while we expand the cluster. So, in my cluster I have:

Code:
# ceph --admin-daemon /var/run/ceph/ceph-mgr.*.asok config show | grep per_osd
    "mon_max_pg_per_osd": "250",
    "mon_pg_warn_min_per_osd": "30",
...
    "osd_max_pg_per_osd_hard_ratio": "3.000000",

Given that the last parameter, osd_max_pg_per_osd_hard_ratio, is already 3.0, I won't increase it. However, the first parameter, mon_max_pg_per_osd, is 250. So, to allow setting the target PG count before adding the new OSDs, I would increase that to 300-400.
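As a rough illustration of why the limit bites (assuming a single replicated pool with size = 3; your layout may differ):

Code:
PGs per OSD ~= pg_num * size / num_osds
1024 * 3 / 15 ~= 205   (at or above the 200 default on earlier Luminous releases)
1024 * 3 / 25 ~= 123   (comfortably under once the 10 new OSDs are in)

which is likely why 1024 was rejected on 15 OSDs but should be fine once the cluster has 25.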

In ceph.conf, [global] section:

Code:
mon max pg per osd = 300

You will need to restart the manager daemons to pick up the change.
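On Proxmox there is usually one systemd unit per node, named after the node (using the short hostname as the mgr id is an assumption here -- check the actual ids with 'ceph -s'). Restart them one at a time:

Code:
# on each node that runs a ceph-mgr, one node at a time:
systemctl restart ceph-mgr@$(hostname -s).service
# wait until an active mgr shows up again before doing the next node
ceph -s | grep mgr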
 
@RokaKen - will you guide me on how to do this step by step? Like which manager daemon to restart, which one to do first, etc.
 
