Ceph : number of placement groups for 5+ pools on 3 hosts x 1 OSD

galeksandrp

Mar 15, 2024
Hi.

MY CONFIG : 3 hosts with PVE 8.4.1 and Ceph Reef, with a dedicated 10 Gb Ethernet Ceph network.

Each host has a single OSD, which is an 8 TB CMR HDD.

WHAT I DID : Created 5 pools with default settings.

WHAT I NEED TO DO : Create 15 more pools.
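For reference, each new pool would be created with something like this (the pool name is made up; 128 matches the default pg_num my first 5 pools got):

ceph osd pool create month202501 128   # one pool per month, pg_num 128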

PROBLEM : Ceph started screaming "too many PGs per OSD".

WHY PROBLEM SURFACED : As far as I understand, a placement group is a thread which calculates the destination of a Ceph object.

This calculation is done independently for each pool.

That means that 128 PG threads are adequate for a single pool on 3 OSDs.

But with 20 pools I will be having 2560 placement groups per OSD, and Ceph will not be happy.
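If my arithmetic is right (assuming the default pool size of 3, so every PG ends up on all 3 OSDs), that comes from:

20 pools x 128 PGs x 3 replicas / 3 OSDs = 2560 PG copies per OSD

which is far above the mon_max_pg_per_osd limit (250 by default, as far as I can tell), hence the warning.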

QUESTION : Can I suppress this warning?
At any time only a single pool will have writes.
Does that mean that out of 2560 potential PG threads only 128 will be started?
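(I assume the knob to silence it would be something like

ceph config set global mon_max_pg_per_osd 2600   # value made up, picked to just cover my 2560

but I don't know whether raising the limit that far is sane with one HDD OSD per host.)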
 
ADDITIONAL HARDWARE INFO : Each host also has an OSD on a 1 TB enterprise U.2 NVMe drive.

A single VM's 300 GB database disk resides on those SSD-class OSDs. This VM runs 24x7.

ADDITIONAL SOFTWARE INFO : The database is configured like a ring buffer, so it only stores about 1.5 months of data. This is a hard limitation.

Each month a script copies the database files to a destination backed by a "month" pool.
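(The script is essentially just a plain copy; the paths here are made up:

rsync -a /var/lib/db/ /mnt/month-202412/   # hypothetical source and month-pool-backed destination
)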

TASK WHICH I TRY TO SOLVE : From time to time I need to provide fast random read access to the database from X months ago.

So if I have 20 pools this is simple - just change the class of the pool from HDD to HDD+SSD and get a rock-solid rebalance.
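If I understand the mechanism, the "class change" is really just switching the pool's CRUSH rule; the rule and pool names below are made up:

ceph osd crush rule create-replicated replicated_nvme default host nvme   # rule pinned to the nvme device class
ceph osd pool set month202412 crush_rule replicated_nvme                  # pool rebalances onto the NVMe OSDs
ceph osd pool set month202412 crush_rule replicated_hdd                   # later: switch back, rebalances to HDD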
 
Okay, this will look like (rough commands sketched after the steps) :

STEP 1 : Migrate 202412.raw from the HDD POOL to the NVME POOL, don't delete the source.
STEP 2 : Do the database work.
STEP 3 : Delete the source from the HDD POOL.
STEP 4 : Migrate 202412.raw back from the NVME POOL to the HDD POOL, deleting the source.
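Assuming 202412.raw is an RBD image, a minimal sketch (pool and image names made up):

rbd cp month-hdd/202412 month-nvme/202412   # STEP 1: full copy, source stays in place
                                            # STEP 2: run the database against the NVMe copy
rbd rm month-hdd/202412                     # STEP 3: drop the HDD copy
rbd cp month-nvme/202412 month-hdd/202412   # STEP 4: copy back...
rbd rm month-nvme/202412                    #         ...and delete the NVMe copy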

Sounds plausible, I will try it next week.
 