Ceph, Pools, PGs, and OSDs - Should I Change These?

Seed

Renowned Member
Oct 18, 2019
Hello,

I am trying to understand how to optimize my Ceph pools and how to assign PGs to pools correctly. Here is what I have:

12 OSDs in the HDD pool, across 3 hosts
9 OSDs in the NVMe pool, across 3 hosts
3 OSDs in the SSD pool, across 3 hosts

Each pool is replicated 3/2 (size 3, min_size 2) with the default 128 PGs.
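
For reference, this is roughly how I'm double-checking the current per-pool settings (a small sketch; it assumes the ceph CLI is available on an admin node):

```python
import subprocess

# Print the current pool definitions; each line shows the pool's replicated
# size, min_size, pg_num, and so on.
print(subprocess.run(["ceph", "osd", "pool", "ls", "detail"],
                     capture_output=True, text=True, check=True).stdout)
```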

If I use this calculator: https://ceph.io/pgcalc/

For HDD:
Size 3, 12 OSDs, %Data 100, Target PGs per OSD 100, and it says I should be at 512.

For NVMe:
Size 3, 9 OSDs, %Data 100, Target PGs per OSD 100, and it says I should be at 256.

For SSD:
Size 3, 3 OSDs, %Data 100, Target PGs per OSD 100, and it says I should be at 128.

[Screenshot of the pgcalc results attached: Screen Shot 2019-11-11 at 2.01.56 PM.png]
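
Just to sanity-check the calculator, here is a quick Python version of the rounding rule as I understand it (the function name is mine, not anything from Ceph):

```python
import math

# raw PGs = OSDs * target-PGs-per-OSD * %data / pool size, then round to the
# nearest power of two, stepping up if that power is more than 25% below the
# raw value (this is the pgcalc rule as I understand it).
def suggested_pg_count(osds, size, target_per_osd=100, data_fraction=1.0):
    raw = osds * target_per_osd * data_fraction / size
    power = 2 ** round(math.log2(raw))
    if power < raw * 0.75:   # more than 25% below the raw value
        power *= 2
    return int(power)

for name, osds in [("HDD", 12), ("NVMe", 9), ("SSD", 3)]:
    print(name, suggested_pg_count(osds, size=3))
# HDD 512, NVMe 256, SSD 128 -- matches the calculator
```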

So the only one that is already right is the SSD pool. Should I change the other two? I thought Nautilus sort of self-tunes this with the PG autoscaler, but I'm not sure what to do here.
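
Before changing anything by hand, it's probably worth checking whether the autoscaler is actually active; something like this (assumes the ceph CLI is on an admin node, and the pool name at the end is just an example):

```python
import subprocess

# In Nautilus the pg_autoscaler mgr module exists but, as far as I can tell,
# has to be enabled first, which would explain the pools sitting at 128 PGs.
def ceph(*args):
    return subprocess.run(["ceph", *args],
                          capture_output=True, text=True, check=True).stdout

print(ceph("mgr", "module", "ls"))               # is pg_autoscaler enabled?
print(ceph("osd", "pool", "autoscale-status"))   # per-pool targets, if it is

# To let it manage a pool (example pool name):
# ceph("mgr", "module", "enable", "pg_autoscaler")
# ceph("osd", "pool", "set", "hdd_pool", "pg_autoscale_mode", "on")
```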
 
Ehhh screw it, I'm going in.

HDD to 512
NVMe to 256
SSD remains 128

I'll raise pg_num in increments of 128 until I hit the desired counts and see what happens.
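
A rough Python sketch of that plan (the pool names are placeholders, and the sleep is a crude stand-in for watching `ceph -s` until backfill settles between steps):

```python
import json
import subprocess
import time

def ceph(*args):
    return subprocess.run(["ceph", *args],
                          capture_output=True, text=True, check=True).stdout

def raise_pg_num(pool, target, step=128):
    # Read the current pg_num for the pool.
    current = json.loads(ceph("osd", "pool", "get", pool, "pg_num",
                              "--format", "json"))["pg_num"]
    while current < target:
        current = min(current + step, target)
        ceph("osd", "pool", "set", pool, "pg_num", str(current))
        # Nautilus bumps pgp_num to follow pg_num on its own; on older
        # releases you would also set pgp_num here. Crude pause between steps.
        time.sleep(600)

raise_pg_num("hdd_pool", 512)    # placeholder pool name
raise_pg_num("nvme_pool", 256)   # placeholder pool name
```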


o_O