Ceph, Pools, PGs, and OSDs - Should I Change These?



I am trying to understand how to optimize my Ceph pools and how to assign PGs to pools correctly. I have the following:

12 OSDs in the HDD pool, across 3 hosts
9 OSDs in the NVMe pool, across 3 hosts
3 OSDs in the SSD pool, across 3 hosts

Each pool is replicated 3/2 (size 3, min_size 2) with the default of 128 PGs.
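(For anyone checking their own setup: you can read the current values back with something like the following; the pool name here is a placeholder, use whatever yours are actually called.)

  # One-line summary of every pool: size, min_size, pg_num, and so on
  ceph osd pool ls detail

  # Or query per pool, e.g. for a pool named hdd-pool
  ceph osd pool get hdd-pool pg_num
  ceph osd pool get hdd-pool size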

If I use this calculator: https://ceph.io/pgcalc/

For HDD:
Size of 3, OSDs 12, %Data 100, Target PGs per OSD 100: it says I should be at 512.

For NVMe:
Size of 3, OSDs 9, %Data 100, Target PGs per OSD 100: it says I should be at 256.

For SSD:
Size of 3, OSDs 3, %Data 100, Target PGs per OSD 100: it says I should be at 128.
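If I'm reading the calculator right, it's basically (OSDs x Target PGs per OSD) / Size, rounded to the nearest power of two (and bumped up a power when the nearest one falls more than ~25% short of the raw value), which lines up with all three results:

  HDD:  (12 x 100) / 3 = 400  -> 512
  NVMe: ( 9 x 100) / 3 = 300  -> 256
  SSD:  ( 3 x 100) / 3 = 100  -> 128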

[Screenshot: pgcalc results]

So the only one that is right is the SSD one. Should I change the other two? I thought Nautilus sorta self-tunes, but I'm not so sure what it actually does here.
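As far as I can tell, Nautilus only self-tunes if the pg_autoscaler mgr module is enabled, and pools default to "warn" rather than "on" there, so out of the box it reports but doesn't resize anything by itself. Rough sketch (pool name is again a placeholder):

  # Enable the autoscaler module (ships with Nautilus but may be off)
  ceph mgr module enable pg_autoscaler

  # Show what the autoscaler would pick for each pool
  ceph osd pool autoscale-status

  # Opt a single pool into automatic resizing
  ceph osd pool set hdd-pool pg_autoscale_mode on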
Ehhh screw it, I'm going in.

HDD to 512
NVMe to 256
SSD remains 128

I'll increase pg_num in increments of 128 until I hit the desired counts and see what happens.
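In case it helps anyone following along, here's roughly what that plan looks like on the CLI, assuming pool names hdd-pool and nvme-pool (placeholders, substitute your own). On Nautilus, setting pg_num should be enough since the cluster ramps pgp_num itself; on older releases you bump pgp_num to match:

  # HDD: 128 -> 512 in steps of 128, letting backfill settle between bumps
  ceph osd pool set hdd-pool pg_num 256
  ceph osd pool set hdd-pool pgp_num 256
  # wait for "ceph -s" to return to HEALTH_OK, then repeat for 384 and 512

  # NVMe: 128 -> 256 in one step
  ceph osd pool set nvme-pool pg_num 256
  ceph osd pool set nvme-pool pgp_num 256

  # SSD stays at 128, nothing to do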


