Hi,
we are about to install a three-node Ceph cluster with 5 disks per node, i.e. 15 OSDs (size: 3, min_size: 2).
The majority of the space will be used by an RBD pool; a small part is going to be exported via CephFS.
Now it is up to me to decide on the pg_num of my pools.
The Ceph documentation says (under "commonly used values"):
"Between 10 and 50 OSDs set pg_num to 1024"
(from https://docs.ceph.com/docs/mimic/rados/operations/placement-groups/)
If I use PGCalc, it tells me to set 512 for the rbd pool and 32 for the cephfs pool, given a 95%/5% data distribution and a target of 100 PGs per OSD.
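For what it's worth, I can reproduce PGCalc's numbers by hand. If I understand the formula from the PGCalc page correctly, it is roughly (target PGs per OSD x OSD count x data share) / size, rounded up to the next power of two; a quick Python sketch with my values:

    # Rough sketch of the PGCalc formula as I understand it
    # (not the exact tool logic): (target PGs per OSD * OSD count
    # * data share) / replica size, rounded up to a power of two.
    def pg_num(osd_count, data_share, size=3, target_pgs_per_osd=100):
        raw = target_pgs_per_osd * osd_count * data_share / size
        power = 1
        while power < raw:
            power *= 2  # smallest power of two at or above raw
        return power

    print(pg_num(15, 0.95))  # rbd pool:    475 -> 512
    print(pg_num(15, 0.05))  # cephfs pool:  25 -> 32

So the 512 seems to come straight out of that formula, while the documentation's 1024 is just a coarse rule of thumb for the whole cluster.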
Now I am a bit lost: the recommendations range from 512 to 1024, which is quite a wide range, and I do not have the experience to decide which value to use. As far as I understand, a higher pg_num causes a higher load on my systems, and I can increase pg_num later, but not decrease it.
My intuition tells me to set it to 512 for the rbd pool for the moment.
Regarding the cephfs pool: 32 seems to be a rather low number. Should I increase it, or will this work?
Dear ceph experts: what would you do if you were in my place?
Thanks in advance,
Markus