Ceph Nodes & PGs

vispa
Having used Proxmox for some time, I'm rebuilding one of my clusters and I would like to make sure I get the PGs correct, as the topic still confuses me.

Question 1

In my situation I have five nodes. Each node has 5-6 500GB OSDs installed.

I used the PG calculator, which gives me a PG value of 1024; however, I'm conscious that I have 5-6 OSDs per node.

Should I take this into account?
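
For reference, this is the arithmetic I assume the calculator is doing, as a rough sketch (30 OSDs assumes every node ends up fully populated with six):

# rule of thumb: ~100 PGs per OSD, divided by the replica count, rounded to a power of two
osds=30              # 5 nodes x 6 OSDs (assumption: all nodes fully populated)
target_per_osd=100   # commonly quoted target
size=3               # replica count
echo $(( osds * target_per_osd / size ))   # prints 1000 -> next power of two is 1024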

Question 2

I created a pool with the default 3/2 replication and a PG count of 1024.
I then created five MDSs and a CephFS with 128 PGs.

Is this correct? Ceph complained when I tried to increase it to 1024 PGs.
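
For reference, expressed as plain Ceph commands the setup would look roughly like this (only a sketch; the 32 PGs for the metadata pool are my assumption, the other names and numbers are as above):

ceph osd pool create ceph3 1024 1024 replicated    # VM/CT pool; 3/2 comes from the defaults
ceph osd pool set ceph3 size 3
ceph osd pool set ceph3 min_size 2
ceph osd pool create cephfs_data 128 128 replicated
ceph osd pool create cephfs_metadata 32 32 replicated   # 32 PGs assumed here
ceph fs new cephfs cephfs_metadata cephfs_data          # the MDS daemons then serve this filesystem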
 
All pools share the OSDs, so they all need to be added to the calculation; that's why there is the %Data field.
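
In other words, the calculator sizes each pool from its expected share of the data. A rough sketch of the per-pool formula it applies (the numbers below are only placeholders):

# PGs for one pool ~= (target PGs per OSD * number of OSDs * %Data) / replica size
osds=30; target_per_osd=100; size=3
pct=80    # the %Data you expect this pool to hold
echo $(( osds * target_per_osd * pct / (size * 100) ))   # 800 -> round to a nearby power of two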
 
But does having six OSDs per node affect the PGs, since one node going down takes six OSDs with it?

Is my calculation correct?

[Attachment: calc.png (PG calculator screenshot)]
 
But does having six OSDs per node affect the PGs, since one node going down takes six OSDs with it?
A PG is mapped to three OSDs (with size=3), so while the host is down the PG is degraded, but not gone.
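
You can check that mapping yourself; the PG ID below is just an example:

ceph pg map 2.1a     # prints the up/acting OSD set for that PG
ceph osd tree        # shows on which host each of those OSDs lives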

Is my calculation correct?
Not quite. CephFS uses two pools, one for metadata and the other for the stored data. Do you really intend to store 50% of all the data in the cluster in each of those pools? The %Data field balances the number of PGs against the share of objects expected in that pool.

As an example, if you use CephFS for temporary backups and template/ISO storage, and ceph3 for VM/CT disks, then the percentages could look like this: 5% for cephfs_metadata, 15% for cephfs_data, and 80% for ceph3.
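
Running those percentages through the same rule of thumb (taking 30 OSDs and size 3 as an example) gives roughly:

cephfs_metadata:  100 * 30 * 0.05 / 3 =  50  -> ~64 PGs
cephfs_data:      100 * 30 * 0.15 / 3 = 150  -> ~128 PGs
ceph3:            100 * 30 * 0.80 / 3 = 800  -> ~1024 PGs
(each value rounded to a nearby power of two)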

Don't worry if you have too few PGs; in Ceph Luminous you can always increase them (but never decrease). With Ceph Nautilus on the upcoming Proxmox VE 6, you can change them in both directions.
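
For reference, the same can be done from the CLI (the pool name is just an example):

ceph osd pool set ceph3 pg_num 1024
ceph osd pool set ceph3 pgp_num 1024   # needed on Luminous; Nautilus adjusts pgp_num on its own
ceph -s                                # keep an eye on the rebalance before changing anything else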
 
Greetings. I have a Ceph cluster deployed on 3 nodes. Each node has 4 OSDs, so there are 12 OSDs in total.
Initially, 32 PGs were created, and the PG autoscaler is off. Recently Ceph started warning that the pool has fewer PGs than recommended, so I increased the number of PGs through the interface to the recommended 64.

[Attachment: screenshot 2021-12-15 214747.png]

However, I was interested in calculating the number of PGs myself, and it is not entirely clear how many should be set. Using a calculator and the recommendations, I calculated with the formula 12 OSDs * 100 / 3 replicas = 400, rounded up to 512. However, I did not dare to set such a value, as the recommendations differ: for example, the calculator at https://old.ceph.com/pgcalc/ gives 256. For now I have set 128. So what is the correct value?

Also, the recommendations say that the number of PGPs should be equal to the number of PGs. However, after increasing the number of PGs from 64 to 128 through the interface, the number of PGPs only increased to 77. I saw this using the commands:

ceph osd pool get ceph pgp_num
ceph osd pool get ceph pg_num
Is it correct to manually increase PGP to 128 with the command:

ceph osd pool set ceph pgp_num 128
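
For my own sanity, here is the raw rule-of-thumb arithmetic again and how I keep an eye on pgp_num (pool name "ceph" as above; take this as a sketch, not a recommendation):

echo $(( 12 * 100 / 3 ))         # 400 raw; 256 and 512 are the neighbouring powers of two
ceph osd pool get ceph pg_num
ceph osd pool get ceph pgp_num   # climbs in steps while the data is being moved
ceph -s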

I also have one more pool, device_health_metrics. What is its purpose?
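
(From what I can tell, device_health_metrics is created automatically by the manager's devicehealth module and holds the SMART/health data Ceph collects from the disks; the device ID below is a placeholder.)

ceph device ls                           # lists the known devices and which daemons use them
ceph device get-health-metrics <devid>   # dumps the stored health samples for one device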

UPD: The number of PGPs changed automatically to 128 after rebalancing.
 
