Ceph 19.2.0 does not distribute PGs equally across OSDs

michel.seicon

Hello everyone.

This is a new installation, with all nodes having the same hardware configuration and only a simple Debian VM running, and even so the PGs are not distributed evenly across the OSDs.
Is this expected?
 

Attachments

  • ceph.png
I resolved the balancing by running the following commands; I just didn't understand why Ceph spread the PGs unevenly right at installation:
# Dump the current OSD map to a file
ceph osd getmap -o om
# Calculate upmap entries for the "ceph" pool (at most 4 changes, max deviation of 1 PG)
osdmaptool om --upmap out.txt --upmap-pool ceph --upmap-max 4 --upmap-deviation 1 --upmap-active
# out.txt contains the generated "ceph osd pg-upmap-items ..." commands; run them
source out.txt


Just be careful and run the commands with the VMs turned off.
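To check the result, the per-OSD PG counts and the balancer state can be inspected with the standard CLI (nothing here is specific to this cluster):

ceph osd df tree      # shows PGs and usage per OSD
ceph balancer status  # shows whether the balancer module is active and in which mode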
 
This is not unusual in such a small cluster with such a low number of PGs.
The CRUSH algorithm just does not have enough pieces to distribute the data evenly.

You should increase the number of PGs so that you have at least 100 per OSD.
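For example, assuming the pool is named "ceph" as in the osdmaptool command above, the PG count could be raised manually (a sketch; letting the autoscaler handle it, as described below, is usually preferable):

ceph osd pool set ceph pg_num 512
ceph osd pool set ceph pgp_num 512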
 
Hello,

For a Ceph cluster with a single pool and 12 OSDs, the Ceph PG calculator [1] would recommend 512 PGs for the pool. As spirit said, the best approach is to set a target ratio on the pool so that the autoscaler can choose the optimal number of PGs automatically.

[1] https://docs.ceph.com/en/latest/rados/operations/pgcalc/
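A minimal sketch of that, assuming a single pool named "ceph" that should use essentially all of the cluster's capacity:

ceph osd pool set ceph target_size_ratio 1
ceph osd pool set ceph pg_autoscale_mode on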
 
After changing the target ratio to 1 and setting target_max_misplaced_ratio to 0.01, the PG count went up to 512, but the data still does not get balanced.
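For reference, the balancer state can be checked and switched to upmap mode with the standard commands (a general sketch, not specific to this cluster):

ceph balancer status
ceph balancer mode upmap
ceph balancer on

Note that target_max_misplaced_ratio caps the fraction of PGs that may be misplaced at any one time (the default is 0.05), so with 0.01 the balancer moves data in very small steps and full balancing can take a while.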
 

Attachments

  • ceph.png