Confusing CEPH PG warning

beriapl

Active Member
Jan 22, 2019
Hi,

I have a setup with 3 physical hosts, each of them with 3 disks as OSDs in a CEPH cluster.

Using the calculation:

Total PGs = (Total_number_of_OSD * 100) / max_replication_count

Where my max_replication_count = 3

I've set 128 PGs for my pool POOL_CEPH.
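
For reference, with my 9 OSDs that formula works out to:

Code:
Total PGs = (9 * 100) / 3 = 300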

I've two pools:

Code:
rados lspools
device_health_metrics
POOL_CEPH

The max PG per OSD is set to 250:


Code:
ceph --admin-daemon /var/run/ceph/ceph-mon.pve11.asok config get  mon_max_pg_per_osd
{
    "mon_max_pg_per_osd": "250"
}
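
I believe the same value can also be queried cluster-wide (assuming a Nautilus-or-newer release with the centralized config database):

Code:
# should return the same 250 as the admin socket query above
ceph config get mon mon_max_pg_per_osd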

The pool device_health_metrics, which was created automatically (during CEPH configuration), has 1 PG, while the pool I created, POOL_CEPH, has 128 PGs.
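
To double-check those numbers, the per-pool value can be read with:

Code:
# prints pg_num for each pool
ceph osd pool get device_health_metrics pg_num
ceph osd pool get POOL_CEPH pg_num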

I'm getting the warning "1 pools have too many placement groups":


Code:
ceph status
  cluster:
    id:     bf79845c-f78b-4b28-8bf9-85fb8d320a38
    health: HEALTH_WARN
            1 pools have too many placement groups

  services:
    mon: 3 daemons, quorum pve11,pve12,pve13 (age 12d)
    mgr: pve12(active, since 12d), standbys: pve11, pve13
    osd: 9 osds: 9 up (since 12d), 9 in (since 12d)

  data:
    pools:   2 pools, 129 pgs
    objects: 101.33k objects, 394 GiB
    usage:   1.2 TiB used, 6.7 TiB / 7.9 TiB avail
    pgs:     129 active+clean
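
For completeness, a bit more detail on the warning should be available via:

Code:
ceph health detail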


What am I missing to get rid of that warning? It doesn't appear when I set 64 PGs for my pool.
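
My assumption is that this warning comes from the pg_autoscaler module (running in warn mode), so its suggested pg_num per pool could presumably be checked with:

Code:
ceph osd pool autoscale-status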
 
