Ceph PGs reported too high when they are exactly what is requested

Binary Bandit

Well-Known Member
Dec 13, 2018
60
9
48
53
Hi All,

I just patched our Proxmox 7 cluster to the latest version. Since the update, "ceph health detail" reports:

HEALTH_WARN 2 pools have too many placement groups
[WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups
Pool device_health_metrics has 1 placement groups, should have 1
Pool pool1 has 512 placement groups, should have 512

Has anyone seen this? Thoughts on how to resolve it?
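
For reference, the numbers behind the warning can be cross-checked against the autoscaler's own view; a minimal sketch (the pool names are simply the ones from the health output above):

# Actual vs. autoscaler-suggested PG counts for every pool
ceph osd pool autoscale-status

# PG settings currently applied to the affected pool
ceph osd pool get pool1 pg_num
ceph osd pool get pool1 pgp_num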

best,

James
 
Good morning from PST, Shanreich,

Thank you for the response. We're running Ceph 16 / Pacific. I posted all of our versions below.

It looks like David, in comment #7 on the bug report you linked (thank you for that), is seeing this issue with the exact version we are running.

I've spent several hours looking through forum posts and logs. That this is just a bug makes sense, since I don't see any other indication of an actual problem. If need be, I'll switch the autoscaler from warn to off.
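
If it comes to that, this is roughly what switching the mode looks like (a sketch only, using the two pool names from the health output):

# Per-pool: stop the autoscaler from warning about these pools
ceph osd pool set device_health_metrics pg_autoscale_mode off
ceph osd pool set pool1 pg_autoscale_mode off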

best,

James



proxmox-ve: 7.4-1 (running kernel: 5.15.143-1-pve)
pve-manager: 7.4-17 (running version: 7.4-17/513c62be)
pve-kernel-5.15: 7.4-11
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.143-1-pve: 5.15.143-1
pve-kernel-5.15.131-2-pve: 5.15.131-3
pve-kernel-5.15.126-1-pve: 5.15.126-1
pve-kernel-5.15.116-1-pve: 5.15.116-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph: 16.2.14-pve1
ceph-fuse: 16.2.14-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.6-1
proxmox-backup-file-restore: 2.4.6-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+2
pve-firewall: 4.3-5
pve-firmware: 3.6-6
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.14-pve1
 
Yes, it should only be a visual issue. We have already released 16.2.15 in the community repositories, although I can't say for sure whether the fix for this is included in that release. Since Ceph 16 is already EOL, it would make sense to think about upgrading to Ceph 17/18 soon.
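
Since the warning is cosmetic, one option in the meantime would be to mute just that health code; a minimal sketch:

# Silence only this health code (an optional TTL such as 4h can be appended)
ceph health mute POOL_TOO_MANY_PGS

# Lift the mute once the fix is in place
ceph health unmute POOL_TOO_MANY_PGS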
 
Hello everyone,

This is just a bit of encouragement for first-time Ceph upgraders on PVE7. About a week ago, I upgraded our 3-node cluster per the official instructions here. It went smoothly with no issues. Just be sure to read everything carefully.

Oh, and the bug described here is, of course, no more.
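
For anyone else doing the same upgrade, a quick sanity check afterwards might look like this (nothing cluster-specific, just the standard status commands):

# Confirm every daemon reports the new release
ceph versions

# Confirm the spurious POOL_TOO_MANY_PGS warning is gone
ceph health detail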

best,

James
 
