Ceph - 1 subtrees have committed pool target_size_bytes

Apr 29, 2022
Hello everyone,
We have gotten this failure on our 3-node cluster and I really don't know how to solve it.
I would really appreciate the help.
Thanks in advance

[Screenshots attached]
 

Check your total of the target_size_bytes; it could be bigger than the "Raw Capacity" available. If not, then the BIAS can play a role as well, which acts as a multiplier to increase the number of PGs. In short, the autoscaler assumes that in the future you may be using more space than the raw capacity.

Have a look at the docs for the autoscaler: https://docs.ceph.com/en/latest/rad...nt-groups/#viewing-pg-scaling-recommendations

I recommend using the "Target Ratio" instead of the "Target Size", which will use a floating ratio relative to the pools on the same crush device class.
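
For reference, these standard Ceph commands let you compare the committed target sizes against the raw capacity (the pool name is a placeholder):
Code:
# raw capacity and per-pool usage
ceph df
# per-pool TARGET SIZE, RATE, BIAS and the autoscaler's PG recommendation
ceph osd pool autoscale-status
# show the target size committed for a single pool
ceph osd pool get <pool-name> target_size_bytes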
 
Thanks for the quick reply.
The target_size_bytes isn't bigger than the "Raw Capacity".

We have 3 Nodes
15 OSDs in total (5 each) (more will be added in the future)
25 Storage Pools (more will be added)

What percentage would you suggest for the Target Ratio?
 
The RATIO is calculated as (SIZE * RATE) / RAW CAPACITY; the EFFECTIVE RATIO is then the one that is applied towards the cluster size. You can set the autoscaler to warn mode or disable it for the pool, and then fiddle with the ratios until you are satisfied.
Code:
ceph osd pool set <pool-name> pg_autoscale_mode <on|off|warn>
# or disable the autoscaler globally for all pools
ceph osd pool set noautoscale
The autoscaler provides an automated way to increase or decrease the PGs of a pool.
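
As a sketch of that workflow (the pool name and the 0.2 ratio are just placeholder values):
Code:
# put the pool's autoscaler into warn mode so it only reports instead of acting
ceph osd pool set <pool-name> pg_autoscale_mode warn
# drop a fixed target size and use a ratio relative to the same crush device class
ceph osd pool set <pool-name> target_size_bytes 0
ceph osd pool set <pool-name> target_size_ratio 0.2
# check the resulting RATIO / EFFECTIVE RATIO columns
ceph osd pool autoscale-status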
 
