root@S071:~# ceph osd pool set ceph-vm pg_autoscale_mode off
set pool 4 pg_autoscale_mode to off
root@S071:~#
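For what it's worth, a quick way to double-check that the change actually took effect (assuming the same pool name, ceph-vm) is:
ceph osd pool get ceph-vm pg_autoscale_mode
which should now report off.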
The HEALTH_WARN message remains the same.
root@S071:~# ceph osd pool autoscale-status
POOL     SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
ceph-vm  5172G               3.0   8379G         1.8521  0.9000        1.0   256
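If I am reading the output right, the RATIO column is roughly SIZE x RATE / RAW CAPACITY, i.e. 5172G x 3.0 / 8379G is about 1.85, so the expected replicated usage is well above the raw capacity and far over the 0.9000 target ratio, which is what the overcommit warning is about.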
I have adjusted the cluster:
ceph config set global mon_target_pg_per_osd 100
and
ceph osd pool set mypool target_size_ratio .9
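One thing worth noting, in case that line was copied straight from the docs: target_size_ratio is set per pool, and the pool shown in autoscale-status above is ceph-vm, not mypool, so the corresponding command would presumably be:
ceph osd pool set ceph-vm target_size_ratio 0.9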
But the messages still appear.
What should I do?
Should I disable the autoscaler?
Hi,
The errors no longer appear, but the health warning messages are still displayed:
1 subtrees have overcommitted pool target_size_bytes
1 subtrees have overcommitted pool target_size_ratio
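In case it helps, the full detail behind these two warnings (which pools and subtrees are involved, and by how much they overcommit) can be shown with:
ceph health detail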
Do I need to do something else?
Is the configuration correct?
Thank you,
I have activated the autoscaler:
1) ceph osd pool set ceph-vm pg_autoscale_mode on
and set pg_num and pgp_num to 256 (the cluster will not grow in capacity):
2) ceph osd pool set ceph-vm pg_num 256
3) ceph osd pool set ceph-vm pgp_num 256
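To confirm the values were actually applied (again assuming the pool name ceph-vm), something like:
ceph osd pool get ceph-vm pg_num
ceph osd pool get ceph-vm pgp_num
should both come back with 256.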
But the Ceph HEALTH_WARN still appears:
1 subtrees have...
Hi alvin,
Thanks for answering so fast.
Initially we activated autoscaling, but then we deactivated it.
Do you recommend activating it?
The cluster consists of 3 hosts:
2 SSDs (system) per host
3 SSDs for Ceph per host
osd_pool_default_min_size = 2
osd_pool_default_size = 3
pg_num 512
1 single...
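As a rough sanity check on pg_num, assuming those numbers: 3 hosts x 3 Ceph SSDs = 9 OSDs; with mon_target_pg_per_osd = 100 and a replica size of 3, the autoscaler aims for roughly 9 x 100 / 3 = 300 PGs for the pool, so 256 (the nearest power of two) is closer to that target than 512.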
Hello,
We have configured a Proxmox VE cluster with Ceph shared storage.
Two alert messages have been appearing in Ceph for the last two days:
1 subtrees have overcommitted pool target_size_bytes
Pools ['ceph-vm'] overcommit available storage by 1.289x due to target_size_bytes 0 on pools []...