Search results

  1. Alert messages in CEPH

    root@S071:~# ceph osd pool set ceph-vm pg_autoscale_mode off
    set pool 4 pg_autoscale_mode to off
    root@S071:~#
    The message remains the same: HEALTH_WARN.
  2. Alert messages in CEPH

    POOL     SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
    ceph-vm  5173G               3.0   8379G         1.8523                1.0   256
  3. Alert messages in CEPH

    Hi Alwin, both messages still appear after executing: ceph osd pool set ceph-vm target_size_ratio 0. Do I have to do something else?
  4. Alert messages in CEPH

    root@S071:~# ceph osd pool autoscale-status
    POOL     SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
    ceph-vm  5172G               3.0   8379G         1.8521  0.9000        1.0   256
  5. Alert messages in CEPH

    After applying the command: ceph osd group set ceph-vm target_size_bytes 0, the warnings continue to appear.
  6. Alert messages in CEPH

    I do not understand what you mean. Could you be more specific, please? Which command or setting do I have to set to 0? Thank you, and excuse my ignorance.
  7. Alert messages in CEPH

    Can someone help me please?
  8. Alert messages in CEPH

    I have adjusted the cluster with: ceph config set global mon_target_pg_per_osd 100 and ceph osd pool set mypool target_size_ratio .9 But the messages still appear. What should I do? Should I disable the autoscaler?
  9. Alert messages in CEPH

    Hi, errors no longer appear, but health warning messages are still displayed: 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have overcommitted pool target_size_ratio. Do I need to do something else? Is the configuration correct? Thank you,
  10. Alert messages in CEPH

    I have activated the autoscaler: 1) ceph osd pool set ceph-vm pg_autoscale_mode on and set pg_num and pgp_num to 256 (the cluster will not grow in capacity): 2) ceph osd pool set ceph-vm pg_num 256 3) ceph osd pool set ceph-vm pgp_num 256 But the Ceph HEALTH_WARN still appears: 1 subtrees have...
  11. Alert messages in CEPH

    So can I go down to a pg_num of 256? And should the autoscaler be active? Sorry, but I'm a rookie at Ceph.
  12. Alert messages in CEPH

    Hi Alwin, thanks for answering so fast. Initially we activated the autoscaling but then deactivated it. Do you recommend activating it? The cluster consists of 3 hosts, with 2 SSDs (system) per host and 3 SSDs for Ceph per host; osd_pool_default_min_size = 2, osd_pool_default_size = 3, pg_num 512, 1 single...
  13. Alert messages in CEPH

    Hello, we have configured a Proxmox VE cluster with Ceph shared storage. Two alert messages have appeared in Ceph for two days: 1 subtrees have overcommitted pool target_size_bytes Pools ['ceph-vm'] overcommit available storage by 1.289x due to target_size_bytes 0 on pools []...
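
A note on the RATIO column quoted in results 2 and 4: assuming it is the pool's stored size times its replication rate divided by the cluster's raw capacity (an interpretation the quoted figures support, not something stated in the thread), the figure can be checked by hand:

    RATIO = SIZE x RATE / RAW CAPACITY
          = 5173G x 3.0 / 8379G
          = approx. 1.852   (matching the 1.8523 shown)

A ratio above 1.0 means the pool's size and target hints together claim more space than the cluster actually has, which is consistent with the "overcommitted" warnings discussed in the thread.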
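
For the two HEALTH_WARN messages that run through the whole thread ("subtrees have overcommitted pool target_size_bytes" / "target_size_ratio"), a minimal command sequence sketched from the commands quoted above would be the following; the pool name ceph-vm is taken from the thread, so substitute your own:

    # Show how the autoscaler sees each pool, including any target hints
    ceph osd pool autoscale-status

    # Clear the hints that overcommit the raw capacity (0 removes the hint)
    ceph osd pool set ceph-vm target_size_ratio 0
    ceph osd pool set ceph-vm target_size_bytes 0

    # Check whether the warning has cleared
    ceph health detail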
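
Results 8, 10 and 11 also touch on the autoscaler mode and PG counts. For reference only (the values shown are the ones quoted in the thread, not recommendations), the relevant commands look like this:

    # Per-pool autoscaler mode: on, off or warn
    ceph osd pool set ceph-vm pg_autoscale_mode on

    # Set the PG count manually; pgp_num should follow pg_num
    ceph osd pool set ceph-vm pg_num 256
    ceph osd pool set ceph-vm pgp_num 256

    # Cluster-wide target used by the autoscaler when sizing pools
    ceph config set global mon_target_pg_per_osd 100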
