autoscale

  1. Ceph PG quantity - calculator vs autoscaler vs docs

    I'm a bit confused about the autoscaler and PGs. This cluster has Ceph 19.2.1, 18 OSDs, default 3/2 replicas and default target 100 PGs per OSD. BULK is false. Capacity is just under 18000G. A while back we set a target size of 1500G and we've been gradually approaching that, currently...
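The confusion between the calculator and the autoscaler usually comes down to two different formulas. The numbers in the post can be plugged into a simplified sketch of both: the classic pgcalc-style estimate (a flat per-OSD budget) versus a capacity-scaled estimate like the autoscaler's (which weights the same budget by the pool's expected share of raw capacity). The function names and the exact rounding behavior here are assumptions for illustration, not the mgr module's actual code, and details vary between Ceph releases:

```python
import math

def pg_count_classic(num_osds, target_pg_per_osd, replica_size):
    # Classic pgcalc-style estimate: (OSDs * target PGs per OSD) / replica size,
    # rounded up to the next power of two.
    raw = num_osds * target_pg_per_osd / replica_size
    return 2 ** math.ceil(math.log2(raw))

def pg_count_autoscaler(num_osds, target_pg_per_osd, replica_size,
                        target_bytes, total_raw_bytes):
    # Simplified sketch of a capacity-scaled estimate: weight the PG budget by
    # the pool's expected share of raw capacity (target size * replicas over
    # total raw), then round to the NEAREST power of two. This is an
    # approximation of the autoscaler's logic, not its exact implementation.
    ratio = target_bytes * replica_size / total_raw_bytes
    raw = ratio * num_osds * target_pg_per_osd / replica_size
    return 2 ** round(math.log2(raw))

# Numbers from the post: 18 OSDs, 3x replication, target 100 PGs per OSD,
# target size 1500G out of roughly 18000G raw capacity.
print(pg_count_classic(18, 100, 3))                 # 1024
print(pg_count_autoscaler(18, 100, 3, 1500, 18000)) # 128
```

The gap between the two results (1024 vs 128) is one plausible explanation for why a static calculator and the autoscaler disagree: the calculator assumes the pool will consume the whole cluster, while the autoscaler sizes the pool to its declared target.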
  2. Using pg_autoscale in Ceph Nautilus on Proxmox

    Greetings everyone! I have installed Ceph Nautilus on Proxmox using the pveceph repositories. I then installed the Nautilus Dashboard as well, as I intend to use the Ceph cluster both for Proxmox and for other Rados Gateways providing general storage inside Ceph for a full...