ceph autoscale pg

  1. Jackobli

    PVE-Ceph, Adding multiple disks to an existing pool

    Hi there, we have a six-server cluster with existing Ceph pools. Now we need to add more disks to one pool, and I am unsure which scenario needs more time and/or causes more «turbulence». The pool consists of 6 x 2 SAS SSDs (3.2 TB and 6.4 TB). We would add another 6 x 2 SAS SSDs (6.4 TB)... (a sketch for adding the new OSDs follows this list)
  2. G

    Ceph size looks like it is shrinking as it fills up

    Hello, today we noticed that our Ceph pool looks like it is shrinking as it fills up. Is this normal, a visual bug, or do we need to change something? It started with a size of 5 TB; after putting 1.6 TB of data on it, it looks like it has been reduced to 3.6 TB. root@pxcl-3:~# ceph status cluster: id: health... (see the ceph df note after this list)
  3. D

    Ceph OSD Full

    Hello, I am a beginner with Proxmox. My Ceph storage is full. I have 3 nodes with one OSD each, and the smallest (80 GB) is full (more than 90%). Each OSD is created from an LVM volume, and they are HDDs. I know that as long as I have a single OSD that is much fuller than the others, it will be the... (see the rebalancing sketch after this list)
  4. C

    Ceph new rule for HDD storage.

    Hi guys, during some free time I had a chance to think about how to extend and add new resources (storage) to our cloud. At the moment I have storage based on Ceph - OSDs of the SSD type only. I was reading the Ceph docs and I can say it is possible; I even have a plan. The problem is that I have no idea whether the actions I... (see the CRUSH-rule sketch after this list)
  5. B

    Number of optimal PGs higher than actual PGs

    Hello community, I have the following Ceph question about PGs and OSD capacity: as you can see, the optimal number of PGs for my main pool (Ceph-SSD-Pool-0) is higher than the actual PG count of 193. Autoscale does not seem to be working then, as far as I can see. There are no target settings set yet... (see the autoscaler sketch after this list)
  6. W

    Proxmox/Ceph: Adding server with much larger drive pool.

    Good day all, I have a 10-server Proxmox/Ceph cluster with 8 x 1.5 TB drives. I have 3 new servers with 24 x 3.5 TB drives. Is it safe/wise to add the new servers into the cluster, or should they be placed in their own pool? If I add those drives I imagine I would have to modify the weight so... (the rebalancing sketch after this list is relevant here too)
  7. A

    pg_autoscale_mode after Luminous to Nautilus?

    Should the Ceph pg_autoscale_mode be turned on after upgrading from Luminous to Nautilus? There is nothing on this topic here: https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus, but the Nautilus to Octopus upgrade guide says: The PG autoscaler feature introduced in Nautilus is enabled... (see the autoscaler sketch after this list)
  8. E

    Ceph went down after reinstalling 1 OSD

    Ceph cluster of 4 nodes, 24 OSDs (mixed SSD and HDD), Ceph Nautilus 14.2.1 (via Proxmox 6, 7 nodes). PG autoscale is ON, 5 pools, 1 big pool with all the VMs at 512 PGs (all SSD). This PG count did not change when I turned autoscale on for the SSD pool; only the smaller HDD and test pools changed. All OSDs installed in...
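
A note on thread 1 (adding more disks to an existing pool): new disks join the cluster as additional OSDs, and the existing pool starts using them as soon as they appear in the CRUSH tree. The following is a minimal sketch, assuming a Proxmox-managed Ceph cluster; /dev/sdX is a placeholder device path and the backfill tuning is optional.

    # On each node, create an OSD on the new disk (device path is a placeholder)
    pveceph osd create /dev/sdX

    # Optionally throttle backfill so the rebalance is gentler on client I/O
    ceph config set osd osd_max_backfills 1

    # Watch the data move onto the new OSDs
    ceph -s
    ceph osd df tree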
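
On thread 2, the number that appears to shrink is usually the MAX AVAIL column of ceph df: it is an estimate of how much more data the pool can accept, derived from the free space of the fullest OSDs divided by the replica count, so it is expected to drop as data is written. A quick check, with the pool name as a placeholder:

    # Cluster-wide raw usage plus per-pool STORED / USED / MAX AVAIL
    ceph df detail

    # Replica count of the pool (pool name is a placeholder)
    ceph osd pool get my-pool size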
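
Threads 3 and 6 both concern uneven OSD utilisation: one small or oversized OSD ends up dominating placement. A hedged sketch of the usual inspection and rebalancing commands; the OSD id and weight below are placeholders, not values from the threads:

    # Per-OSD size, CRUSH weight, reweight, utilisation and variance
    ceph osd df tree

    # Let Ceph lower the reweight of the most over-full OSDs automatically
    ceph osd reweight-by-utilization

    # Or set a single OSD's CRUSH weight by hand (weight is conventionally its size in TiB)
    ceph osd crush reweight osd.2 1.5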
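
For the HDD rule in thread 4, the usual approach is a device-class CRUSH rule. A sketch assuming the default root, a host failure domain, and a hypothetical pool named hdd-pool:

    # List the device classes Ceph has detected
    ceph osd crush class ls

    # Create a replicated rule that only places data on OSDs with the hdd class
    ceph osd crush rule create-replicated replicated_hdd default host hdd

    # Assign a pool to the new rule (pool name is a placeholder)
    ceph osd pool set hdd-pool crush_rule replicated_hdd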
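
Finally, for the autoscaler questions in threads 5 and 7: the pg_autoscaler is a manager module introduced in Nautilus and enabled by default from Octopus onwards, and it can be checked and switched on per pool. A minimal sketch, with vm-pool and the ratio as placeholder values:

    # Make sure the autoscaler manager module is active (default on Octopus and later)
    ceph mgr module enable pg_autoscaler

    # Show current PG count, suggested PG count and autoscale mode per pool
    ceph osd pool autoscale-status

    # Turn autoscaling on for a single pool (pool name is a placeholder)
    ceph osd pool set vm-pool pg_autoscale_mode on

    # Optionally tell the autoscaler how much of the cluster the pool will eventually use,
    # so the suggested PG count is based on the target rather than current usage
    ceph osd pool set vm-pool target_size_ratio 0.8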
