Hi,
I have a cluster of 5 PVE7 nodes with Ceph 16.2.5. The hardware configuration of 4 of the 5 nodes is:
- CPU: 2 x EPYC Rome 7402
- RAM: 1 TB ECC
- 2 x SSD 960 GB ZFS Raid 1 for Proxmox
- 4 x Micron 9300 MAX 3.2 TB NVMe for Pool 1 named Pool-NVMe
- 2 x Micron 5300 PRO 3.8 TB SSD for Pool 2 named Pool-SSD
- NICs: 6 x 100Gb Mellanox ConnectX-5
The fifth node has:
- CPU: 1 x EPYC Rome 7302
- RAM: 256 GB
- 2 x SSD 240 GB ZFS Raid 1 for Proxmox
- 2 x Micron 5300 PRO 3.8 TB SSD for Pool 2 named Pool-SSD
- NICs: 6 x 100Gb Mellanox ConnectX-5
The two Ceph pools are:
- Pool-NVMe: composed of NVMe disks (16 x 3.2 TB)
- Pool-SSD: composed of SSD disks (10 x 3.8 TB)
The PG autoscaler ("Autoscale mode") is enabled on the cluster, but I would like to have an optimal number of PGs per pool before migrating the 20 VMs, so the question is: do you think I have to increase the minimum number of PGs for my 2 Ceph pools? If yes, considering that 4 TB will be stored in Pool-NVMe and 3 TB in Pool-SSD, how many PGs would you advise per pool?
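To give an idea of the numbers I had in mind, here is a rough sketch of the common rule of thumb (roughly 100 PGs per OSD, divided by the pool's replica size, rounded to a power of two). The replica size of 3 is my assumption, and each pool only uses the OSDs of its own device class:

```python
# Rough PG estimate per pool: (OSDs * target PGs per OSD) / replica size,
# rounded to the nearest power of two. replica_size=3 is an assumption.
def suggested_pg_num(num_osds, pgs_per_osd=100, replica_size=3):
    raw = num_osds * pgs_per_osd / replica_size
    power = 1
    while power * 2 <= raw:
        power *= 2
    # pick whichever power of two is closer to the raw value
    return power if raw - power < power * 2 - raw else power * 2

print(suggested_pg_num(16))  # Pool-NVMe: 16 OSDs -> 512
print(suggested_pg_num(10))  # Pool-SSD: 10 OSDs -> 256
```

I am not sure whether these values make sense with the autoscaler active, or whether it is better to just set a target size/ratio per pool and let it decide.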
Thank you