Hello everyone,
I am in the process of building a home lab with three nodes, using a mix of storage devices (NVMe, SSD, and HDD). To organize these, I created three custom CRUSH buckets (`nvme-crush`, `ssd-crush`, and `hdd-crush`) to separate the device types. However, I ran into an issue: the Ceph cluster stays healthy while the OSDs sit in the default CRUSH hierarchy, but as soon as I move them into the custom buckets it starts warning about inactive placement groups (PGs).
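For reference, these are roughly the commands I used to create the buckets and relocate the OSDs (the host bucket name and OSD ID below are placeholders for my actual ones):

```
# Create one custom root bucket per device type
ceph osd crush add-bucket nvme-crush root
ceph osd crush add-bucket ssd-crush root
ceph osd crush add-bucket hdd-crush root

# Add a per-node host bucket under each root, so replication
# can still use host as the failure domain
ceph osd crush add-bucket node1-nvme host
ceph osd crush move node1-nvme root=nvme-crush

# Move each OSD under the matching host bucket
# (osd.0 stands in for one of my actual NVMe OSD IDs)
ceph osd crush move osd.0 host=node1-nvme
```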
Below are the details of the nodes and their respective storage capacities:
Node1:
- 1.8 TB - NVMe
- 3 x 2 TB - SSD
- 819 GB - NVMe
- 3 x 4 TB - HDD
The buckets map devices as follows (see the rule sketch after this list):
- `nvme-crush`: all NVMe devices
- `ssd-crush`: all SSD devices
- `hdd-crush`: all HDD devices
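From what I understand, each custom root then needs its own replicated CRUSH rule, and the pools have to be switched over to it. This is the kind of thing I attempted (the rule and pool names are placeholders of my own):

```
# Create a replicated rule per custom root, keeping host
# as the failure domain
ceph osd crush rule create-replicated nvme-rule nvme-crush host
ceph osd crush rule create-replicated ssd-rule ssd-crush host
ceph osd crush rule create-replicated hdd-rule hdd-crush host

# Point a pool at the matching rule (pool name is a placeholder)
ceph osd pool set nvme-pool crush_rule nvme-rule
```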
Does Proxmox support the use of custom CRUSH hierarchies or buckets? If so, could anyone provide guidance on how to configure this setup?
Thank you in advance for your assistance!