We have upgraded a cluster to PVE 5.1 with Ceph Luminous and have completed migrating all our OSDs to BlueStore. We will be adding dedicated SSD OSDs in the near future and would like to utilise the device class feature.
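For context, my understanding is that Luminous assigns a device class to each OSD automatically, so something along these lines should let us confirm (and, if necessary, correct) how the new SSDs get classified once they're in; the OSD ID below is just a placeholder:

# list the device classes Ceph currently knows about
ceph osd crush class ls
# show the CRUSH tree including the per-class shadow hierarchy
ceph osd crush tree --show-shadow
# if an SSD is mis-detected, its class can be cleared and reassigned (osd.36 is a placeholder ID)
ceph osd crush rm-device-class osd.36
ceph osd crush set-device-class ssd osd.36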
We currently have 3 pools defined:
- rbd
- cephfs_data
- cephfs_metadata
Is there an easy way to update the existing pools so that they don't start consuming the new SSDs once they are added?
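My current thinking, from the Luminous docs, is that a class-restricted CRUSH rule plus a crush_rule change on each pool should do it; corrections welcome:

# create a replicated rule that only selects HDD-class OSDs
ceph osd crush rule create-replicated replicated_hdd default host hdd
# point the existing pools at the new rule (I expect some data movement
# when the rule changes, even though all current OSDs are HDDs)
ceph osd pool set rbd crush_rule replicated_hdd
ceph osd pool set cephfs_data crush_rule replicated_hdd
ceph osd pool set cephfs_metadata crush_rule replicated_hdd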
Also, I understand that the cache tiering code is no longer actively maintained, or at least that Red Hat has indicated it intends to stop active development of it. Should I be reading up on a possible replacement?
I was planning on adding 12 x SSDs (2 per host) and using them as both a cache tier and a dedicated SSD pool. Any suggestions or warnings? (We've selected Intel DC S4600 devices, rated at 3.2 drive writes per day over 5 years and 65k random write IOPS.)
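If we end up going the dedicated-SSD-pool route (with or without a cache tier on top), I assume the pool side would look roughly like this; the pool name and PG count are placeholders we'd still run through pgcalc:

# rule that only selects SSD-class OSDs
ceph osd crush rule create-replicated replicated_ssd default host ssd
# dedicated RBD pool on the SSDs (name and PG counts are placeholders)
ceph osd pool create rbd-ssd 512 512 replicated replicated_ssd
ceph osd pool application enable rbd-ssd rbd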
Lastly, I assume renaming the existing 'rbd' pool to 'rbd-hdd' should just require us to rename the actual pool and subsequently update /etc/pve/storage.cfg?
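i.e. something along these lines, if I've understood the process correctly:

# rename the pool on the Ceph side
ceph osd pool rename rbd rbd-hdd
# then edit the matching RBD storage entry in /etc/pve/storage.cfg,
# changing its 'pool rbd' line to 'pool rbd-hdd'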