Hello,
Thank you for asking this question.
I have the exact same need.
There are 12 SSD OSDs and 4 HDD OSDs in my setup (PVE 5.1 with the integrated Ceph Luminous).
I updated the CRUSH map to add datacenter levels, then created two replication rules with the following commands.
Code:
ceph osd crush rule create-replicated replicated-ssd datacenter host ssd
ceph osd crush rule create-replicated replicated-hdd datacenter host hdd
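For anyone who wants to double-check the same kind of setup, the rules and the per-class shadow trees can be listed with the standard CLI (nothing here is specific to my cluster):
Code:
ceph osd crush rule ls
ceph osd crush tree --show-shadow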
Afterwards, I created two pools (rbd-ssd and rbd-hdd) using the aforementioned replication rules.
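For reference, the pool creation looked roughly like this; the PG numbers below are only an illustration, not the exact values I used:
Code:
ceph osd pool create rbd-ssd 128 128 replicated replicated-ssd
ceph osd pool create rbd-hdd 64 64 replicated replicated-hdd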
However, I am encountering some issues:
I followed model 1.3 from this article for my OSD tree. I created hostnames that differ from the default ones and noticed that the OSD features in the GUI no longer work; I cannot manage the SSD/HDD split with an OSD tree other than the default one. Furthermore, whenever I reboot a PVE node, the default OSD tree is recreated and all OSDs are moved back into it.
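If I understand correctly, the part that moves the OSDs back on boot is the crush-update-on-start behaviour; I have not tried disabling it yet, but I believe it would look like this in ceph.conf:
Code:
[osd]
osd crush update on start = false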
When I add the RBD storage via the PVE GUI, the reported storage space is not just the sum of my SSD (or HDD) capacity, but the sum of all OSDs.
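Comparing with the Ceph CLI might show whether the pools themselves are sized correctly, since as far as I know the MAX AVAIL column in ceph df is computed per pool from its CRUSH rule:
Code:
ceph df
ceph osd df tree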
I have also noticed that performance became very poor once I included the HDD OSDs (average write speed of around 80 MB/s). It feels as if the Ceph integration in PVE cannot use different pools, based on custom replication rules, to target different drive types (SSD and SAS).
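If it helps, a quick way to compare the two pools independently of PVE is rados bench, for example (pool names as above, the duration is arbitrary):
Code:
rados bench -p rbd-ssd 60 write
rados bench -p rbd-hdd 60 write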
Best regards,
Saiki