ceph osd crush rule create-replicated replicated-ssd datacenter host ssd
ceph osd crush rule create-replicated replicated-hdd datacenter host hdd
ceph osd crush rule create-replicated replicated-ssd root-ssd datacenter
ceph osd crush rule create-replicated replicated-hdd root-hdd datacenter
ceph osd crush rule create-replicated replicated-ssd default datacenter ssd
ceph osd crush rule create-replicated replicated-hdd default datacenter hdd
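These are variants of the same command; the general form is ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> [<device-class>], so the lines above differ only in which bucket acts as the root, which bucket type is the failure domain, and whether a device class restricts the rule to ssd or hdd OSDs. A pool is then tied to one of the rules, for example (pool name and PG count below are placeholders):

ceph osd pool create ssd-pool 128 128 replicated replicated-ssd   # create a pool directly with the rule
ceph osd pool set ssd-pool crush_rule replicated-ssd              # or assign the rule to an existing pool
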
> I followed the 1.3 model of this article for my OSD tree.
Which article are you referring to?

> Furthermore, when I reboot the PVE nodes, I noticed that the default OSD tree is created again and every OSD is placed there again.
Is this still the case?

> When I set up the RBD storage via the PVE GUI, the storage space shown is not only the sum of my SSD or HDD space, but the sum of all OSDs.
This is known and being worked on.

> I have also noticed that performance became very bad once I included the HDD OSDs (write average of 80 MB/s). It feels like the Ceph integration within PVE is not capable of using different pools based on custom replication rules to target different drive types (SSD and SAS).
PVE can handle different pools with different rulesets. Here, most likely all OSDs have been used.

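To verify which rule a pool is actually using and whether data really lands only on the intended OSDs, the following should help (the pool name is a placeholder):

ceph osd pool get <pool> crush_rule   # rule currently assigned to the pool
ceph osd pool ls detail               # crush_rule and replication settings for every pool
ceph df                               # per-pool usage compared to raw cluster capacity
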
ceph osd crush tree --show-shadow
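For context: the command above lists the shadow hierarchies (for example default~ssd and default~hdd) that Ceph maintains automatically for each device class, which is what class-based rules select from. Two related commands, in case they are useful here:

ceph osd crush class ls   # device classes known to the cluster
ceph osd df tree          # utilization per OSD, grouped by the crush tree
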
> I deliberately did not put the SAS OSDs in the new crush map tree because of the performance issue.
See above.

> My performance issue occurs on both crush map trees.
crushtool -i crush.map --test --show-X
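The --show-X above presumably stands for one of crushtool's --show-* switches. A minimal sketch of testing a rule's mappings offline; the rule id and replica count are assumptions, adjust them to your setup:

ceph osd getcrushmap -o crush.map                                      # export the compiled crush map
crushtool -i crush.map --test --rule 1 --num-rep 3 --show-mappings     # which OSDs each input maps to
crushtool -i crush.map --test --rule 1 --num-rep 3 --show-statistics   # summary of the mapping results

This can help confirm that a pool using the ssd rule is never mapped onto the hdd/SAS OSDs.
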
> Is this still the case?
I eventually got this issue solved: I chose to use classes in order to separate hdd and ssd, and it works fine.

> This is known and being worked on.
I understand that it is a known issue and the PVE team is currently working to fix it, am I correct?

> I am referring to this article: http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map
Thought so.

> I chose to use classes in order to separate hdd and ssd, and it works fine.
This is also the recommended way, as it involves less hassle on configuration.

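For anyone landing here later, a minimal sketch of the class-based setup being recommended; the OSD id, pool name, and the choice of host as failure domain are assumptions to adapt to your own tree:

ceph osd crush rm-device-class osd.0        # clear an auto-assigned class first, if one is set
ceph osd crush set-device-class ssd osd.0   # tag the OSD with the class you want
ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd pool set <ssd-pool> crush_rule replicated-ssd

With this, all OSDs stay in the single default tree and the rules pick devices by class, so nothing has to be moved between custom roots.
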
> This issue probably occurs because I changed the node names, and I guess PVE could not match them with the real node names.
> I understand that it is a known issue and the PVE team is currently working to fix it, am I correct?
Yep, exactly.

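The thread does not spell out the fix, but the generic Ceph knob for OSDs falling back into the default tree after a restart or a node rename is the start-up CRUSH update. A hedged ceph.conf sketch, with placeholder bucket names:

[osd]
# keep OSDs exactly where they were placed in the crush map
osd crush update on start = false
# or keep the automatic update but make the target location explicit:
# crush location = root=default host=<nodename>

The first variant keeps OSDs wherever they were placed manually; the second keeps the automatic update but pins the location it moves them to.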