I found that yesterday a few minutes after posting my question...

KNOWN BAD WORKLOADS
The following configurations are known to work poorly with cache tiering.
- RBD with replicated cache and erasure-coded base: This is a common request, but usually does not perform well. Even reasonably skewed workloads still send some small writes to cold objects, and because small writes are not yet supported by the erasure-coded pool, entire (usually 4 MB) objects must be migrated into the cache in order to satisfy a small (often 4 KB) write. Only a handful of users have successfully deployed this configuration, and it only works for them because their data is extremely cold (backups) and they are not in any way sensitive to performance.
- RBD with replicated cache and base: RBD with a replicated base tier does better than when the base is erasure coded, but it is still highly dependent on the amount of skew in the workload, and very difficult to validate. The user will need to have a good understanding of their workload and will need to tune the cache tiering parameters carefully.
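For context, a replicated cache in front of an erasure-coded base (the first, discouraged configuration in the list above) is wired up roughly like this; the pool names and PG counts are made up, and the erasure-coded pool uses the default profile:

```
# DISCOURAGED per the list above; shown only to make the terminology concrete.
# Create an erasure-coded base pool and a replicated cache pool
# (hypothetical names and PG counts).
ceph osd pool create rbd-base 128 128 erasure
ceph osd pool create rbd-cache 128 128 replicated
# Layer the cache pool on top of the base pool in writeback mode,
# and route client I/O through the cache.
ceph osd tier add rbd-base rbd-cache
ceph osd tier cache-mode rbd-cache writeback
ceph osd tier set-overlay rbd-base rbd-cache
```

Every small write that misses the cache then promotes the whole (usually 4 MB) backing object, which is exactly the cost the docs warn about.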
Don't. Instead, create two pools: one for the SSDs (enterprise class) and one for the HDDs, as sketched below.
http://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/#known-bad-workloads
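A minimal sketch of the two-pool approach, assuming a Luminous-or-newer cluster where the OSDs already carry hdd/ssd device classes; the rule names, pool names, and PG counts below are placeholders:

```
# One CRUSH rule per device class (replicated, failure domain = host).
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd crush rule create-replicated hdd-rule default host hdd

# One pool per rule: a fast pool on the SSDs, a bulk pool on the HDDs.
ceph osd pool create fast-pool 128 128 replicated ssd-rule
ceph osd pool create slow-pool 512 512 replicated hdd-rule
```

An existing pool can also be moved over later with `ceph osd pool set <pool> crush_rule ssd-rule`; the cluster starts rebalancing onto the matching OSDs right away.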
OSD IDs are reusable and not fixed, and when you edit the CRUSH map things like this can become possible. But if you don't have a deep understanding of what will happen to your data, I advise against it. To give you an idea, here is an older post; it is still valid for the most part.

Yeah, so you can make a pool which targets a specific device class (HDD, SSD, NVMe), but you can't target a specific OSD by its ID or its name in the CRUSH map, which makes sense, since an average production cluster hosts far more than 10 OSDs.
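To see what class-based targeting has to work with, you can inspect (and, if autodetection got it wrong, correct) the device classes; osd.3 below is just an example ID:

```
# List the device classes present in the CRUSH map
ceph osd crush class ls
# List the OSDs that carry a given class
ceph osd crush class ls-osd ssd
# Reclassify an OSD (example ID); the old class must be removed first
ceph osd crush rm-device-class osd.3
ceph osd crush set-device-class ssd osd.3
```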
Is it possible to select on which OSDs your pool will be placed (in case I stay with a hard-drive-only cluster)?

See above.