Ceph SSD cache for HDD array: which pool do I add to Proxmox, SSD or HDD?

apap

I am a beginner with Proxmox and Ceph, currently in the middle of testing for deployment to a production environment.

What I've done (hopefully to help others puzzling over the same issue as well as to verify that my work is correct):

1. Installed PVE and activated Ceph

2. Added 2 x SSD drives (osd.0 & osd.1) and 2 x HDD drives (osd.2 & osd.3)

3. Added replication rules (I only have 1 node, so the failure domain is osd, i.e. replication across OSDs on the local host):
a. ceph osd crush rule create-replicated ssd_localraid default osd ssd
b. ceph osd crush rule create-replicated hdd_localraid default osd hdd
Format: ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <device-class> (failure domain "osd" spreads replicas across OSDs on the local host, "host" mirrors data across different hosts)
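To sanity-check the rules and the auto-detected device classes before moving on, these standard commands can be used (the rule names just match what I created above):
ceph osd tree (shows the ssd/hdd class assigned to each OSD)
ceph osd crush rule ls (should list ssd_localraid and hdd_localraid alongside the default rule)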

4. Created "ssd_cache" pool using "ssd_localraid" crush rule, and "hdd_store" pool using "hdd_localraid" crush rule. I did not automatically add these two pools as proxmox storage, I figure I will do this manually later.

5. Activated "ssd_cache" pool as cache for "hdd_store" pool by the following commands:
a. ceph osd tier add hdd_store ssd_cache​
b. ceph osd tier cache-mode ssd_cache writeback​
c. ceph osd tier set-overlay hdd_store ssd_cache​
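The cache tiering documentation also expects the cache pool to have hit set and sizing parameters configured before it behaves sensibly; a minimal sketch (values are placeholders to adjust for your hardware):
ceph osd pool set ssd_cache hit_set_type bloom
ceph osd pool set ssd_cache hit_set_count 12
ceph osd pool set ssd_cache hit_set_period 14400
ceph osd pool set ssd_cache target_max_bytes 100000000000
ceph osd pool set ssd_cache cache_target_dirty_ratio 0.4
ceph osd pool set ssd_cache cache_target_full_ratio 0.8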
6. Now I am stumped. For the next step, which is adding the pool as the VM storage in Proxmox, which pool do I add? The "hdd_store" pool or the "ssd_cache" pool?

My confusion stems from the last step, 5c: the cache tiering documentation states "The cache tiers overlay the backing storage tier, so they require one additional step: you must direct all client traffic from the storage pool to the cache pool." Common sense dictates that the hdd_store pool should be added as storage, but the fact that ssd_cache is overlaying hdd_store introduces uncertainty about which pool should be added to Proxmox.
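For reference, the tier relationship set up in step 5 can be inspected with ceph osd pool ls detail; if I read the output correctly, ssd_cache should show up as tier_of the hdd_store pool, and hdd_store should have its read_tier/write_tier pointing at the cache pool.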
 
If you have SSD and HDD OSDs, consider creating two pools with a matching storage configuration in Proxmox VE, treat them as the fast and slow pool, and place VM disks as needed.
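A minimal sketch of that setup, assuming two plain (non-tiered) pools named ssd_pool and hdd_pool on a hyper-converged PVE node (storage IDs are placeholders):
pvesm add rbd ceph-ssd --pool ssd_pool --content images,rootfs
pvesm add rbd ceph-hdd --pool hdd_pool --content images,rootfs
VM disks that need speed then go on ceph-ssd, bulk disks on ceph-hdd.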

Thank you for the answer. However, your answer would apply if I were not using the Ceph SSD cache feature.

In the Ceph caching scenario, we have two pools that are related/overlaid: an SSD pool overlaying the HDD pool.

The question is: in Proxmox, which pool do I add as storage, the SSD or the HDD pool?
 
Not the answer you are looking for, but cache tiering is not recommended in most situations: https://docs.ceph.com/en/latest/rados/operations/cache-tiering/#a-word-of-caution

I've thought about what you said here. Together with another person's comment elsewhere, this warning finally sank in. I will repeat my conclusion as it may help other beginners to PVE and Ceph.

With RBD storage, cache tiering is not recommended. Excerpt from another user's comment on this: "Filesystems (RBD?) tend to do a lot of disk IO across the whole device in the background (i.e. ZFS scrubs, other filesystem cleanup) that would pull blocks into cache unnecessarily and copy on write filesystems also tend to pull the entire disk into cache as blocks are allocated and discarded when a file is modified, whereas with CephFS and RGW, Ceph is directly aware of the IO operation and knows which files should be kept in cache."

My next question is then this: in a budget-constrained production environment (budget for HDD instead of SSD), given that cache tiering is not recommended, is the solution for best performance to split off the metadata, putting the DB/WAL on a few SSDs and keeping the data on the HDDs?
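If so, I assume OSD creation on PVE would look something like this (device paths are placeholders, and the same SSD/NVMe would carry the DB/WAL for several HDD OSDs):
pveceph osd create /dev/sdX --db_dev /dev/nvme0n1
pveceph osd create /dev/sdY --db_dev /dev/nvme0n1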
 
