ceph SSD and HDD pools

flexyz

Well-Known Member
Sep 22, 2016
Hi

Is there a guide on how to create two pools (fast and slow) on Proxmox with Ceph? I guess it all has to be done from the CLI.

Thanks
Felix
 
I have a similar configuration: one pool made of HDDs and one made of SSDs.
If the SSDs are correctly recognized, you should see the right device class for the OSDs:
Code:
$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       8.93686 root default
-6       2.94696     host pve-hs-2
 3   hdd 0.90959         osd.3            up  1.00000 1.00000
 4   hdd 0.90959         osd.4            up  1.00000 1.00000
 5   hdd 0.90959         osd.5            up  1.00000 1.00000
10   ssd 0.21819         osd.10           up  1.00000 1.00000
-3       2.86716     host pve-hs-3
 6   hdd 0.85599         osd.6            up  1.00000 1.00000
 7   hdd 0.85599         osd.7            up  1.00000 1.00000
 8   hdd 0.93700         osd.8            up  1.00000 1.00000
11   ssd 0.21819         osd.11           up  1.00000 1.00000
-7       3.12274     host pve-hs-main
 0   hdd 0.96819         osd.0            up  1.00000 1.00000
 1   hdd 0.96819         osd.1            up  1.00000 1.00000
 2   hdd 0.96819         osd.2            up  1.00000 1.00000
 9   ssd 0.21819         osd.9            up  1.00000 1.00000
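If an OSD's class was detected wrongly (for example an SSD behind a RAID controller reported as hdd), it can be reassigned by hand; a minimal sketch, assuming osd.10 is the affected OSD:

```shell
# Remove the wrongly detected device class from the OSD...
ceph osd crush rm-device-class osd.10
# ...then set the correct class manually
ceph osd crush set-device-class ssd osd.10
```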

In this case you must have two rules in the CRUSH map, one targeting only the HDDs and one targeting only the SSDs, as I have in mine:
Code:
# rules
rule replicated_hdd {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_ssd {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}
You can create such a rule with ceph osd crush rule create-replicated.
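A sketch of that command for the two rules above (the arguments are rule name, CRUSH root, failure domain, and device class):

```shell
# Replicate across hosts, restricted to one device class under the default root
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
```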

When you have the correct rules in the CRUSH map, just create the pools with the right rule. In my case I have some pools with rule 1 (replicated_hdd) and the pool cephssd with rule 2 (replicated_ssd):
Code:
$ ceph osd pool ls detail
pool 13 'cephwin' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 16454 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~5]
pool 14 'cephnix' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 16560 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~247]
pool 17 'cephssd' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 8601 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~3]
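Creating a pool already bound to one of those rules is a single command; a sketch, where the PG counts are just an assumption you should size for your own cluster:

```shell
# Create a 64-PG replicated pool governed by the SSD-only rule
ceph osd pool create cephssd 64 64 replicated replicated_ssd
# Mark it for RBD use, as shown in the listing above
ceph osd pool application enable cephssd rbd
```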

Once you have created the pools, you simply add them from the Datacenter - Storage GUI in Proxmox:
Add - RBD (PVE) if Ceph runs on the same hosts as Proxmox.
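The same storage entry can also be created from the CLI with pvesm; a minimal sketch, where the storage ID ceph-ssd is a hypothetical name of your choosing:

```shell
# Equivalent of Datacenter - Storage - Add - RBD (PVE) in the GUI
pvesm add rbd ceph-ssd --pool cephssd --content images,rootdir
```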
 
Please tell me, can I change the rule on an already working pool without data loss?

The default rule is used for the pool, and all storage media are SSDs. I need to create a dedicated pool for cold data, so HDDs were added and a new rule was created. Unfortunately I have VMs that cannot be stopped, and I have not found any information on how a rule change affects the data in the pool.
Code:
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_ssd {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}

The difference between the old and the new rule is in specifying the device class
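As far as I know, the assigned rule is just a pool property: setting a new one keeps the pool online and triggers a background rebalance onto the OSDs the new rule targets. A sketch, assuming a hypothetical pool name mypool:

```shell
# Point the existing pool at the SSD-only rule; PGs backfill in the background
ceph osd pool set mypool crush_rule replicated_ssd
# Monitor the resulting data movement
ceph -s
```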
 
Please tell me, can I change the rule on an already working pool without data loss?
Bump. Any thoughts on this? I've found myself in the same position: I have an existing pool with 52 HDD OSDs and would like to add a second pool of SSD OSDs without taking down the HDD pool. Can I edit the existing pool to target only HDDs *before* I introduce the SSD devices, and then create the second, SSD-only pool?