I'm playing with a cluster composed of 3 machines with 3 HDD OSDs per machine (9 OSDs in total). It is a test environment for learning, but I don't want to destroy it with this "expansion". I have a VM running an application that does not tolerate slow HDD performance, and since I have 3 SSDs, I want to add another pool composed only of SSDs (1 SSD per machine) and migrate the VM disk to this new pool.
I found something relevant to the task in the docs (here and here in the Ceph documentation). I think this post is also relevant, and I'm trying to derive a strategy to do it without destroying the existing pool, if possible.
As of now I have 4 pools:
- .mgr
- ha-pool (the one I'm using for VM disk)
- cephfs_data
- cephfs_metadata
All of them use the default CRUSH rule `replicated_rule`.

My strategy is the following:
- Force the existing `replicated_rule` to accept only devices of the hdd class. I see two ways and I don't know which is better:
  - creating a new rule (`replicated_rule_hdd`) with the command: `ceph osd crush rule create-replicated replicated_rule_hdd default host hdd`
  - modifying the existing rule, but I have yet to understand if that is even possible. I think the Proxmox docs are telling me to do the first.
- If I create the new rule, edit the ha-pool CRUSH Rule in the interface (Advanced option in the "Edit: Ceph Pool" window) to select the new one. I'll do the same with cephfs_data.
- I'm struggling to understand the effect of point 2. Will it destroy something?
- I will start adding the SSD OSDs, one per machine, created with the "Create: Ceph OSD" window, setting the Device Class (Advanced option) to ssd.
- I will create a new rule `replicated_rule_ssd` with the command: `ceph osd crush rule create-replicated replicated_rule_ssd default host ssd`
- Create a new pool ha-fast-pool, added as storage, with the CRUSH rule `replicated_rule_ssd`.
- Optionally, I could also switch .mgr and cephfs_metadata to the SSD CRUSH rule.
- Finally, move the VM disk to the new ha-fast-pool storage.
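For reference, the steps above could be sketched as the following command sequence. This is a sketch under my assumptions, not a tested procedure: the pool names come from my cluster, the PG count of 32 is a placeholder I picked for a small pool, and reassigning a pool's CRUSH rule triggers data movement (backfill) rather than data loss, as far as I understand:

```shell
# 1. Create an HDD-only replicated rule (root "default", failure domain "host")
ceph osd crush rule create-replicated replicated_rule_hdd default host hdd

# 2. Point the existing pools at the HDD rule
#    (equivalent to changing "Crush Rule" in the Proxmox "Edit: Ceph Pool" window;
#    this rebalances data, it does not delete anything)
ceph osd pool set ha-pool crush_rule replicated_rule_hdd
ceph osd pool set cephfs_data crush_rule replicated_rule_hdd

# 3. After the SSD OSDs are added, create the SSD-only rule
ceph osd crush rule create-replicated replicated_rule_ssd default host ssd

# 4. Create the new pool with the SSD rule (32 PGs is an assumption; size it
#    for your cluster or let the autoscaler handle it) and enable it for RBD
ceph osd pool create ha-fast-pool 32 32 replicated replicated_rule_ssd
ceph osd pool application enable ha-fast-pool rbd
```

In Proxmox I would probably do step 4 through the GUI instead, so the pool is registered as storage automatically.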
Do you see any blockers?
Sorry for this bunch of confused ideas...