How to allocate specific OSDs to a specific Ceph pool

sandeep15

New Member
Apr 23, 2025
I have 20 OSDs (HDDs) in my Ceph cluster and want to allocate 15 OSDs to one pool and 5 OSDs to another pool without causing downtime, as this is a production environment. Can someone provide guidance on how to achieve this safely and efficiently, considering my limited experience with Ceph?
 
Hello, I would recommend testing this in a virtual PVE environment first! First, you need to assign a different device class (for example hdd-2) to the 5 OSDs. Then create two custom CRUSH rules, because the default rule takes all hosts/OSDs into consideration. Finally, assign the new rules to the pool(s). The steps (a consolidated sketch follows the list):

  • remove the old device-class from the 5 osds: ceph osd crush rm-device-class osd.x
  • set new device class to the 5 osds: ceph osd crush set-device-class hdd-2 osd.x
  • show current crush-rules: ceph osd crush rule ls
  • create a new crush-rule: ceph osd crush rule create-replicated replicated-hdd default host hdd
  • create the second crush-rule: ceph osd crush rule create-replicated replicated-hdd-2 default host hdd-2
  • recheck the crush rules: ceph osd crush rule ls
  • set the crush rule on the first pool: ceph osd pool set POOL-NAME crush_rule replicated-hdd
  • set the crush rule on the second pool: ceph osd pool set POOL2-NAME crush_rule replicated-hdd-2
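
For reference, here is the whole sequence as a small shell sketch. The OSD IDs (osd.15 through osd.19) and the pool names (pool-hdd and pool-hdd-2) are placeholders I made up; replace them with the actual IDs and pool names from your cluster:

  # Move the 5 chosen OSDs to the new device class (IDs are placeholders).
  for id in 15 16 17 18 19; do
      ceph osd crush rm-device-class osd.$id
      ceph osd crush set-device-class hdd-2 osd.$id
  done

  # Create one replicated rule per device class, failure domain = host.
  ceph osd crush rule create-replicated replicated-hdd default host hdd
  ceph osd crush rule create-replicated replicated-hdd-2 default host hdd-2

  # Verify the classes and rules before touching the pools.
  ceph osd crush class ls
  ceph osd crush rule ls

  # Point each pool at its rule (this is the step that triggers data movement).
  ceph osd pool set pool-hdd crush_rule replicated-hdd
  ceph osd pool set pool-hdd-2 crush_rule replicated-hdd-2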
This is just written from memory and not tested, but it should work! Just remember: if you change the crush rule of a pool (by setting it), the data gets moved automatically according to the new rule --> that means data movement, which can lead to a performance decrease while the data is being relocated.
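
To keep that data movement gentle on a production cluster, you can watch the progress and, if you like, throttle backfill while it runs. The option names below are standard Ceph settings, but depending on your release (and whether the mClock scheduler is active) they may behave slightly differently, so treat this as a starting point rather than a recipe:

  # Watch recovery/backfill progress and per-OSD utilisation.
  ceph -s
  ceph osd df tree

  # Optionally slow down backfill while data is being moved.
  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 1

  # Remove the overrides again once the cluster is back to HEALTH_OK.
  ceph config rm osd osd_max_backfills
  ceph config rm osd osd_recovery_max_active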