New pool in Ceph - without touching the old one

Hi,
I have a 4-node Proxmox cluster with Ceph.
The version is 5.3. I have one pool with 16 OSDs, all SSD.
Now I have added several nodes to the cluster and I want the new OSDs to be part of a different pool.
I understand that this can only be done via device class settings:

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
ceph osd pool set <pool-name> crush_rule <rule-name>

So I have several questions:
<class> should be ssd/hdd/etc, but both of my pools will be SSD. Just different models. What should I do here?
What is a CRUSH root?
and the biggest question is this:
I don't want to touch the existing pool - it's in critical production.
The documentation states the following:
If the pool already contains objects, all of these have to be moved accordingly. Depending on your setup this may introduce a big performance hit on your cluster. As an alternative, you can create a new pool and move disks separately.
How can I do this?


Thanks
 
Excellent, thanks
What about the other questions? Especially the last one.
 
Not really sure what you're asking, but I'll walk down your list.

So I have several questions:
<class> should be ssd/hdd/etc, but both of my pools will be SSD. Just different models. What should I do here?

Create new OSDs with device class ssd2. Create a new CRUSH rule that uses the ssd2 OSDs. Create a new pool with the new rule.
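
A rough sketch of what those steps could look like on the CLI (the OSD IDs, the ssd2 class name, and the rule/pool names below are only placeholders, so adjust them to your setup):

# tag the new OSDs with a separate device class (remove any auto-assigned class first)
ceph osd crush rm-device-class osd.16 osd.17 osd.18 osd.19
ceph osd crush set-device-class ssd2 osd.16 osd.17 osd.18 osd.19

# replicated rule that only picks ssd2 OSDs, spread across hosts under the default root
ceph osd crush rule create-replicated ssd2_rule default host ssd2

# new pool bound to that rule (pg_num/pgp_num of 128 is just an example)
ceph osd pool create ssd2_pool 128 128 replicated ssd2_rule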
What is a CRUSH root?

CRUSH is defined as a hierarchy of nested buckets; the root is, well, the root of that hierarchy. See http://docs.ceph.com/docs/master/rados/operations/crush-map/
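
If you want to see what your hierarchy (and its root, usually called "default") looks like, something like this should do:

# prints the CRUSH hierarchy; the top-level bucket is the <root> you pass to create-replicated
ceph osd crush tree

# also lists the per-device-class shadow hierarchies
ceph osd crush tree --show-shadow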
If the pool already contains objects, all of these have to be moved accordingly. Depending on your setup this may introduce a big performance hit on your cluster. As an alternative, you can create a new pool and move disks separately.
How can I do this?

Do you just want to migrate objects to the new pool? Click "Move Volume"...

[screenshot: the Move Volume option in the Proxmox GUI]
 
Create new OSDs with device class ssd2. Create a new CRUSH rule that uses the ssd2 OSDs. Create a new pool with the new rule.
In this order? I would think that first I should create the rule, then the pool, and then the OSDs; otherwise the OSDs get attached to the old pool.


Do you just want to migrate objects to the new pool? Click "Move Volume"...
I just want to create the new pool without touching the old one. I mean I don't want to create any CRUSH rules for the old pool, and I definitely don't want to cause any data movement or I/O load. Is this possible, and how?

Thanks!
 
In this order? I would think that first I should create the rule, then the pool, and then the OSDs; otherwise the OSDs get attached to the old pool.

The only variable part is when to create the OSDs; as long as they are created with a different device class than the disks you're using in your original pool, they will not be used in your original pool.

I just want to create the new pool without touching the old one. I mean I don't want to create any CRUSH rules for the old pool, and I definitely don't want to cause any data movement or I/O load. Is this possible, and how?
As we discussed above :) as long as your OSDs are created with a different device class, they will not cause any I/O on your original pool.
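
If you want to double-check that before creating the pool, something like this should show it (the pool name is a placeholder):

# shows each OSD with its device class and position in the hierarchy
ceph osd tree

# shows which CRUSH rule a given pool is bound to
ceph osd pool get <pool-name> crush_rule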
 
Great, thanks!
So let me just confirm one last time.
I can actually create only one CRUSH rule (for ssd2, for example) and attach the new OSDs to it, and I don't have to create any rules for the old existing pool, right?
Sorry for asking the same thing over and over - I just don't want anything to happen to the production environment.
 
Hi again,
so I tested this and this is what I got:
I created a new device class "ssd2" and set up a rule that uses the ssd2 OSDs.
I disabled the automatic assignment of the device class.
Then I created a new pool with this rule.
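
(For reference, the automatic class assignment mentioned above is usually disabled with the osd_class_update_on_start option in ceph.conf on the OSD nodes, roughly like this:

[osd]
osd_class_update_on_start = false
)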

It was assigned OK; all the newly created OSDs were ssd2.
But then it started rebalancing, and eventually I realized that while it did create a new pool with the new OSDs,
it also assigned all these OSDs to the old pool.

I can see it via the "used space", and I also see it on the OSD page - all the OSDs are about 20% used, from both the old pool and the new pool.

So what am I doing wrong?

Thanks!
 
The default rule holds all OSDs. You need to create a new rule and assign your pools accordingly.
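
A sketch of what that could look like, assuming the original OSDs carry the plain ssd class and the existing pool is named old_pool (both names are placeholders). Keep in mind that pointing an existing pool at a new rule can shuffle data around:

# rule restricted to the original ssd class
ceph osd crush rule create-replicated ssd_rule default host ssd

# bind the existing pool to it
ceph osd pool set old_pool crush_rule ssd_rule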
 
Understood.
Will I be able to change the default pool to work with the new rule, or will I have to create a new pool and transfer all the OSDs over?

One more question:
On the summary view of a Ceph pool I see that the TOTAL space sometimes grows a bit (from 3.44 TB to 3.55 TB, for example) - how is that possible?
Can I also assume that in the latest version we see (sum of all OSDs - 10%)/3 on the pool view? With replica 3 and a single pool, of course.
 
Will I be able to change the default pool to work with the new rule, or will I have to create a new pool and transfer all the OSDs over?
You can change which rule a pool is using. As @alexskysilk already said, please read the ceph docs.
https://forum.proxmox.com/threads/ceph-raw-usage-grows-by-itself.38395/#post-189842

On the summary view of a Ceph pool I see that the TOTAL space sometimes grows a bit (from 3.44 TB to 3.55 TB, for example) - how is that possible?
Can I also assume that in the latest version we see (sum of all OSDs - 10%)/3 on the pool view? With replica 3 and a single pool, of course.
Check out 'ceph df detail'. Pools show the available/used before replication, whereas global shows RAW usage/free on the cluster.
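To make the relationship concrete: with a replica count of 3, a pool that reports 1 TiB used in 'ceph df detail' consumes roughly 3 TiB of the raw used space shown in the GLOBAL section, and the pool's available space is roughly the raw free space divided by 3.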
 
Hi!
Did you manage to complete this?
Got the same question - existing HDD pool and newly added SSDs for the new pool.
Did you bind a rule to the existing pool?
Any performance issues after that?
 
