[SOLVED] ceph - add a new osd pool group?

RobFantini

Hello,
we have a Ceph cluster with 24 OSDs.

We are purchasing 30 SSDs [ which are 3x better than the current 24 ].

My question is this -

Can I create a new OSD pool that somehow uses just the new SSDs? Then I'd move the VMs to the new storage and, when done, remove the original pool on the 24 old SSDs.

Or is there a better way to change over to the new SSDs?
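
For reference, what I have in mind is roughly the following, assuming a Ceph release with CRUSH device classes (Luminous or newer); the class, rule and pool names, the OSD ID and the PG count are just placeholders:

# give the new OSDs their own device class (an existing class has to be removed first)
ceph osd crush rm-device-class osd.24
ceph osd crush set-device-class newssd osd.24

# CRUSH rule that only picks OSDs of that class, spread across hosts
ceph osd crush rule create-replicated replicated-newssd default host newssd

# new pool using that rule
ceph osd pool create ceph-newssd 1024 1024 replicated replicated-newssd

Then I'd move the VM disks to the new pool (Move disk in the Proxmox GUI) and delete the old pool once it is empty.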
 
I have another option - excuse the rambling....

I could just add these 30 SSDs to the Ceph cluster; the chassis can handle that [ 24 x 2.5" drive bays ]. I suppose that would make for a more durable Ceph setup?

The thing is, the 24 current drives are 480GB and the new ones are 400GB. I think there is no issue with that, as long as each node has the same quantity of 400GB and 480GB SSDs?
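
As far as I understand, Ceph already weights each OSD by its raw capacity, so the 400GB drives would get a slightly smaller CRUSH weight than the 480GB ones and receive proportionally less data; as long as each node ends up with the same total capacity, the nodes should stay balanced. I'd verify it with something like:

# per-OSD CRUSH weight, utilization and PG count, grouped by host
ceph osd df tree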
 
So I decided to just add drives and not replace the existing ones.
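
If I read the docs right, adding each new drive on a node would be roughly the following (the device path is just an example for our hardware, and the syntax is the Proxmox 4.x/5.x one):

# create a new OSD on a blank disk from the Proxmox CLI
pveceph createosd /dev/sdX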

Ideally we'd use drives that are all the same size.

However, I assume that making sure each node has the same mix of drives [ 5x 480GB and 6x 400GB ] is OK.

The existing drives are Intel S3520 480GB; 30 Intel S3610 400GB drives will be added.

We do not need the extra capacity - my assumption is that more good drives make for a faster and more stable Ceph storage system. If I am wrong or you have a suggestion/comment, do reply!
 
RobFantini said:
We do not need the extra capacity - my assumption is that more good drives make for a faster and more stable Ceph storage system. If I am wrong or you have a suggestion/comment, do reply!

No, that is correct: more OSDs = faster Ceph. I'd say this is, in most cases, the right decision.