Ceph Question: Replace OSDs

Jospeh Huber

Renowned Member
Apr 18, 2016
Hi,

we have an up-and-running three-node Ceph cluster, at the moment with one OSD and one monitor per node.
It has one Ceph pool which is replicated across the three nodes, with a calculated pg_num of 128 and size 3 / min_size 2.
We would like to add new, faster disks on all nodes.
They are already installed in the three nodes; they are faster and bigger (SSDs vs. HDDs, 900 GB vs. 500 GB).
I have never done this before, which is why I ask ;-)
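
For reference, this is roughly how we check the current setup on the command line (the pool name below is just a placeholder for our pool):

ceph osd tree                          # one OSD under each of the three hosts
ceph osd pool get <poolname> pg_num    # currently 128
ceph osd pool get <poolname> size      # 3
ceph osd pool get <poolname> min_size  # 2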

My rough idea is (a rough command sketch follows below the list):
1. add the three new disks with the GUI to each node
2. set the weight so that the data is moved to the new disks on each node
3. take out the old ones
4. adapt pg_num according to the new sizes
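
On the command line I guess step 1 would look something like this per node (the device path is only a placeholder, and I am not sure whether the pveceph syntax below matches the installed version, so please correct me):

pveceph createosd /dev/sdX    # newer versions: pveceph osd create /dev/sdX; same as the Create OSD button in the GUI
ceph osd tree                 # check that the new SSD OSD shows up under the right host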


Thx in advance.
 
Generally, yes, that is the preferred approach for small clusters. But, to clarify your step 2: adjust the CRUSH weight of each HDD you intend to remove one by one, allowing the cluster to rebalance after each, e.g. ceph osd crush reweight osd.{osd-num} 0. Regarding your step 4: the PG (and PGP) count is determined by the number of OSDs -- not by the size of the OSDs. Therefore, no change should be necessary after removing the old OSDs.
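
As a rough sketch of that drain-and-remove cycle for one HDD (osd.0 is only an example ID, and the exact service handling depends on your release):

ceph osd crush reweight osd.0 0    # CRUSH weight 0: data is moved off this OSD
ceph -s                            # wait until all PGs are active+clean again
ceph osd df tree                   # verify osd.0 is (nearly) empty before touching the next one

# once it is empty, remove it for good:
ceph osd out osd.0
systemctl stop ceph-osd@0          # on the node that hosts osd.0
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0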
 
Thanks for your advice to be careful with step 2 and to rebalance after each OSD.
Is it the same for step 1 (one new disk at a time)? Because if there is more space on all nodes, the cluster tries to rebalance onto every node at the same time.
 
I think it depends. If you go host by host, you get an imbalance between the hosts and data migrates between them.
If you add all 3 disks at the same time, then at least the host weights stay equally distributed.
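
For example, after creating one SSD OSD on each of the three nodes, something like this can confirm that the per-host CRUSH weights are still roughly equal:

ceph osd tree    # compare the WEIGHT of the three host buckets; they should be about the same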
 
Hi,

the migration has worked perfectly as described above :)
The SSDs were added as OSDs at nearly the same time, the cluster rebalanced, and then we reweighted the HDDs to zero.
The local replication was fast...

Thx