Hello,
recently two disks on two different servers of a hyperconverged PVE cluster died. Ceph rebalanced and is healthy again. So I will get two new disks, insert them into the nodes and then.....?
At the moment both OSDs are marked down and out in the output of ceph osd tree, and both are still part of the CRUSH map.
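For context, this is roughly how I check the current state on the nodes before touching anything (output trimmed, nothing special):

    # overall cluster health and any remaining rebalance activity
    ceph -s

    # which OSDs are down/out and where they sit in the CRUSH tree
    ceph osd tree

    # per-OSD usage, to confirm the dead OSDs no longer hold data
    ceph osd df tree

    # check whether LVM volumes of the dead OSDs still linger on the node
    ceph-volume lvm list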
My plan would be to run

    ceph-volume lvm create --bluestore --osd-id {original-id} --data /dev/sdX

for each of the new, unused disks I see. Possibly they would be marked in afterwards on their own. If this does not happen, I could tell Ceph to mark both in by running ceph osd in <osd_id>.
Afterwards a simple start of the OSD(s) in question should make them up again, and Ceph should start moving data to the new disks.
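If the daemon does not start on its own after the ceph-volume create, this is what I would expect to run on the node (again with osd.12 as a placeholder):

    # start the OSD daemon on the node that holds the new disk
    systemctl start ceph-osd@12

    # watch the OSD come up and the backfill onto the new disk
    ceph osd tree
    ceph -s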
Is this workflow OK to solve the problem, or which way should I take instead to get the OSDs up and running again?