Ceph OSD reuse after reinstall

Aug 1, 2017
Hi all,

I have a Proxmox node that was a member of a Ceph cluster and had to be reinstalled from scratch.
I reinstalled the node (same Proxmox version as the other nodes) and added it back to the Proxmox cluster. Everything is fine as far as Proxmox is concerned.

The node was hosting 7 OSDs for my Ceph cluster.
I would like to recreate the same OSDs using exactly the same disks as before. Is that possible, or do I have to zap them and create new OSDs?
I tried "pveceph createosd /dev/sdd --journal_dev /dev/sdi" to recreate the OSD, but the response was:
"device '/dev/sdc' is in use".

Is that possible?

Thanx,
sp
 
Aug 1, 2017
Ok, I found a way to do this, so here you are:

I remounted the metadata partitions at /var/lib/ceph/osd (exactly like they used to be). For example:
mount /dev/sdb1 /var/lib/ceph/osd/ceph-12
then
systemctl start ceph-osd@12
systemctl enable ceph-osd@12

Of course you need to do it for all OSDs.
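The per-OSD steps above can be sketched as a loop. This is a hedged sketch: the partition-to-OSD mapping (sdb1 → ceph-12 and so on) is illustrative and must be adjusted to match your node.

```shell
# Hypothetical sketch: remount each OSD's metadata partition where it
# used to live, then start and enable the daemon. The partition:osd-id
# pairs below are examples -- substitute your own mapping.
for pair in "sdb1:12" "sdc1:13"; do
    part=${pair%%:*}   # partition name, e.g. sdb1
    osd=${pair##*:}    # OSD id, e.g. 12
    mkdir -p /var/lib/ceph/osd/ceph-$osd
    mount /dev/$part /var/lib/ceph/osd/ceph-$osd
    systemctl start ceph-osd@$osd
    systemctl enable ceph-osd@$osd
done
```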

then
pveceph createmgr
pveceph createmon
systemctl start ceph-mgr@px2
systemctl enable ceph-mgr@px2
systemctl start ceph-mon@px2
systemctl enable ceph-mon@px2

After that, all was fine. There were some issues with pveceph createmon, which failed because the ceph-mgr was absent (or at least that's my guess). You may also have to destroy the old mon before creating it again, because the other nodes still think it exists. You can do that from the GUI if you like.
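If the stale monitor has to go first, a hedged sketch of that order of operations (assuming the monitor ID matches the hostname, px2 here, and the pveceph subcommands of that Proxmox release; check pveceph help on your version):

```shell
# Assumption: the old monitor is still registered under this hostname.
# Remove it first, then re-create the manager and the monitor.
pveceph destroymon px2   # or remove the stale mon via the GUI
pveceph createmgr
pveceph createmon
```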

Regards,
Sp
 

RobFantini

Hello

We have a 7-node Ceph cluster; 5 of the nodes host OSDs.

We have been upgrading hardware lately [motherboards and CPUs]. When we do that on an OSD node, we shut it down, and we also shut down the node we will move the OSDs to.

We move the disks to the other node and start it up; they all mount automatically.

We used to use the PVE GUI > Ceph > OSD and, one OSD at a time, click stop, then out, move the OSD, then click in. Moving them all at once is 10x faster, even with the shutdown of the target node.

Sometime I'll try not shutting down the target node.

In your case, setting noout, reinstalling, and then re-adding the OSDs should work.
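The noout approach can be sketched as follows (a hedged sketch of the standard Ceph maintenance flags, not specific to this thread's cluster):

```shell
# Prevent Ceph from marking down OSDs "out" and rebalancing data
# while the node is offline for the reinstall.
ceph osd set noout

# ... shut the node down, reinstall, bring its OSDs back up ...

# Clear the flag so normal recovery behaviour resumes,
# then check that the cluster returns to HEALTH_OK.
ceph osd unset noout
ceph -s
```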


PS:
I am not a Ceph expert
 
