Ceph OSD reuse after reinstall

Discussion in 'Proxmox VE: Installation and configuration' started by Spiros Papageorgiou, Oct 26, 2018.

  1. Spiros Papageorgiou

    Joined:
    Aug 1, 2017
    Messages:
    57
    Likes Received:
    0
    Hi all,

    I have a Proxmox node that was a member of a Ceph cluster and had to be reinstalled from scratch.
    I have reinstalled the node (same Proxmox version as the other nodes) and added it back to the Proxmox cluster. Everything is fine as far as Proxmox is concerned.

    The node was hosting 7 OSDs for my Ceph cluster.
    I would like to recreate the same OSDs using exactly the same disks as before. Is that possible, or do I have to zap them and create new OSDs?
    I tried "pveceph createosd /dev/sdd --journal_dev /dev/sdi" to recreate the OSD, but the response was:
    "device '/dev/sdc' is in use".

    Is that possible?

    Thanx,
    sp
     
  2. Spiros Papageorgiou

    Joined:
    Aug 1, 2017
    Messages:
    57
    Likes Received:
    0
    Ok, I found a way to do this, so here you are:

    I remounted the metadata partitions at /var/lib/ceph/osd (exactly like they used to be). For example:
    mount /dev/sdb1 /var/lib/ceph/osd/ceph-12
    then
    systemctl start ceph-osd@12
    systemctl enable ceph-osd@12

    Of course you need to do it for all OSDs.
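    If you do not want to work out by hand which partition belongs to which OSD ID, a rough sketch like the one below should do it. It assumes the data partitions are /dev/sdb1 through /dev/sdh1 (adjust to your layout) and reads the OSD ID from the whoami file that every OSD data partition contains:

    for dev in /dev/sd[b-h]1; do
        tmp=$(mktemp -d)
        mount "$dev" "$tmp"                        # peek at the partition
        id=$(cat "$tmp/whoami")                    # OSD ID stored in the data dir
        umount "$tmp" && rmdir "$tmp"
        mkdir -p /var/lib/ceph/osd/ceph-$id
        mount "$dev" /var/lib/ceph/osd/ceph-$id    # mount where ceph-osd expects it
        systemctl enable --now ceph-osd@$id        # enable and start in one go
    done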

    then
    pveceph createmgr
    pveceph createmon
    systemctl start ceph-mgr@px2
    systemctl enable ceph-mgr@px2
    systemctl start ceph-mon@px2
    systemctl enable ceph-mon@px2
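
    To confirm that the monitor, manager and OSDs are all back, the cluster status can be checked with either of:

    ceph -s
    pveceph status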

    After that, all was fine. There were some issues with pveceph createmon, which failed because of the absence of the ceph-mgr (or at least that's what I think). You may have to destroy the mon before creating it again, because the other nodes still think it exists. You can do that from the GUI if you like.
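
    If you prefer the CLI over the GUI for removing the stale monitor entry, something like this should work (assuming the monitor is named px2, as in the commands above; destroymon was the PVE 5.x subcommand name):

    pveceph destroymon px2

    or directly with ceph:

    ceph mon remove px2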

    Regards,
    Sp
     
    #2 Spiros Papageorgiou, Oct 26, 2018
    Last edited: Oct 26, 2018
  3. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,490
    Likes Received:
    21
    Hello

    We have a 7-node Ceph cluster; 5 of the nodes host OSDs.

    We have been upgrading hardware lately [motherboards and CPUs]. When we do that on an OSD node, we shut it down, and we also shut down the node we will move the OSDs to.

    We then move the disks to the other node and start it up; the OSDs all mount automatically.

    We used to use the PVE GUI > Ceph > OSD and, one OSD at a time, click stop, then out, move the OSD, then click in. Moving all the disks at once is roughly 10x faster, even with the shutdown of the target node.

    Sometime I'll try not shutting down the target node.

    In your case, setting noout, then reinstalling, then bringing the OSDs back in should work.
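
    For example (run from any node that still has a working Ceph admin keyring):

    ceph osd set noout       # stop CRUSH from rebalancing while the node is down
    # ... reinstall the node and bring the OSDs back ...
    ceph osd unset noout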


    PS:
    I am not a ceph expert
     