Recover Data from Ceph OSDs

maximilianDorninger
Dec 16, 2023
Is there a way to recover data from OSDs if the monitors and managers don't work anymore but the OSDs still start up?

Just for clarification: I don't want to rebuild the cluster; I just want to copy the data from the OSDs to another HDD.
 
Is there a way to recover data from OSDs if the monitors and managers don't work anymore but the OSDs still start up?
The answer is: yes!
If you lose all monitors, you can extract the monitor map from the OSDs, merge it, and re-inject it into a new mon.
Just for clarification: I don't want to rebuild the cluster; I just want to copy the data from the OSDs to another HDD.
CEPH distributes the data across all OSDs using the CRUSH algorithm, so the data is never entirely on one OSD. To extract the data, you cannot avoid completely rebuilding the cluster, and for that you still need all OSDs to be intact.

If you have already deleted one or more OSDs, that's it: you will have to fall back to your backups.
 
If you only have 3 nodes with one OSD per node, then all the data is on every OSD, but in all other cases you're screwed.
 
@ness1602 Yes, you are right. In practice, I don't know of any CEPH cluster that consists of only three OSDs, three nodes, and three replicas, so I always forget that :D
 
Please replace $ID with the OSD number and /mnt/$DRIVE with a temporary path where you can save the map.

Please make sure that you don't destroy anything productive here; for example, new keyrings will be created. If you have an existing CEPH cluster, that could kill it. I would recommend doing this on another server where you can't destroy anything productive.

Otherwise, please note that these instructions may be incomplete or incorrect. Check each command carefully and, if necessary, create a backup copy of the OSD beforehand so that you have several attempts at getting to your data.

Extract necessary information from the OSD
Code:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$ID --op update-mon-db --mon-store-path /mnt/$DRIVE

You should then find a few files in the folder /mnt/$DRIVE, something like kv_backend and store.db.
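If you have more than one OSD, you can accumulate the maps from all of them into the same store path. A minimal sketch for a single host, loosely following the upstream Ceph disaster-recovery procedure (all ceph-osd daemons must be stopped while this runs; /mnt/$DRIVE is the same temporary path as above):
Code:
# run with all ceph-osd daemons on this host stopped
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path "$osd" --no-mon-config --op update-mon-db --mon-store-path /mnt/$DRIVE
done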

Create new keyrings
Code:
ceph-authtool /etc/ceph/ceph.client.admin.keyring --create-keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool /etc/ceph/ceph.client.admin.keyring --gen-key -n mon. --cap mon 'allow *'
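As a side note, the upstream recovery procedure feeds exactly these keyrings into ceph-monstore-tool to rebuild the store gathered above before it is copied into place. Roughly like this (check the docs for your Ceph release):
Code:
ceph-monstore-tool /mnt/$DRIVE rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring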

Back up the current mon store (if necessary)
Code:
mv /var/lib/ceph/mon/ceph-mon1/store.db /var/lib/ceph/mon/ceph-mon1/store.bak
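The monitor must not be running while its store is swapped out. On a standard systemd install the unit name follows the ceph-mon@<id> pattern, so for the mon used here:
Code:
systemctl stop ceph-mon@mon1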

Move the extracted mon store into the monitor's directory and set the permissions
Code:
mv /mnt/$DRIVE/store.db /var/lib/ceph/mon/ceph-mon1/store.db
chown -R ceph:ceph /var/lib/ceph/mon/ceph-mon1

In order for the single remaining mon to start, we have to manipulate the monmap a little
Code:
ceph-mon -i mon1 --extract-monmap /tmp/monmap

Take a quick look at the map
Code:
monmaptool /tmp/monmap --print

If there is more than one entry, the others must be removed. Use the mon names exactly as shown by --print; here they are assumed to be mon2 and mon3.
Code:
monmaptool /tmp/monmap --rm mon2
monmaptool /tmp/monmap --rm mon3

Check again that they are really gone
Code:
monmaptool /tmp/monmap --print

Inject map again
Code:
ceph-mon -i mon1 --inject-monmap /tmp/monmap

Now start the mon again, restart all OSDs, add the CEPH storage back as RBD as usual, and pull the data off (a sketch of the service commands follows below).
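On a systemd-based install, that would look roughly like this (unit names assume the packaged ceph-mon@/ceph-osd services; ceph -s is just a sanity check):
Code:
systemctl start ceph-mon@mon1
systemctl restart ceph-osd.target
ceph -s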
 
Thanks!
But I have one problem:
When I try to add RBD storage in the Datacenter tab, I have to select a pool, but I can't select one.
FYI: I haven't removed the one I'm trying to recover.
 