How to restore previous OSD of ceph?

lexxai

Member
Aug 28, 2020
To study Ceph I set up a cluster of 3 servers.
I created Ceph, Ceph OSDs, and CephFS, and everything worked fine.
Then I simulated a recovery situation by removing the Proxmox Ceph installation with "pveceph purge".
I reinstalled Ceph and the monitor. But how do I get the previous OSDs back? They still exist physically, and LVM knows about them.
I tried ceph-volume lvm activate --all, but it does not help:
Code:
#ceph-volume lvm activate --all
--> Activating OSD ID 0 FSID ....
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-..../osd-block-.... --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-..../osd-block-... /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-....
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
Running command: /usr/bin/systemctl start ceph-osd@0
--> ceph-volume lvm activate successful for osd ID: 0
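One thing worth checking first is whether ceph-volume still discovers the OSDs from the LVM tags on disk, and whether the cluster fsid recorded in those tags matches the fsid of the freshly created cluster; after a purge and reinstall they will usually differ, and an OSD tagged with the old fsid can never rejoin the new cluster. A sketch of the check (assuming the default cluster name "ceph"):
Code:
#ceph-volume lvm list
#ceph fsid
ceph-volume lvm list prints the osd id, osd fsid, and cluster fsid stored in the LVM tags of each OSD volume; compare that cluster fsid with the output of ceph fsid.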
Also tried:
Code:
#ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
repair success
Code:
#ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --no-mon-config --op update-mon-db --mon-store-path /root
osd.0: 0 osdmaps trimmed, 0 osdmaps added.
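For reference, update-mon-db on its own only extracts the OSD's copy of the cluster maps; the monitor-store recovery procedure documented by Ceph additionally rebuilds a usable store with ceph-monstore-tool and copies it into the monitor's data directory. A hedged sketch (the store path, keyring path, and use of hostname as the mon ID are assumptions, not taken from this thread):
Code:
#ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --no-mon-config --op update-mon-db --mon-store-path /root/mon-store
#ceph-monstore-tool /root/mon-store rebuild -- --keyring /etc/pve/priv/ceph.client.admin.keyring
#systemctl stop ceph-mon@$(hostname)
#cp -r /root/mon-store/store.db /var/lib/ceph/mon/ceph-$(hostname)/
#chown -R ceph:ceph /var/lib/ceph/mon/ceph-$(hostname)/store.db
#systemctl start ceph-mon@$(hostname)
This is only needed when the monitor database itself was lost; exporting into /root without a rebuild step, as above, does not change the running monitors.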
But none of this helps; the cluster still shows no OSDs:
Code:
#ceph -s
  cluster:
    id: .....
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3
  services:
    mon: 2 daemons, quorum s231, s230 (age 59m)
    mgr: ns230 (active, since 70m), standbys: ns231
    mds: 1 up: standby
    osd: 0 osds: 0 up, 0 in
  data:
    pools: 1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage: 0 B used, 0 B / 0 B avail
    pgs: 100.000% pgs unknown
             1 unknown

(Screenshot attached: prox-osd.PNG)

Update:
After running the following:
Code:
#ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-0/keyring
added key for osd.1
#ceph-volume lvm activate --all
The OSD now appears in the cluster list, but it is in the down and out state.
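A running daemon that nevertheless stays down and out often means the OSD is missing from the CRUSH map and has not been marked in. A hedged sketch of the usual follow-up (the weight 1.0 is a placeholder; ns231 is taken from the logs below):
Code:
#ceph osd crush add osd.0 1.0 host=ns231 root=default
#ceph osd in osd.0
#systemctl restart ceph-osd@0
#ceph osd tree
ceph osd tree should then show osd.0 under its host bucket; if it still reports down, the OSD log in /var/log/ceph/ is the next place to look.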


Code:
#systemctl status ceph-osd@0
ceph-osd@0.service - Ceph object storage daemon osd.0
   Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: enabled)
  Drop-In: /usr/lib/systemd/system/ceph-osd@.service.d
           └─ceph-after-pve-cluster.conf
   Active: active (running) since Tue 2021-06-01 13:31:23 PDT; 55s ago
  Process: 273543 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 0 (code=exited, status=0/SUCCESS)
 Main PID: 273553 (ceph-osd)
    Tasks: 58
   Memory: 33.5M
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─273553 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Jun 01 13:31:23 ns231 systemd[1]: Starting Ceph object storage daemon osd.0...
Jun 01 13:31:23 ns231 systemd[1]: Started Ceph object storage daemon osd.0.
Jun 01 13:31:26 ns231 ceph-osd[273553]: 2021-06-01T13:31:26.110-0700 7f763c2ede00 -1 osd.0 525 log_to_monitors {default=true}
Jun 01 13:31:26 ns231 ceph-osd[273553]: 2021-06-01T13:31:26.886-0700 7f7634a1c700 -1 osd.0 525 set_numa_affinity unable to identify public interface 'vmbr0' numa node: (2) No such file or directory
 
Enter the following two commands at the command line:
Code:
ceph osd getcrushmap -o crushmapdump
crushtool -d crushmapdump -o crushmapdump-decompiled
and post the contents of the file "crushmapdump-decompiled".
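If the decompiled map turns out to be missing the host and device entries for the old OSDs, the usual round trip is to edit the text form, recompile it, and inject it back into the cluster; a hedged sketch:
Code:
#nano crushmapdump-decompiled
#crushtool -c crushmapdump-decompiled -o crushmapdump-new
#ceph osd setcrushmap -i crushmapdump-new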
 
Just curious whether this was ever successfully completed/rebuilt, and if so, were there commands missing from your steps?
 