Ceph OSD folder found empty

Sep 11, 2019
One of our 4 nodes has lost its OSD configuration.
All nodes are running:
proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
ceph: 14.2.2-pve1
ceph-fuse: 14.2.2-pve1
corosync: 3.0.2-pve2

The OSD GUI screen shows 4 (of 16) OSD drives as down and out. When looking in the /var/lib/ceph/osd/ceph-0 folder on that node, it is empty (same with ceph-1, ceph-2, ceph-3). The crush map still shows those devices. The 4 drives (sda, sdb, sdc, sdd) are still configured as Ceph Journal per fdisk. However, the usual ownership of the raw disks is not ceph:ceph but root:root.
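For anyone comparing symptoms, the checks described above can be run like this (a sketch; the device names sda–sdd and OSD ids 0–3 are the ones from this report, so adjust for your node):

```shell
# A healthy OSD data directory contains files such as fsid, keyring, whoami;
# an empty directory matches the symptom described above:
ls -l /var/lib/ceph/osd/ceph-0

# Check the partition layout; journal partitions show the type "Ceph Journal":
fdisk -l /dev/sda

# OSD devices should normally be owned ceph:ceph; root:root suggests the
# udev-applied ownership was lost:
ls -l /dev/sda* /dev/sdb* /dev/sdc* /dev/sdd*
```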

What is the best method to restore the OSD configuration files and get the OSDs back up and in?

Thank you
 
Thank you for the tip. I had indeed missed a step when moving from version 1.0 to 2.0.
I also found that the "ceph-volume simple scan /dev/sdbX" command, followed by "ceph-volume simple activate <osd-id> <osd-fsid>", can restore the OSD volumes.
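For anyone hitting the same problem, the recovery sequence mentioned above looks roughly like this (a sketch only; the device /dev/sda1 and the id/fsid values are placeholders — take the real ones from the scan output):

```shell
# Scan the OSD data partition; ceph-volume writes a JSON description of the
# discovered OSD under /etc/ceph/osd/<id>-<fsid>.json:
ceph-volume simple scan /dev/sda1

# Activate using the OSD id and fsid reported by the scan (also visible in
# the JSON file name); this remounts the data directory and starts the OSD:
OSD_ID=0                 # placeholder
OSD_FSID=<osd-fsid>      # placeholder, from the scan output
ceph-volume simple activate "$OSD_ID" "$OSD_FSID"

# Alternatively, after scanning all affected partitions, every scanned OSD
# can be activated in one go:
ceph-volume simple activate --all
```

Repeat the scan for each affected data partition (here sda–sdd) before activating.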

Thank you for the help

Bob