Recover ceph storage pool after pveceph purge?

obsolete

New Member
Mar 23, 2021
Is it possible to recover a ceph storage pool after executing pveceph purge?

I can see various artifacts of my storage pool... I'm just unsure of how to re-map it back into my ceph configuration.
 
For what it's worth, I know the OSD is still there... I just don't know how to get it back online.
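
For reference, a quick check that should list whatever OSD volumes survived the purge (this assumes ceph-volume is still installed and the LVM tags were not wiped):

Bash:
# List logical volumes that still carry ceph OSD tags (osd id, osd fsid, block device)
ceph-volume lvm list

# The same tags can also be read straight from LVM
lvs -o lv_name,vg_name,lv_tags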

Bash:
fdisk -l

Disk /dev/nvme0n1: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Disk model: CT500P5SSD8                             
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

...

Disk /dev/mapper/ceph--05a4b3dd--0c87--49a0--aa75--d567bbe69f96-osd--block--13bcca31--06ca--47bd--b7fc--093cf0432bf6: 465.8 GiB, 500103643136 bytes, 976764928 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Or
Bash:
ceph-volume lvm activate 0 13bcca31-06ca-47bd-b7fc-093cf0432bf6
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-05a4b3dd-0c87-49a0-aa75-d567bbe69f96/osd-block-13bcca31-06ca-47bd-b7fc-093cf0432bf6 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-05a4b3dd-0c87-49a0-aa75-d567bbe69f96/osd-block-13bcca31-06ca-47bd-b7fc-093cf0432bf6 /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-6
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-13bcca31-06ca-47bd-b7fc-093cf0432bf6
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
Running command: /usr/bin/systemctl start ceph-osd@0
--> ceph-volume lvm activate successful for osd ID: 0

Or
Bash:
ceph-volume simple scan /var/lib/ceph/osd/ceph-0/
 stderr: lsblk: /var/lib/ceph/osd/ceph-0: not a block device
 stderr: lsblk: /var/lib/ceph/osd/ceph-0: not a block device
 stderr: Bad argument "/var/lib/ceph/osd/ceph-0", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
Running command: /usr/sbin/cryptsetup status tmpfs
 stderr: blkid: error: tmpfs: No such file or directory
 stderr: lsblk: tmpfs: not a block device
--> Ignoring /var/lib/ceph/osd/ceph-0 because it's not a ceph-disk created osd.


ceph-volume simple scan /dev/nvme0n1
usage: ceph-volume simple scan [-h] [-f] [--stdout] [OSD_PATH]
ceph-volume simple scan: error: Argument is not a directory or device which is required to scan
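
In case it helps, the OSD's original cluster fsid and osd id can be read straight from the BlueStore label on the volume (device path taken from the activate output above; adjust if yours differs):

Bash:
# Print the BlueStore label: whoami (osd id), osd_uuid and ceph_fsid of the original cluster
ceph-bluestore-tool show-label --dev /dev/ceph-05a4b3dd-0c87-49a0-aa75-d567bbe69f96/osd-block-13bcca31-06ca-47bd-b7fc-093cf0432bf6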
 
Do you still have the monitors?

Bash:
--> ceph-volume lvm activate successful for osd ID: 0
That line shows the OSD was activated successfully.

What do 'ceph status' and 'ceph osd df' say?
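
For reference, something like the following should answer that (if the purge removed ceph.conf and the monitors, these commands will hang or error out rather than report, which by itself tells you the monitors are gone):

Bash:
# Overall health, monitor quorum and OSD map
ceph status

# Per-OSD utilisation; shows whether osd.0 rejoined with its data
ceph osd df

# Is the activated OSD daemon actually running on this node?
systemctl status ceph-osd@0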
 
