Need help removing old CEPH configuration from drives

shankwc

New Member
Aug 11, 2020
I have hardware that I want to try Proxmox on with ZFS storage. Previously, I tested a canned CEPH cluster on this hardware. Now, whenever I try to create the ZFS RAIDZ it fails, and the Usage column of the Disks screen shows some of the hard disks as Ceph osd.N (Bluestore). I've gone into each disk and removed the partition with fdisk. I usually get an error that it's busy and will be removed on reboot. After reboot, the drive is listed as No (under Usage), and I've tried the "Initialize with GPT" button, but the OSD reference ends up coming back. What can I do to completely remove any reference to CEPH from these drives or from my Proxmox node?

Thanks
 
You can run sgdisk -Z /path/to/disk, which should zap any remaining partition tables.
 
Thank you. That looked very promising. I ran it on each of the disks and rebooted. None of them shows any OSD references, but when I tried to create the ZFS it failed, and now one of them says it's a CEPH OSD again. Could this be getting picked up in Proxmox somehow?
 
Also, here is the message I get when trying to run sgdisk again on the disk with the OSD reference:

Code:
root@psan01:~# sgdisk -Z /dev/sdb
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
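For what it's worth: a zapped partition table is not always enough, because Proxmox can re-detect the OSD from filesystem/LVM signatures stored elsewhere on the disk. wipefs (from util-linux) lists and erases those signatures. Below is a small demo on a throwaway image file, so nothing real is touched; on the actual host you would aim the same wipefs commands at the disk itself (e.g. /dev/sdb, as root):

```shell
# Demo on a throwaway image file; on a real node, replace "$img"
# with the affected disk (e.g. /dev/sdb) and run as root.
img=$(mktemp)
truncate -s 1M "$img"

mkswap "$img" >/dev/null   # plant a known signature, standing in for Ceph's

wipefs "$img"              # lists the signature it found
wipefs -a "$img"           # erases every signature on the device
wipefs "$img"              # prints nothing: the disk now looks blank

rm -f "$img"
```

On the real disk, running partprobe /dev/sdb afterwards (as the sgdisk warning above suggests) makes the kernel re-read the table without a reboot.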
 
Just came here after facing a similar issue. I tried sgdisk -Z but it didn't help. After reading the Ceph documentation I found:

NB: replace /dev/sdX with your sdX drive
Code:
# ceph-volume lvm zap --destroy /dev/sdX

Also important: if you want to check which drive contains Ceph, use:

Code:
# ceph-volume inventory

Device Path               Size         rotates available Model name
/dev/sda                  2.18 TB      True    True      EG002400JWJNT
/dev/sdb                  2.18 TB      True    False     EG002400JWJNT
/dev/sdc                  2.18 TB      True    False     EG002400JWJNT
The result above tells you that /dev/sda is available, meaning Ceph may be on /dev/sdb or /dev/sdc. In my case, /dev/sdc was used by Ceph.
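A side note on reading that output: because "Size" prints as two whitespace-separated fields ("2.18 TB"), the "available" column is field 5, so the not-available devices can be pulled out with a one-liner. The here-doc below just replays the sample table from this post; on a live node you would pipe the real command instead:

```shell
# List devices that ceph-volume reports as NOT available, i.e. the ones
# most likely still carrying an OSD. On a real node:
#   ceph-volume inventory | awk 'NR > 1 && $5 == "False" { print $1 }'
awk 'NR > 1 && $5 == "False" { print $1 }' <<'EOF'
Device Path               Size         rotates available Model name
/dev/sda                  2.18 TB      True    True      EG002400JWJNT
/dev/sdb                  2.18 TB      True    False     EG002400JWJNT
/dev/sdc                  2.18 TB      True    False     EG002400JWJNT
EOF
# -> /dev/sdb
#    /dev/sdc
```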

HTH
 
