[SOLVED] Proxmox VE reinstall and old CEPH blocking disks

dertirio

New Member
Aug 16, 2023
Hi all,
I think I have some weird behaviour here that I cannot resolve...

I had to completely reinstall my cluster of 3 PVE nodes.
Previously these nodes had Ceph activated and OSDs created. All machines have 2 SSD disks.

I am trying to redo the Ceph configuration, but it seems I forgot (maybe??) to destroy the OSDs before reinstalling all the PVE nodes!
Apparently I am now blocked...

- On the 1st node: I see only one sdX (/dev/sda), and its "Disks/Usage" is "LVM, Ceph (OSD.1)"
- On the 2nd node: I see 2 sdX devices (/dev/sda and /dev/sdb), and their "Disks/Usage" are "LVM, Ceph (OSD.2)" & "LVM, Ceph (OSD.3)"
- On the 3rd node: I see 2 sdX devices (/dev/sda and /dev/sdb), and their "Disks/Usage" are "LVM, Ceph (OSD.4)" & "LVM, Ceph (OSD.5)"
I cannot Wipe Disk, as the error message is
Code:
disk/partition '/dev/sda' has a holder (500)

Is there anything I can do via the CLI to unlock the disks or destroy the old OSD configuration?
And so that /dev/sdb becomes visible again on the 1st node?
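
In case it helps, this is roughly how I have been checking what is holding the disk (I assume the holders are the LVM volumes left over from the old OSDs):
Code:
# show what is stacked on top of the disk (old OSD LVs should appear here)
lsblk /dev/sda

# the kernel also lists the holders directly
ls /sys/block/sda/holders/

# LVM view: old ceph-* volume groups and osd-block-* logical volumes
pvs
vgs
lvs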

Thanks for your help
 

SOLVED
I had in fact forgotten to destroy the old LVM volumes!
Now everything is working again. Sorry for the silly question; for anyone hitting the same thing, the cleanup was roughly along the lines sketched below.
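
A minimal sketch of the cleanup, assuming the leftovers are the auto-created ceph-<uuid> volume groups from the old OSDs (adjust VG names and device paths to your own setup):
Code:
# list the leftover Ceph volume groups from the old OSDs
vgs

# remove a leftover VG (repeat for each ceph-* VG / each disk)
vgremove -y ceph-<uuid>

# alternatively, let ceph-volume clear the whole device in one step
ceph-volume lvm zap --destroy /dev/sda

After that, the disks show up normally again and can be wiped and reused from the GUI.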
 
