[SOLVED] Proxmox VE reinstall and old CEPH blocking disks

dertirio

Member
Aug 16, 2023
Hi all,
I'm seeing some weird behaviour that I cannot resolve...

I had to completely reinstall my cluster of 3 nodes (PVE).
Previously these nodes had Ceph enabled and OSDs created. All machines have 2 SSD disks.

I am trying to redo the Ceph configuration, but it seems I forgot (maybe??) to destroy the OSDs before reinstalling all the PVE nodes!
Apparently I am now blocked...

- On the 1st node: I see only one sdX (/dev/sda), and in "Disks/Usage" it shows as "LVM, Ceph (OSD.1)"
- On the 2nd node: I see the 2 sdX (/dev/sda and /dev/sdb), and in "Disks/Usage" they show as "LVM, Ceph (OSD.2)" & "LVM, Ceph (OSD.3)"
- On the 3rd node: I see the 2 sdX (/dev/sda and /dev/sdb), and in "Disks/Usage" they show as "LVM, Ceph (OSD.4)" & "LVM, Ceph (OSD.5)"
I cannot Wipe Disk, as the error message is:
Code:
disk/partition '/dev/sda' has a holder (500)

Is there anything I can do via the CLI to unlock the disks or destroy the old OSD configuration?
And how can I get /dev/sdb to show up again on the 1st node?

Thanks for your help
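
For reference, a quick way to check what is holding the disk would be something like this (standard lsblk/LVM commands; /dev/sda is just the example from the 1st node):
Code:
# show the device tree, including any LVM/device-mapper volumes sitting on top of the disk
lsblk /dev/sda
# list LVM physical volumes, volume groups and logical volumes
pvs
vgs
lvs
# leftover Ceph OSDs typically appear as a VG named "ceph-<uuid>" with an LV "osd-block-<uuid>"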
 
SOLVED
I had in fact forgotten to destroy the LVM volumes!!
Now everything is good.
Sorry for the silly question.
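
For anyone hitting the same thing, a minimal sketch of what "destroy the LVM volumes" looks like on the CLI (assuming the leftover volume group really is the old Ceph one; double-check the ceph-* VG name with vgs before removing anything):
Code:
# remove the leftover Ceph volume group (this releases the holder on the disk)
vgremove -f ceph-<uuid>
# remove the LVM physical volume label from the disk
pvremove /dev/sda
# clear any remaining LVM/filesystem signatures so the disk shows up as unused
wipefs -a /dev/sda
After that, Wipe Disk in the GUI works again and the disk can be reused for a new OSD.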