Hi all,
I think I have some weird behaviour that I cannot resolve...
I had to completely reinstall my cluster of 3 nodes (PVE).
Previously on these nodes, I had Ceph activated and OSDs created. All machines have 2 SSD disks.
I am trying to redo the Ceph configuration, but it seems that I forgot (maybe??) to destroy the OSDs before reinstalling all the PVE nodes!
Because apparently I am blocked...
- On the 1st node: I see only one sdX (/dev/sda), and this sda in "Disks/Usage" is "LVM, Ceph (OSD.1)"
- On the 2nd node: I see the 2 sdX (/dev/sda and /dev/sdb), and in "Disks/Usage" they are "LVM, Ceph (OSD.2)" & "LVM, Ceph (OSD.3)"
- On the 3rd node: I see the 2 sdX (/dev/sda and /dev/sdb), and in "Disks/Usage" they are "LVM, Ceph (OSD.4)" & "LVM, Ceph (OSD.5)"

I cannot Wipe Disk, as the error message is
Code:
disk/partition '/dev/sda' has a holder (500)
Is there anything I can do via the CLI to unlock the disks or destroy the old OSD configuration?
And so that I can see /dev/sdb on the 1st node?
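For what it's worth, this is the kind of CLI cleanup I was imagining, just as a sketch (the ceph-... volume group name below is only a placeholder, not my real one):

Code:
# Check what is holding the disk (I expect leftover ceph-* LVM volume groups)
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT /dev/sda
pvs
vgs

# If the holder really is an old ceph-<uuid> volume group, remove it
# (the VG name here is just an example):
vgremove -y ceph-00000000-0000-0000-0000-000000000000
pvremove /dev/sda

# Or zap the whole device, including the LVM metadata, with ceph-volume:
ceph-volume lvm zap /dev/sda --destroy

Is that the right direction, or is there a cleaner way to do this via pveceph?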
Thanks for your help