[PVE7] - wipe disk doesn't work in GUI

Hi,

This information is gathered via lsblk, so it seems that in some cases the old status is still floating around somewhere (from a quick glance, it appears to be the udev database). I opened a bug report.

If somebody else stumbles upon this, please try using udevadm settle and let us know if it worked.
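
A minimal sketch of trying that suggestion (the device path is a placeholder; this just waits for pending udev events and then re-checks what the disk reports):
Bash:
# Wait for the udev event queue to empty, then re-check the device
udevadm settle
lsblk /dev/sdX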

I had the same problem on a bare metal server. udevadm settle didn't work but I could solve it with:
Bash:
> lsblk
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                                                                                                     8:0    1 10.9T  0 disk
└─ceph--8bfcdef9--aa75--4754--8f25--43c70633a846-osd--block--3d6ffd71--e0b1--4b39--a1b7--61bf9154fd28 253:0    0 10.9T  0 lvm
...

> dmsetup remove ceph--8bfcdef9--aa75--4754--8f25--43c70633a846-osd--block--3d6ffd71--e0b1--4b39--a1b7--61bf9154fd28

The issue is described here: https://tracker.ceph.com/issues/24793
 
If somebody else stumbles upon this, please try using udevadm trigger /dev/XYZ (rather than udevadm settle) and let us know if it worked.

EDIT: replaced the command with one that should actually help.
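
A sketch of the corrected suggestion (the device path is again a placeholder): re-trigger the udev events for the disk so its state is re-read, then inspect what udev reports:
Bash:
# Ask udev to re-process events for the device and wait for them to finish
udevadm trigger /dev/sdX
udevadm settle
# Optionally inspect what the udev database now reports for the disk
udevadm info /dev/sdX | head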

This, after zeroing the disk with dd, finally got me working again.
Mine was the result of an abandoned Proxmox install with several VMs left over on the SSD, from before I swapped over and did a fresh install of Proxmox and new VMs on an NVMe drive.
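
For reference, a hedged sketch of that kind of dd zeroing pass (destructive; /dev/sdX is a placeholder, so double-check the device with lsblk first):
Bash:
# DESTRUCTIVE: overwrite the first 100 MiB of the disk to clear leftover signatures
dd if=/dev/zero of=/dev/sdX bs=1M count=100 status=progress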
 
The issue is that the OSDs' logical volumes are active, and active volumes are protected from deletion just like a mounted filesystem.
A logical volume has to be inactive before it can be deleted.
The following shows how to deactivate and remove logical volumes, volume groups, and physical volumes.

!!!! NOTE: This will result in data loss on the target disks. Take the usual precautions! !!!!

General advice: treat the GUI as a convenience; always learn the CLI commands. Some commands or command options are only available at the CLI.

The following commands are executed by the root user on the targeted host.
Identify the OSDs/disks using the "lsblk -Sf" command: -S lists SCSI devices only, -f outputs filesystem information.
The target OSDs are the disks /dev/sdb and /dev/sdc.
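
As a sanity check, you can also confirm what currently sits on those disks before touching them (illustrative; adjust the device names to your setup):
[#] lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT /dev/sdb /dev/sdc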

List logical volumes, their status, and related volume group.
[#] lvscan
Status Volume Group Logical Volume
active '/dev/ceph-6d9XXXXXX/osd-block-e5XXXXXX'
active '/dev/ceph-993XXXXXX/osd-block-a3XXXXXX'
...

Deactivate Logical Volumes
[#] lvchange -a n ceph-6d9XXXXXX/osd-block-e5XXXXXX
[#] lvchange -a n ceph-993XXXXXX/osd-block-a3XXXXXX

List logical volumes, their status, and related volume group.
[#] lvscan
Status Volume Group Logical Volume
inactive '/dev/ceph-6d9XXXXXX/osd-block-e5XXXXXX'
inactive '/dev/ceph-993XXXXXX/osd-block-a3XXXXXX'

Remove Logical Volume
[#] lvremove ceph-6d9XXXXXX/osd-block-e5XXXXXX
[#] lvremove ceph-993XXXXXX/osd-block-a3XXXXXX

Remove Volume Group
[#] vgremove ceph-6d9XXXXXX
[#] vgremove ceph-993XXXXXX

List Physical Volume
[#] pvs
PV VG
/dev/sdb ceph-f4XXXXXX
/dev/sdc ceph-f7aXXXXX

Remove Physical Volume
[#] pvremove /dev/sdb
[#] pvremove /dev/sdc

Wipe filesystem, raid, partition signatures
[#] wipefs -a /dev/sdb
[#] wipefs -a /dev/sdc

Zap the disks (not strictly needed, but to satisfy one's paranoia).
[#] ceph-volume lvm zap /dev/sdb --destroy
[#] ceph-volume lvm zap /dev/sdc --destroy
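
Optionally, a read-only check can confirm that nothing was left behind (a sketch; wipefs -n only lists signatures, it does not erase anything):
[#] wipefs -n /dev/sdb /dev/sdc
[#] lsblk -f /dev/sdb /dev/sdc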

The disks are now ready for reuse.

Done!
 
I find this ghost lock happens most often because of stale device-mapper handles. I'm not sure why it happens, but it only ever seems to happen with Proxmox, so I suspect some cleanup from GUI operations is not happening when it should. If you get this error, check the device-mapper table for ghost targets and purge them:

Code:
root@richie:/farm/acre0/plots# dmsetup info -C
Name                      Maj Min Stat Open Targ Event  UUID                                                                     
LVM--thin-LVM--thin       253   2 L--w    0    1      0 LVM-syem7o0hxCRuZcERq73OY2j6uzq9K6C6pUulgFjzYGvMUxhxDcG0jkw6VyvzP7Nu-tpool
LVM--thin-LVM--thin_tdata 253   1 L--w    1    1      0 LVM-syem7o0hxCRuZcERq73OY2j6uzq9K6C60iqqsUIaEKZgow5fbtKnsXqjIA9qMclX-tdata
LVM--thin-LVM--thin_tmeta 253   0 L--w    1    1      0 LVM-syem7o0hxCRuZcERq73OY2j6uzq9K6C6anmSLKWNsw9UNqWQpAQtBFHCESZENUN7-tmeta
root@richie:/farm/acre0/plots# dmsetup remove LVM--thin-LVM--thin
root@richie:/farm/acre0/plots# dmsetup remove LVM--thin-LVM--thin_tdata
root@richie:/farm/acre0/plots# dmsetup remove LVM--thin-LVM--thin_tmeta
root@richie:/farm/acre0/plots# dmsetup info -C
No devices found
root@richie:/farm/acre0/plots# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
root@richie:/farm/acre0/plots#
 
Thanks! I was able to wipe the disk in the GUI right after deactivating the logical volume, as described in the guide above.
 
Had a similar issue. I was able to solve it by simply going to the GUI, removing the LVM for the specific disk, and then wiping it from the Disks menu.

Ty
 
The dmsetup remove approach above was it for my case, where I had to nuke and purge the Ceph cluster and clean up the disks.
 

The dmsetup info -C approach above is really risky; it is easy to confuse devices with this method.

I recommend using the "lsblk /dev/sdX" command to target the expected disk instead of scanning "dmsetup info -C".
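
In other words, something along these lines (a sketch; the device and the dm name are placeholders):
Bash:
# Limit the view to the disk you actually intend to wipe
lsblk /dev/sdb
# Then remove only the device-mapper entry shown under that disk, e.g.:
# dmsetup remove <dm-name-shown-under-/dev/sdb>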
 

Yes, the dmsetup remove approach above is the brutal but efficient solution!

@Gilberto Ferreira: Could you please mark this thread as solved?
@t.lamprecht: Could you revert it in pve-ceph features and documentation?
 
FWIW, I had just started to move a VM disk from one LVM volume to another, then thought better of it, so I stopped the process. I then wanted to remove the partial VM disk that had been copied, and saw a prompt saying I could not do this and had to remove it from the VM hardware. I wanted the disk back, though, so I thought I would wipe it and start again, and I received precisely this error.

Code:
disk/partition '/dev/sdb' has a holder (500)

FWIW, even dd if=/dev/zero of=/dev/sdb did not work initially. Only after a reboot could the drive be wiped.

Annoyingly, though, I can't re-initialise the disk with the same volume name.
Code:
Storage ID 'vm1' already exists on node traning (500)

It doesn't, as it was nuked.
 
Hi,
Annoyingly, though, I can't re-initialise the disk with the same volume name.
Code:
Storage ID 'vm1' already exists on node traning (500)

It doesn't, as it was nuked.
Likely it's still in the storage configuration; see cat /etc/pve/storage.cfg. Note that this configuration is shared across the cluster. If you don't have the same storage on another node, use pvesm remove vm1 to remove it.
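
For example (assuming the leftover storage really is called vm1 and is not in use on any other node):
Bash:
# Inspect the cluster-wide storage configuration
cat /etc/pve/storage.cfg
# Remove the stale storage definition if it is no longer needed anywhere
pvesm remove vm1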
 

The dmsetup cleanup above was the solution for my problem as well, thanks a lot!
 
