[PVE7] - wipe disk doesn't work in GUI

Gilberto Ferreira

Renowned Member
Hi,

I have tried to wipe an HDD which I was previously using as a Ceph OSD drive and got this message:


disk/partition '/dev/sdb' has a holder (500)

I could do wipefs from the CLI, but it only works if I add the -a and -f flags.
Back in the web GUI, I tried again and got the same error.

fdisk -l /dev/sdb
Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk model: VBOX HARDDISK    <-- As you can see, I was testing in VirtualBox.
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


pveversion --verbose
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-9 (running version: 7.0-9/228c9caa)
pve-kernel-helper: 7.0-4
pve-kernel-5.11: 7.0-3
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.0.0-1+pve6
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.1-1
proxmox-backup-file-restore: 2.0.1-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 7.0-3
pve-container: 4.0-8
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-10
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1



Thanks.
 
Hi,
disk/partition '/dev/sdb' has a holder (500)
That means the kernel still has open references to the device, so the API cannot know whether it's safe to just wipe it. The same error would also appear if one tried something dangerous like wiping the root PVE disk; that's why we actively decided against passing the --force flag to the wipefs command for now.
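For anyone hitting this, a quick way to see what is actually holding the device is to look at the kernel's holder list and the device-mapper table. A minimal sketch, assuming the disk in question is /dev/sdb as above:

Code:
# list kernel holders of the device (LVM/device-mapper nodes built on top of it show up here)
ls /sys/class/block/sdb/holders/
# show the full device tree below the disk, including any active LVs and mountpoints
lsblk /dev/sdb
# list active device-mapper targets; LVM-based Ceph OSD volumes typically show up with names starting with ceph--
dmsetup ls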
 
No idea what the VirtualBox disk driver does, but it seems so. Test on real hardware, or at least with QEMU/KVM as the virtualization layer; both work here as long as the disk is not actively mounted or otherwise actively used.
 
as long as the disk is not actively mounted or otherwise actively used.
But the disk is unmounted, although I noticed that lvmdiskscan shows the disk as an LVM partition, because I had installed Ceph in a previous installation.
By the way, from the CLI I was able to wipe the HDD with dd if=/dev/zero of=/dev/sdb and that was it.
Thanks anyway
 
Last edited:
Hmm, yeah, if LVM is showing it, it may already have a hold on it.
By the way, from the CLI I was able to wipe the HDD with dd if=/dev/zero of=/dev/sdb and that was it.
Yeah, that normally does the trick quite well.
 
Hi,
did you use the --cleanup flag in the CLI or Cleanup Disks checkbox in the UI when you destroyed the OSD? Otherwise I suppose the LV on it was still active.
 
Ok, then it's because LVM likes to activate everything it finds ;) . We'll think about adding a force flag (maybe initially just for LVM) or something for deactivating logical volumes in a future version. Thanks for the report!
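A minimal sketch of that manual LV deactivation, assuming the leftover volume group is the usual ceph-<uuid> one created by ceph-volume (replace <vg_name> with the actual name shown by vgs):

Code:
# see which VGs/LVs LVM auto-activated from the old OSD disk
vgs
lvs -o +devices
# deactivate the leftover VG so the kernel drops its hold on the disk
vgchange -an <vg_name>
# after that, the wipe (via GUI or wipefs -a /dev/sdb) should no longer hit the "has a holder" error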
 
Sure, I'm mostly posting for other readers, so as not to suggest that it's a must. The shell integrated into the Proxmox VE web interface can be used to execute a wipefs call with the force flag added, like Gilberto mentioned in the initial post; just make sure you actually have the right disk block device file to avoid destroying any important data.
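For anyone following that route, roughly what that could look like in the web-interface shell; the device path is just an example, double-check model and serial before forcing anything:

Code:
# make sure this really is the disk you want to destroy
lsblk -o NAME,SIZE,MODEL,SERIAL,FSTYPE /dev/sdb
# then force-wipe all filesystem/RAID/LVM signatures
wipefs -af /dev/sdb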
 
Nope. Nope. Nope.

PVE7, fresh install. I put some disks from an old VMware installation in there and want to create a ZFS pool on them.
ZFS does not see any free disks, because all are marked as ddf_raid_member.

"Wipe Disk" in the GUI does nothing. And please, "we decided yadda yadda"... I can click "Wipe Disk", an are-you-sure warning appears, I click yes, a progress bar appears, "Done!" appears, and nothing. Whatever you decided, you decided wrong, because that's a bug. Simply put.

Of course you may see that differently, because Virtual Environment 7.0-8 is flawless and bug-free already. ;-)

Now for the interesting part:

* wipefs -fa /dev/sda -> no errors, but after a reload the Proxmox GUI still shows the disk as ddf_raid_member
* dd if=/dev/zero of=/dev/sda bs=1M count=1000 -> no errors (obviously), but after a reload the Proxmox GUI still shows the disk as ddf_raid_member

Pretty stubborn, these little bastards. All 8 disks are 10 TB, and I do not intend to dd a whole disk, because that's 14 hours for each disk, so I'd really appreciate it if someone has an idea how to achieve a wipe.
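For reference, since a full dd of a 10 TB disk is impractical: DDF (firmware RAID) metadata lives near the end of the device, so one can probe the disk directly to check whether the signature is really gone and, if needed, zero only the tail. A rough sketch, not an official recommendation; adjust the device name:

Code:
# list signatures read straight from the disk (no udev cache involved)
wipefs /dev/sda
blkid -p /dev/sda
# if the ddf_raid_member signature is still present, zero only the last 16 MiB of the disk
SECTORS=$(blockdev --getsz /dev/sda)                         # disk size in 512-byte sectors
dd if=/dev/zero of=/dev/sda bs=512 seek=$((SECTORS - 32768)) count=32768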
 
Yeah. It's simple: the Proxmox GUI is buggy, it does not show the correct status.
Moreover, Proxmox stores that wrong (not updated) status somewhere and therefore does not handle the disks correctly (i.e. treat them as free for other use).

/dev/sda was wiped already; wiping /dev/sdb was no problem:

Bash:
DEVICE OFFSET        TYPE            UUID LABEL
sdb    0x9187ffffe00 ddf_raid_member
sdc    0x9187ffffe00 ddf_raid_member
sdc    0x1fe         PMBR
sdd    0x9187ffffe00 ddf_raid_member
sde    0x9187ffffe00 ddf_raid_member
sdf    0x9187ffffe00 ddf_raid_member
sdg    0x9187ffffe00 ddf_raid_member
sdg    0x1fe         PMBR
sdh    0x9187ffffe00 ddf_raid_member
root@pve:/# wipefs /dev/sdb
DEVICE OFFSET        TYPE            UUID LABEL
sdb    0x9187ffffe00 ddf_raid_member
root@pve:/# wipefs -a /dev/sdb
/dev/sdb: 4 bytes were erased at offset 0x9187ffffe00 (ddf_raid_member): 11 de 11 de

Proxmox GUI (after Reload):

[screenshot attachment]
 
I've solved it now.
[screenshot attachment]

wipefs works, but Proxmox 7 only shows the correct status after a reboot.
By "wipefs works" I mean both the wipefs issued via the GUI as well as from the command line. Just the GUI display is broken.
 
Hi,
wipefs works, but Proxmox 7 only shows the correct status after a reboot.
By "wipefs works" I mean both the wipefs issued via the GUI as well as from the command line. Just the GUI display is broken.
this information is gathered via lsblk, so it seems like in some cases the old status is still floating around somewhere (from a quick glance it seems to be the udev database). I opened a bug report.

If somebody else stumbles upon this, please try using udevadm trigger /dev/XYZ (rather than udevadm settle, which I suggested initially) and let us know if it worked.

EDIT: replaced the command with one that should actually help.
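In case it helps others, roughly what that looks like for one of the disks above (using /dev/sda as an example); ID_FS_TYPE is presumably the stale property here, since lsblk takes it from the udev database:

Code:
# inspect what the udev database still records for the disk
udevadm info /dev/sda | grep ID_FS
# re-probe the device and wait until the resulting events have been processed
udevadm trigger /dev/sda
udevadm settle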
 
Last edited:
On PVE 6.4:
Code:
wipefs -fa /dev/nvme4n1
dd if=/dev/zero of=/dev/nvme4n1 bs=1M count=1000
udevadm settle
reboot
Note: after
Code:
udevadm settle
the drive still showed as an lvm2 member in PVE after a Reload.

Code:
parted /dev/nvme4n1
Inside parted, p (print) showed no partitions.

I had tried many other things, and I'm glad that at least those steps made it so the Ceph drive could be added.
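Another option for this specific reuse-for-Ceph case, just a sketch assuming the same device path as above: ceph-volume has its own zap command that tears down the leftover LVM volumes and signatures in one go.

Code:
# remove LVM volumes, partition table and signatures left by the old OSD
ceph-volume lvm zap /dev/nvme4n1 --destroy
# afterwards the disk can be used for a new OSD
pveceph osd create /dev/nvme4n1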
 
