[SOLVED] Can't wipe old boot drive

Daxcor

Member
Oct 31, 2021
I have upgraded my cluster to new boot drives using a ZFS RAID1 mirror on sdc and sdd. All is going well. Before I remove the old drives from the chassis, I want to wipe them from within Proxmox. When I try to wipe sda, I get the error `disk/partition '/dev/sda3' has a holder (500)`. My Google-fu says that process 500 is what is holding it, and process 500 is `500 ? I 0:02 [kworker/23:1-mm_percpu_wq]`. I know I could just pull the drives, but I don't want to crash the Proxmox cluster if it is indeed still using that drive.

Can anyone shed some light on what is happening, and whether it is safe to pull the drives?

Thanks
Brad

Code:
root@pve0:~# lsblk
NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                                                                                     8:0    0 223.6G  0 disk
├─sda1                                                                                                  8:1    0  1007K  0 part
├─sda2                                                                                                  8:2    0     1G  0 part
└─sda3                                                                                                  8:3    0 222.6G  0 part
  ├─pve-swap                                                                                          252:1    0     8G  0 lvm
  ├─pve-root                                                                                          252:2    0  65.6G  0 lvm
  ├─pve-data_tmeta                                                                                    252:3    0   1.3G  0 lvm
  │ └─pve-data-tpool                                                                                  252:5    0 130.3G  0 lvm
  │   └─pve-data                                                                                      252:6    0 130.3G  1 lvm
  └─pve-data_tdata                                                                                    252:4    0 130.3G  0 lvm
    └─pve-data-tpool                                                                                  252:5    0 130.3G  0 lvm
      └─pve-data                                                                                      252:6    0 130.3G  1 lvm
sdb                                                                                                     8:16   0   3.6T  0 disk
└─ceph--b3a9822c--520c--496a--9410--0421c6793e4c-osd--block--01070731--205a--45d0--b0fb--3b48d4f68f64 252:0    0   3.6T  0 lvm
sdc                                                                                                     8:32   0 931.5G  0 disk
├─sdc1                                                                                                  8:33   0  1007K  0 part
├─sdc2                                                                                                  8:34   0     1G  0 part
└─sdc3                                                                                                  8:35   0   930G  0 part
sdd                                                                                                     8:48   0 931.5G  0 disk
├─sdd1                                                                                                  8:49   0  1007K  0 part
├─sdd2                                                                                                  8:50   0     1G  0 part
└─sdd3                                                                                                  8:51   0   930G  0 part
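
For anyone checking the same thing, a few generic commands can show what is actually sitting on top of sda3 and where the running system lives. These are just standard LVM/util-linux tools, not a Proxmox-specific procedure; device and VG names here come from the output above, so adjust for your own setup:

Code:
# which device-mapper devices are holding the partition?
ls -l /sys/class/block/sda3/holders/

# which disks do the LVM physical volumes / volume groups sit on?
pvs
lvs -o +devices

# double-check what the running root filesystem and swap actually come from
findmnt -no SOURCE /
swapon --show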
 
From another post

Hi,
if the disk is actively in use, it cannot be cleanly wiped (the old user might still think it owns the disk afterwards...). For the LVM disks, check the output of `pvs` and remove the volume groups on the disks you want to wipe with `vgremove`. For the device-mapped disks, check with `dmsetup ls` and remove the mappings with `dmsetup remove`.