lvremove 'vg2/vm-102-disk-1' error: Unable to deactivate logical volume "vm-102-disk-1"

c0mputerking

I cannot seem to remove a logical volume created by Proxmox. I think I have removed all related entries for the virtual machines that use the VG for storage, as well as all storage and backup entries that might use this VG ... see the title for the error. I have also tried the commands below on the CLI without success ... as you can see, I am trying to remove the volume group, and it deletes the logical volumes I created myself but not the one created by Proxmox (using version pve-manager/2.3/7946f1f1).


lvchange -f -a n /dev/vg2/vm-102-disk-1
device-mapper: remove ioctl on failed: Device or resource busy
<snip about 10 of these>
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy

Unable to deactivate vg2-vm--102--disk--1 (253:4)
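
(As an aside: the (253:4) in that message is the device-mapper major:minor pair, so the kernel can be asked directly what still holds the device open. A sketch of that check, assuming 253:4 maps to this LV as the message indicates:

dmsetup info vg2-vm--102--disk--1     # "Open count" > 0 means something still has it open
ls /sys/dev/block/253:4/holders       # kernel-level holders, e.g. another dm device stacked on top
)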


vgremove /dev/vg2
Do you really want to remove volume group "vg2" containing 3 logical volumes? [y/n]: y
Logical volume "backups" successfully removed
Logical volume "backuppc" successfully removed
Do you really want to remove active logical volume vm-102-disk-1? [y/n]: y
device-mapper: remove ioctl on failed: Device or resource busy
<snip about 10 of these>
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy

Unable to deactivate vg2-vm--102--disk--1 (253:4)
Unable to deactivate logical volume "vm-102-disk-1"
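
One more check worth trying before giving up: a running KVM process opens the LV directly, and its command line contains the disk path, so grepping the process list may reveal the culprit. A sketch, assuming the VM ID is 102 as in the disk name:

ps aux | grep vm-102-disk-1     # a kvm process listing this path still has the LV open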
 
Re: lvremove 'vg2/vm-102-disk-1' error: Unable to deactivate logical volume "vm-102-disk-1"

As the error message states, the resource is busy, which means something is still using it.
Try a fuser -m /dev/vg2/vm-102-disk-1 to find out what process is using it.
You might still have a running VM on it?
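
A sketch of that check, with a couple of variants (qm is the standard Proxmox CLI; VMID 102 is inferred from the disk name):

fuser -vm /dev/vg2/vm-102-disk-1          # -v also prints user, PID and command name
lsof /dev/mapper/vg2-vm--102--disk--1     # alternative: list processes holding the device node open
qm list                                   # confirm whether VM 102 is still running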
 
Re: lvremove 'vg2/vm-102-disk-1' error: Unable to deactivate logical volume "vm-102-disk-1"

Thanks for your reply. I got it sorted: I had already removed the hard drive/image from the VM's hardware twice, but I still could not remove the logical volume until I rebooted the offending VM.
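
For anyone landing here with the same problem, the cleanup sequence that follows from this thread, sketched out (assumes VMID 102; stop the VM first, then the deactivate/remove steps succeed):

qm stop 102                           # or 'qm shutdown 102' for a clean guest shutdown
lvchange -an /dev/vg2/vm-102-disk-1   # deactivate the LV; should work once the VM is stopped
lvremove /dev/vg2/vm-102-disk-1       # remove the LV
vgremove vg2                          # then remove the now-empty volume group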