First, the error message output from the Proxmox GUI:
create full clone of drive virtio0 (vms:vm-993-disk-1)
device-mapper: create ioctl on vms-vm--109--disk--1 failed: Device or resource busy
TASK ERROR: clone failed: lvcreate 'vms/pve-vm-109' error: Failed to activate new LV.
This happens every time (it's reproducible) a VM is deleted and a new VM is then created or cloned with the previously used VMID as the target VMID. While investigating this, I found that the device mapper doesn't release the logical volumes after the VM has been deleted:
root@phase-n1:/dev# dmsetup table | grep 109
vms-vm--109--disk--2: 0 104857600 linear 251:0 2712145920
vms-vm--109--disk--1: 0 20971520 linear 251:0 8017580032
This can be worked around with dmsetup remove vms-vm--109--disk--1. After this step, creating/cloning works again for that specific VMID.
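For reference, a minimal sketch of that workaround as a small script, assuming the volume group is named vms and the affected VMID is 109 (both taken from the output above); it simply removes any leftover device-mapper entries for that VMID:

# Hypothetical cleanup sketch: remove stale device-mapper entries left
# behind for a given VMID so lvcreate can activate the new LV again.
VMID=109
dmsetup ls | grep "vms-vm--${VMID}--disk" | awk '{print $1}' | while read dm; do
    echo "removing stale mapping: $dm"
    dmsetup remove "$dm"
done

The reproduction itself is just deleting a VM (e.g. qm destroy 109) and then creating or full-cloning a new VM with 109 as the target VMID.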
We are using direct-attached storage in a five-node compute setup, with LVM managing the isolation of the VM disks (this works for both KVM and LXC).
These are our package versions:
proxmox-ve: 4.4-87 (running kernel: 4.4.59-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.59-1-pve: 4.4.59-87
pve-kernel-4.4.44-1-pve: 4.4.44-84
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-49
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-94
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-99
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
I don't know whether this is related to our DAS setup or whether it is a bug in Proxmox in general (maybe some call isn't reaching the device mapper?). The DAS itself shouldn't be the problem, as it is only mapped into the nodes as a regular LVM physical volume with a volume group (configured via the CLI) that is provided to Proxmox as storage (configured via the GUI).
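For clarity, this is roughly how the storage side looks; the device path is just an example, and the storage.cfg lines below are from memory rather than copied from our nodes:

# DAS LUN is visible on every node as a plain block device (example path)
pvcreate /dev/sdb
vgcreate vms /dev/sdb

# The VG is then added in the GUI as a shared LVM storage, which results
# in an /etc/pve/storage.cfg entry roughly like:
#   lvm: vms
#       vgname vms
#       content images,rootdir
#       shared 1

So every vm-<vmid>-disk-<n> LV lives in that single volume group, and Proxmox is expected to deactivate and remove the device-mapper mapping when the VM is destroyed.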