cannot migrate - device-mapper: create ioctl on cluster failed

svacaroaia

Member
Oct 4, 2012
Hi,

I cannot migrate a newly created VM to any of my nodes because its disk is not available there.

Scenario:
Create a new VM on the primary node - all is well - see below the status of the volumes:
root@blh02-14:/backups# vgchange -ay
1 logical volume(s) in volume group "backup" now active
1 logical volume(s) in volume group "iso-volume" now active
3 logical volume(s) in volume group "pve" now active
75 logical volume(s) in volume group "cluster01-vol" now active

I cannot migrate it because of:
"TASK ERROR: can't activate LV '/dev/cluster01-vol/vm-349-disk-2': device-mapper: create ioctl on cluster01--vol-vm--349--disk--2 failed: Device or resource busy"

The only way to resolve this issue is to restart the nodes.
This seems to happen after I delete/remove and then recreate the disks from the Proxmox GUI.
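On the target node, the leftover mapping can presumably be confirmed before doing anything drastic, e.g. (device name taken from the error above):

dmsetup ls | grep vm--349                            # is there already a dm entry for this disk?
dmsetup info cluster01--vol-vm--349--disk--2         # state and open count of that entry
lvs -o lv_name,lv_attr cluster01-vol | grep vm-349   # does LVM consider the LV active here?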

I've read somewhere that there is a way to fix this using dmsetup.

I'd appreciate any hints / help / specific instructions.

Thanks
Steven

Here is the vgchange -ay output showing the entries that are stuck:
3 logical volume(s) in volume group "pve" now active
device-mapper: create ioctl on cluster01--vol-vm--347--disk--1 failed: Device or resource busy
device-mapper: create ioctl on cluster01--vol-vm--348--disk--1 failed: Device or resource busy
device-mapper: create ioctl on cluster01--vol-vm--349--disk--2 failed: Device or resource busy
device-mapper: create ioctl on cluster01--vol-vm--392--disk--1 failed: Device or resource busy
71 logical volume(s) in volume group "cluster01-vol" now active
 
dmsetup remove [-f|--force] [--retry] device_name

From the dmsetup man page:

remove [-f|--force] [--retry] device_name
Removes a device. It will no longer be visible to dmsetup.
Open devices cannot be removed except with older kernels that
contain a version of device-mapper prior to 4.8.0. In this case
the device will be deleted when its open_count drops to zero.
From version 4.8.0 onwards, if a device can't be removed because
an uninterruptible process is waiting for I/O to return from it,
adding --force will replace the table with one that fails all
I/O, which might allow the process to be killed. If an attempt
to remove a device fails, perhaps because a process run from a
quick udev rule temporarily opened the device, the --retry
option will cause the operation to be retried for a few seconds
before failing.
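
A minimal sketch of how that could look for the stuck entries from the output above (names taken from that output; check the open count first and only force if nothing legitimately holds the device):

dmsetup info cluster01--vol-vm--349--disk--2                     # Open count should be 0
dmsetup remove cluster01--vol-vm--349--disk--2
dmsetup remove --force --retry cluster01--vol-vm--349--disk--2   # only if the plain remove refuses

After that, vgchange -ay (or the migration itself) should be able to recreate the mapping cleanly.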
 
Sorry for digging out a thread from 2012, but it happened to me as well with Proxmox 6.3-3.

I have a SCSI array and created LVM on top of the LUN. Multipath is configured. The LVM volume group is enabled as shared storage.

What could be the cause of this?
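
Presumably the same kind of check applies here; with multipath in the picture, something along these lines (vm-XXX-disk-N below is a placeholder for the affected disk):

multipath -ll                            # are all paths to the LUN up?
dmsetup info -c | grep vm--XXX           # stale dm entry and its open count
lvchange -an /dev/<vg>/vm-XXX-disk-N     # try deactivating the LV on this node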
 
Hi,
I can reproduce this here, but only if I remove the code that's responsible for deactivating the volumes on the old node after migration.

Most likely there was some error in the past that prevented the volume from being deactivated correctly, so the old device-mapper entry is still present on the current target node. Please try @mir's solution.

Maybe you can even find the error from the failed deactivation in the old migration logs. It should be in the log of the last migration of the VM from the current target node to another node.
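
If the volume is still marked active on the current target node, deactivating it there by hand should also clear the stale device-mapper entry, roughly (disk name as in the error above, run on the node that refuses the activation):

lvchange -an /dev/cluster01-vol/vm-349-disk-2    # deactivate the leftover LV (removes its dm entry)
dmsetup remove cluster01--vol-vm--349--disk--2   # fallback if only the dm entry is left behind

The old migration task logs should still be on the source node under /var/log/pve/tasks/.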
 
