We have a 3-node cluster and a SAN that is connected via iSCSI (using LVM over iSCSI).
I had to do some maintenance and move machines around, and I noticed that one specific machine could not be migrated between nodes because of errors related to its storage.
Initially, when I tried to migrate the machine to another node, it gave me errors along the lines of:
Code:
device-mapper: create ioctl on san_lvm_volgroup-vm--135--disk--0 LVM-6kXN8w5HcxnsGojMrnMBI1IaPJ0kS3YSQRpboe0uwleV3UNQq8mJOlkrSDupKSi3 failed: Device or resource busy
I migrated the disk to another storage (one cluster member's NFS shared disk), and that went fine.
Now I want to migrate it back to the SAN, and I still get errors like this:
Code:
create full clone of drive scsi0 (srv005:135/vm-135-disk-0.qcow2)
device-mapper: create ioctl on san_lvm_volgroup-vm--135--disk--0 LVM-6kXN8w5HcxnsGojMrnMBI1IaPJ0kS3YSQRpboe0uwleV3UNQq8mJOlkrSDupKSi3 failed: Device or resource busy
TASK ERROR: storage migration failed: error with cfs lock 'storage-zesan-lvm': lvcreate 'san_lvm_volgroup/vm-135-disk-0' error: Failed to activate new LV san_lvm_volgroup/vm-135-disk-0.
lvscan and lvdisplay do not show "vm-135-disk-0". I suppose something is stuck somewhere, but I have no idea where. I have migrated other machines' disks to and from the SAN LVM successfully.
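My guess is a stale device-mapper mapping for the old LV is still present on one of the nodes, which would explain "Device or resource busy" even though LVM no longer knows about the volume. If it helps, this is what I was planning to check on each node (the dm name is copied from the error output; removing the mapping is only safe if its open count is 0 and no process is using it):

```shell
# Run on each cluster node: look for a leftover device-mapper
# entry for the old LV (name taken from the error message).
dmsetup ls | grep 'vm--135--disk--0'

# If it shows up, inspect its state and open count...
dmsetup info san_lvm_volgroup-vm--135--disk--0

# ...and, only if the open count is 0, remove the stale mapping
# so lvcreate can activate the new LV again.
dmsetup remove san_lvm_volgroup-vm--135--disk--0
```

But I'm not sure whether that is the right approach, or whether Proxmox keeps its own state that also needs cleaning up.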
Where should i start debugging this?