Activation of SAN LVs impossible

dt47

Hi, after today's update I tried to live-migrate a VM that also has a partition on the SAN. I got the following error message:
Code:
Okt 15 16:27:07 starting migration of VM 139 to node 'firle' (141.20.10.88)
Okt 15 16:27:07 copying disk images
Okt 15 16:27:07 starting VM 139 on remote node 'flush'
Okt 15 16:27:08 can't activate LV '/dev/ffsan/vm-139-disk-1':   device-mapper: reload ioctl on  failed: Invalid argument
Okt 15 16:27:08 ERROR: online migrate failure - command '/usr/bin/ssh -o 'BatchMode=yes' root@127.2.1.88 qm start 139 --stateuri tcp --skiplock --migratedfrom flash --machine pc-i440fx-1.4' failed: exit code 255
Okt 15 16:27:08 aborting phase 2 - cleanup resources
Okt 15 16:27:08 migrate_cancel
Okt 15 16:27:08 ERROR: migration finished with problems (duration 00:00:01)
TASK ERROR: migration problems
At that point only flush had been updated. Since this was the only VM with a problem, after a while I decided to update flash too and see what it looked like after a reboot. The result was that the VM wasn't bootable on either node.
Code:
TASK ERROR: can't activate LV '/dev/ffsan/vm-139-disk-1':   device-mapper: reload ioctl on  failed: Invalid argument
Removing LVs from ffsan was possible, but trying to create them resulted in a similar error message to the one above.
The updated packages were:
Code:
[AKTUALISIERUNG] base-files:amd64 7.1wheezy1 -> 7.1wheezy2
[AKTUALISIERUNG] bootlogd:amd64 2.88dsf-41 -> 2.88dsf-41+deb7u1
[AKTUALISIERUNG] devscripts:amd64 2.12.6 -> 2.12.6+deb7u1
[AKTUALISIERUNG] dpkg:amd64 1.16.10 -> 1.16.12
[AKTUALISIERUNG] dpkg-dev:amd64 1.16.10 -> 1.16.12
[AKTUALISIERUNG] gnupg:amd64 1.4.12-7+deb7u1 -> 1.4.12-7+deb7u2
[AKTUALISIERUNG] gpgv:amd64 1.4.12-7+deb7u1 -> 1.4.12-7+deb7u2
[AKTUALISIERUNG] grub-common:amd64 1.99-27+deb7u1 -> 1.99-27+deb7u2
[AKTUALISIERUNG] grub-pc:amd64 1.99-27+deb7u1 -> 1.99-27+deb7u2
[AKTUALISIERUNG] grub-pc-bin:amd64 1.99-27+deb7u1 -> 1.99-27+deb7u2
[AKTUALISIERUNG] grub2-common:amd64 1.99-27+deb7u1 -> 1.99-27+deb7u2
[AKTUALISIERUNG] initscripts:amd64 2.88dsf-41 -> 2.88dsf-41+deb7u1
[AKTUALISIERUNG] kpartx:amd64 0.4.9+git0.4dfdaf2b-6 -> 0.4.9+git0.4dfdaf2b-7~deb7u1
[AKTUALISIERUNG] libapr1:amd64 1.4.6-3 -> 1.4.6-3+deb7u1
[AKTUALISIERUNG] libcurl3-gnutls:amd64 7.26.0-1+wheezy3 -> 7.26.0-1+wheezy4
[AKTUALISIERUNG] libdpkg-perl:amd64 1.16.10 -> 1.16.12
[AKTUALISIERUNG] libperl5.14:amd64 5.14.2-21 -> 5.14.2-21+deb7u1
[AKTUALISIERUNG] libsensors4:amd64 1:3.3.2-2 -> 1:3.3.2-2+deb7u1
[AKTUALISIERUNG] libwbclient0:amd64 2:3.6.6-6 -> 2:3.6.6-6+deb7u1
[AKTUALISIERUNG] libxml2:amd64 2.8.0+dfsg1-7+nmu1 -> 2.8.0+dfsg1-7+nmu2
[AKTUALISIERUNG] libxml2-utils:amd64 2.8.0+dfsg1-7+nmu1 -> 2.8.0+dfsg1-7+nmu2
[AKTUALISIERUNG] linux-libc-dev:amd64 3.2.46-1+deb7u1 -> 3.2.51-1
[AKTUALISIERUNG] multipath-tools:amd64 0.4.9+git0.4dfdaf2b-6 -> 0.4.9+git0.4dfdaf2b-7~deb7u1
[AKTUALISIERUNG] mutt:amd64 1.5.21-6.2 -> 1.5.21-6.2+deb7u1
[AKTUALISIERUNG] nmap:amd64 6.00-0.3 -> 6.00-0.3+deb7u1
[AKTUALISIERUNG] perl:amd64 5.14.2-21 -> 5.14.2-21+deb7u1
[AKTUALISIERUNG] perl-base:amd64 5.14.2-21 -> 5.14.2-21+deb7u1
[AKTUALISIERUNG] perl-modules:amd64 5.14.2-21 -> 5.14.2-21+deb7u1
[AKTUALISIERUNG] python:amd64 2.7.3-4 -> 2.7.3-4+deb7u1
[AKTUALISIERUNG] python-minimal:amd64 2.7.3-4 -> 2.7.3-4+deb7u1
[AKTUALISIERUNG] samba-common:amd64 2:3.6.6-6 -> 2:3.6.6-6+deb7u1
[AKTUALISIERUNG] smbclient:amd64 2:3.6.6-6 -> 2:3.6.6-6+deb7u1
[AKTUALISIERUNG] sysv-rc:amd64 2.88dsf-41 -> 2.88dsf-41+deb7u1
[AKTUALISIERUNG] sysvinit:amd64 2.88dsf-41 -> 2.88dsf-41+deb7u1
[AKTUALISIERUNG] sysvinit-utils:amd64 2.88dsf-41 -> 2.88dsf-41+deb7u1
[AKTUALISIERUNG] tzdata:amd64 2013c-0wheezy1 -> 2013d-0wheezy1
I thought downgrading multipath-tools might solve the problem, but it didn't. The output of 'lvs -o +lv_tags' looks like this:
Code:
LV            VG      Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert LV Tags   
  vm-118-disk-1 ffsan   -wi-d---- 24,03g                                            pve-vm-118
  vm-124-disk-1 ffsan   -wi-d----  8,01g                                            pve-vm-124
  vm-129-disk-1 ffsan   -wi-d---- 24,03g                                            pve-vm-129
…
Normally an 'a' appears where the 'd' is when an LV is marked active. I haven't been able to find out what the 'd' stands for yet.
I am using the pve-no-subscription repository. The output of pveversion -v is:
Code:
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-7
qemu-server: 3.1-5
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-13
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
Any ideas?
 
Normally an 'a' appears where the 'd' is when an LV is marked active. I haven't been able to find out what the 'd' stands for yet.

See 'man lvs': (d)evice present without tables

I guess you lost access to the underlying storage (SAN)?
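
For anyone else reading that attr string: the 'lvs' Attr field is positional, and the fifth character is the State bit listed in 'man lvs'. A minimal sketch of the decoding (the helper function name is made up, and only the State values relevant here are mapped):

```shell
# Decode the State character (position 5) of an lvs Attr string,
# e.g. "-wi-d----" as shown in the `lvs -o +lv_tags` output above.
# Mapping taken from the State list in `man lvs`.
lv_state() {
    case "$(printf '%s' "$1" | cut -c5)" in
        a) echo "active" ;;
        s) echo "suspended" ;;
        d) echo "device present without tables" ;;
        i) echo "inactive table present" ;;
        *) echo "other (see man lvs)" ;;
    esac
}

lv_state "-wi-d----"   # the broken LVs in this thread
lv_state "-wi-a----"   # a normally activated LV
```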
 
Thanks for the reply.
I went on vacation, but I don't want to leave the thread as is.
I think my lvm.conf was slightly misconfigured, as I got a message about duplicate entries for my SAN device when running commands like "vgs".
I was still able to create lvs with
Code:
lvcreate --zero n -l 2 -n lvname ffsan /dev/sdf1:45073-45074
but not with a normal lvcreate command.
As everything was in a testing environment, I ended up just deleting everything; fixing the lvm.conf didn't solve anything, apart from making the warnings about duplicate PV entries go away.
The newly created PV works like it's supposed to again.
It would still be interesting to know why it stopped working, but this is not a Proxmox problem.
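
For reference, duplicate-PV warnings on a multipathed SAN usually mean LVM is scanning both the multipath device and the underlying /dev/sdX paths it is built from, so the same PV signature is seen twice. This is typically handled with a filter in /etc/lvm/lvm.conf; a sketch follows (the exact device patterns are assumptions and depend on the local setup):

```
# /etc/lvm/lvm.conf (fragment, illustrative only)
devices {
    # Accept multipath devices and the local boot disk,
    # reject everything else so raw SAN paths are not scanned a second time.
    filter = [ "a|^/dev/mapper/.*|", "a|^/dev/sda|", "r|.*|" ]
}
```

After changing the filter, running "pvscan" (or simply "vgs") should show each PV only once if the patterns match the setup.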
 
