Failed to create VM on DRBD volume

mamouni
Sep 23, 2011
Hi

In a new DRBD installation I'm unable to create a VM; I get this message:

/usr/sbin/qm create 108 --cdrom cdrom --name tezst --vlan0 rtl8139=32:FE:3D:33:07:7C --bootdisk ide0 --ostype other --ide0 hotshare:10,format=raw --memory 512 --onboot no --sockets 1
device-mapper: reload ioctl failed: Invalid argument
Aborting. Failed to activate new LV to wipe the start of it.
create failed - command '/sbin/lvcreate --addtag pve-vm-108 --size 10485760k --name vm-108-disk-1 drbdvg' failed with exit code 5
unable to apply VM settings -

Googling suggested that I should use kpartx, but it doesn't resolve my problem.
Can you help me, please?
What exactly do I have to do?

thanks.
 
Hi,

I had the same problem some days ago. Try changing the "filter" line in /etc/lvm/lvm.conf to the following:

Code:
filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]
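For what it's worth, LVM tries these filter patterns in order and the first match wins: plain disks (sd*) and the DRBD devices are accepted for scanning, and the final "r|.*|" rejects everything else, including the /dev/dm-* nodes that device-mapper layers on top. A rough Python sketch of that first-match-wins logic (not LVM's actual code, just an illustration):

```python
import re

# The three patterns from the lvm.conf filter above:
# "a|...|" accepts a device path, "r|...|" rejects it; first match wins.
FILTER = [("a", r"sd.*"), ("a", r"drbd.*"), ("r", r".*")]

def lvm_accepts(device_path):
    """Return True if the filter would let LVM scan this device path."""
    for action, pattern in FILTER:
        if re.search(pattern, device_path):
            return action == "a"
    return True  # paths matched by no pattern are accepted

# /dev/sd* and /dev/drbd* get scanned; dm-* nodes are filtered out,
# which avoids the duplicate-PV confusion that breaks lvcreate here.
```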

That should help; otherwise, please provide the output of
Code:
pvescan

Hope that will help!
 
Thanks for the answer. I can now create VMs, but I'm unable to migrate them; perhaps I have missed something on node 2.

/usr/sbin/qmigrate --online 172.16.19.23 110
Oct 02 21:22:27 starting migration of VM 110 to host '172.16.19.23'
Oct 02 21:22:27 copying disk images
Oct 02 21:22:27 starting VM on remote host '172.16.19.23'
device-mapper: reload ioctl failed: Invalid argument
device-mapper: reload ioctl failed: Invalid argument
device-mapper: reload ioctl failed: Invalid argument
device-mapper: reload ioctl failed: Invalid argument
device-mapper: reload ioctl failed: Invalid argument
command '/sbin/lvchange -aly /dev/drbdvg/vm-110-disk-2' failed with exit code 5
volume 'hotshare:vm-110-disk-2' does not exist
Oct 02 21:22:28 online migrate failure - command '/usr/bin/ssh -c blowfish -o BatchMode=yes root@172.16.19.23 /usr/sbin/qm --skiplock start 110 --incoming tcp' failed with exit code 2
Oct 02 21:22:28 migration finished with problems (duration 00:00:02)
VM 110 migration failed -
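The 'lvchange -aly' failure on the remote host is the same device-mapper error as before, so one likely cause (an assumption, not confirmed in the thread) is that the lvm.conf filter change was only applied on the first node. Online migration activates the LV on the target node, so every node needs the same filter; the fragment below is the same fix, shown in its place in the devices section:

```
# /etc/lvm/lvm.conf -- devices section, on *every* cluster node
# (same filter as suggested above; a sketch, adjust to your disks)
devices {
    filter = [ "a|sd.*|", "a|drbd.*|", "r|.*|" ]
}
```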
 
I just noticed the same error with an rsync migration on the same node; there is no message on the master.


/usr/bin/ssh -t -t -n -o BatchMode=yes 172.16.19.23 /usr/sbin/qmigrate 172.16.19.18 102
Oct 02 23:47:23 starting migration of VM 102 to host '172.16.19.18'
Oct 02 23:47:23 copying disk images
device-mapper: reload ioctl failed: Invalid argument
device-mapper: reload ioctl failed: Invalid argument
device-mapper: reload ioctl failed: Invalid argument
device-mapper: reload ioctl failed: Invalid argument
device-mapper: reload ioctl failed: Invalid argument
vm-102-disk-1.raw

rsync status: 15728640000 100% 46.72MB/s 0:05:21 (xfer#1, to-check=0/1)

sent 15730560079 bytes received 31 bytes 48928647.31 bytes/sec
total size is 15728640000 speedup is 1.00
Oct 02 23:52:47 migration finished successfuly (duration 00:05:24)
Connection to 172.16.19.23 closed.
VM 102 migration done
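As a quick sanity check on the log above, the rsync numbers are self-consistent: 15 728 640 000 bytes over 5 min 21 s works out to roughly the 46.72 MB/s rsync reports. In Python:

```python
# Figures copied from the rsync status line in the migration log above.
total_bytes = 15_728_640_000      # "total size is 15728640000"
duration_s = 5 * 60 + 21          # "0:05:21" -> 321 seconds

rate_mib_s = total_bytes / duration_s / (1024 ** 2)
print(f"~{rate_mib_s:.2f} MiB/s")  # in line with rsync's 46.72MB/s
```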

proxmox1:~# pveversion -v
pve-manager: 1.9-24 (pve-manager/1.9/6542)
running kernel: 2.6.32-6-pve
pve-kernel-2.6.32-6-pve: 2.6.32-47
qemu-server: 1.1-32
pve-firmware: 1.0-14
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-2pve1
vzdump: not correctly installed
vzprocps: 2.0.11-2
vzquota: 3.0.11-1

proxmox2:~# pveversion -v
pve-manager: 1.9-24 (pve-manager/1.9/6542)
running kernel: 2.6.35-2-pve
proxmox-ve-2.6.35: 1.8-13
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.35-2-pve: 2.6.35-13
qemu-server: 1.1-32
pve-firmware: 1.0-14
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-2pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.0-6
 
vzdump: not correctly installed

All is OK now, after running:

Code:
aptitude install vzdump
aptitude update
aptitude upgrade

/usr/sbin/qmigrate --online 172.16.19.23 111
Oct 03 00:37:49 starting migration of VM 111 to host '172.16.19.23'
Oct 03 00:37:49 copying disk images
Oct 03 00:37:49 starting VM on remote host '172.16.19.23'
Oct 03 00:37:50 starting migration tunnel
Oct 03 00:37:51 starting online/live migration
Oct 03 00:37:53 migration status: active (transferred 65876KB, remaining 472348KB), total 540992KB)
Oct 03 00:37:55 migration status: active (transferred 136676KB, remaining 400776KB), total 540992KB)
Oct 03 00:37:57 migration status: active (transferred 206644KB, remaining 330708KB), total 540992KB)
Oct 03 00:37:59 migration status: active (transferred 275373KB, remaining 261768KB), total 540992KB)
Oct 03 00:38:01 migration status: active (transferred 345725KB, remaining 191104KB), total 540992KB)
Oct 03 00:38:03 migration status: active (transferred 415917KB, remaining 120324KB), total 540992KB)
Oct 03 00:38:05 migration status: active (transferred 484769KB, remaining 50612KB), total 540992KB)
Oct 03 00:38:07 migration status: completed
Oct 03 00:38:07 migration speed: 32.00 MB/s
Oct 03 00:38:08 migration finished successfuly (duration 00:00:19)
VM 111 migration done

thanks
 
