Hello,
So, long story short, I am trying to use SMB network storage to host VMs for live migration. I added the share to /etc/fstab, mounted it successfully, and can read and write to the share with no problem. I am even able to create and restore backups without a problem. The issue is that when I try to create a VM on the storage, or restore a VM to it, I get the following errors:
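For reference, the fstab entry is basically a standard CIFS mount, along these lines (the server name, share, mount point, and credentials file below are placeholders, not my exact values):
Code:
//fileserver/vmstore  /mnt/vmstore  cifs  credentials=/root/.smbcredentials,uid=root,gid=root,file_mode=0660,dir_mode=0770,_netdev  0  0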
When creating a KVM:
Code:
TASK ERROR: create failed - unable to create image: got lock timeout - aborting command
When restoring a KVM:
Code:
restore vma archive: lzop -d -c /var/lib/vz/dump/vzdump-qemu-106-2013_03_21-15_13_42.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp247935.fifo - /var/tmp/vzdumptmp247935
CFG: size: 311 name: qemu-server.conf
DEV: dev_id=1 size: 17179869184 devname: drive-ide0
CTIME: Thu Mar 21 15:13:44 2013
TASK ERROR: command 'lzop -d -c /var/lib/vz/dump/vzdump-qemu-106-2013_03_21-15_13_42.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp247935.fifo - /var/tmp/vzdumptmp247935' failed: unable to create image: got lock timeout - aborting command
However, the qcow2 file does show up when I check the storage, so the file was created and expanded to the proper size. I verified that the user account the nodes use to mount the storage has full access, and that this applies to all files and folders.
I am using this in a cluster of 2 nodes, and both have the same problem.
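For what it's worth, this is the kind of quick check I ran from both nodes to confirm the mount account can read and write (the mount point here is just a placeholder, matching the example above):
Code:
# write and delete a test file as root, the account that mounts the share
touch /mnt/vmstore/images/testfile && echo "write OK"
rm /mnt/vmstore/images/testfile
# check ownership and permissions on the image directory
ls -ld /mnt/vmstore/images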
pveversion -v
Code:
root@srv-1-02:~# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-18
pve-firmware: 1.0-21
libpve-common-perl: 1.0-48
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-6
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-8
ksm-control-daemon: 1.1-1
Thanks