error with cfs lock '****': unable to create image: got lock timeout - aborting command

Apr 2, 2016
Hello,

I'm attempting to add a secondary virtual HDD to one of my VMs, created on a storage space that is mounted and attached as a directory.

If I make it a small disk, e.g. 40GB, it is created successfully, but if I try 1TiB it fails and I get this error:

update VM 300: -scsi1 storage-2:1000,format=qcow2
TASK ERROR: error with cfs lock 'storage-storage-2': unable to create image: got lock timeout - aborting command

I did try to find the CLI command for doing this, thinking that might have better results, but I was not able to locate it.

~Sean
 

Attachments

  • Capture1.JPG (64.1 KB)
Update: I tried creating the secondary disk using raw instead of qcow2, and it worked just fine. Not sure why, but that will do; it's acceptable for me.
 
Same problem when I try to create a new virtual machine with a hard drive >= 50GB.
As a workaround I create the machine with a 32GB disk, and once the machine is created I resize the disk to 50GB.
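The create-small-then-resize workaround described above can also be done from the CLI; a sketch, where the VM ID 101, the scsi1 slot, and the storage name NFScb1 are placeholders, not values from this post:

```shell
# Allocate a small 32G qcow2 disk first -- small allocations finish
# well inside the 60s cluster-lock window.
qm set 101 --scsi1 NFScb1:32,format=qcow2

# Then grow it to the intended size; resizing an existing qcow2 image
# is a fast metadata operation and does not re-run the full allocation.
qm resize 101 scsi1 +18G   # 32G + 18G = 50G
```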
 
hi,

can't reproduce this here, can you post 'pveversion -v'?

what is the underlying storage type?

do you see anything in syslog/journal when the error appears?
 
The Storage is NFS

The syslog trace is:
Code:
Apr 30 09:38:04 prox01 pvedaemon[10020]: VM 998 creating disks failed
Apr 30 09:38:04 prox01 pvedaemon[10020]: unable to create VM 998 - error with cfs lock 'storage-NFScb1': unable to create image: got lock timeout - aborting command
Apr 30 09:38:04 prox01 pvedaemon[12903]: <root@pam> end task UPID:dvall-prox01:00002724:168A03F1:5EAA8020:qmcreate:998:root@pam: unable to create VM 998 - error with cfs lock 'storage-NFScb1': unable to create image: got lock timeout - aborting command
Apr 30 09:38:55 dvall-prox01 pvedaemon[22941]: <root@pam> successful auth for user 'root@pam'

The result of pveversion -v:

Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-5
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-8
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-3-pve: 5.3.13-3
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-4-pve: 5.0.21-9
pve-kernel-4.15.18-20-pve: 4.15.18-46
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-22
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
we have a hard timeout of 60s for any operation obtaining a cluster lock, which includes volume allocation on shared storages.
 
no, it's hard-coded to ensure a blocking call releases the lock again. you can manually create the image (with qemu-img create) and then rescan to reference it as an unused volume in the configuration
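A sketch of that manual workaround, with the VM ID 998 and storage name NFScb1 taken from the log above and the mount path assumed to be the default for a directory/NFS storage:

```shell
# Create the qcow2 image directly on the storage -- this runs outside
# the 60s cluster lock, so it can take as long as the storage needs.
# (1 TiB = 1024*1024*1024 KiB = 1073741824K, the unit PVE itself uses.)
qemu-img create -f qcow2 -o preallocation=metadata \
    /mnt/pve/NFScb1/images/998/vm-998-disk-1.qcow2 1T

# Let Proxmox pick the image up; it appears as an "unused" disk in the
# VM configuration, from where it can be attached in the GUI.
qm rescan --vmid 998
```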
 
Same problem when I try to import a ovf:

# qm importovf 222 Serv-XXXX-Sistemas.ovf NFScb1 --format qcow2
import failed - error during cfs-locked 'storage-NFScb1' operation: unable to create image: got lock timeout - aborting command

With raw format there is no problem.

How can I do this import?

I want the qcow2 format because it can be snapshotted.
 
your storage is simply too slow when allocating bigger images, it seems. you need to allocate them manually, for example using qemu-img create or convert.
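One way to sketch that for the OVF case, using the file names from the post above (the import-as-raw-then-convert route is an assumption, not the only option):

```shell
# Import with raw format first -- raw allocation completes fast enough
# to finish inside the 60s lock window on this storage.
qm importovf 222 Serv-XXXX-Sistemas.ovf NFScb1 --format raw

# Convert the imported disk to qcow2 outside the cluster lock
# (path assumes a directory/NFS storage mounted under /mnt/pve).
SRC=/mnt/pve/NFScb1/images/222/vm-222-disk-0.raw
DST=${SRC%.raw}.qcow2
qemu-img convert -p -f raw -O qcow2 "$SRC" "$DST"

# Register the new image; it shows up as an unused disk on VM 222,
# after which the raw original can be detached and removed.
qm rescan --vmid 222
```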
 
Hi, I have to resurrect this topic. One of our VMs has a 5TB raw virtual drive. When trying to move that disk to another storage, the hypervisor issues this command:

Bash:
/usr/bin/qemu-img create -o preallocation=metadata -f qcow2 /mnt/pve/SN7/images/373/vm-373-disk-0.qcow2 5368709120K

That takes more than 60 seconds over NFS; preallocation takes its time, I guess. The storage is connected via 10Gbps Ethernet, which is more than enough for our production use. So we are effectively forced to use the raw format if we need to move this drive. Any solution for this?

Thank you.

Code:
proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager: 6.2-12 (running version: 6.2-12/b287dd27)
pve-kernel-5.4: 6.2-7
pve-kernel-helper: 6.2-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 0.9.0-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-1
pve-cluster: 6.2-1
pve-container: 3.2-2
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-2
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-14
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve2
 
upgrade (first to latest 6.4, then to 7.x) - then you can configure the preallocation mode to avoid this issue.
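On PVE 6.4/7.x, the preallocation mode can be set per directory-backed storage (including NFS) in /etc/pve/storage.cfg, with values off, metadata, falloc, or full; with preallocation off the qcow2 create returns almost immediately. A sketch, where the storage name SN7 is taken from the earlier post and the export, server, and paths are placeholders:

```
nfs: SN7
	export /export/sn7
	path /mnt/pve/SN7
	server 10.0.0.7
	content images
	preallocation off
```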
 
Hi,
In version 7.1 I am getting the following error when creating a 2TB virtual disk:

Formatting '/vms/images/102/vm-102-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=2147483648000 lazy_refcounts=off refcount_bits=16
TASK ERROR: unable to create VM 102 - unable to create image: 'storage-VMS'-locked command timed out - aborting

What is the solution for this error? I am using the web administration panel.
 
Hi !
Virtual Environment 7.2-4

unable to create VM 115 - API Return-Code: 500. Message: Could not set allow-two-primaries on resource definition vm-115-disk-2, because: 'storage-Sql1C'-locked command timed out - aborting at /usr/share/perl5/PVE/Storage/Custom/LINSTORPlugin.pm line 364.

drbd: Sql1C
	resourcegroup Sql1C
	content images,rootdir
	controller lc
#	preallocation full

Jun 30 09:52:16 sh5 pvestatd[2321]: file /etc/pve/storage.cfg line 24 (section 'Sql1C') - unable to parse value of 'preallocation': unexpected property 'preallocation'

A timeout of 60s is too short. The timeout occurs regularly, and it especially complicates restoring from backup archives.
 
preallocation is only valid for directory type storages - you seem to be using Linbit's linstor plugin? I think the next step would be to find out which operation exactly takes too long - and then why ;)
 
Old thread but I have a similar issue. In my case the drbd/linstor plugin times out due to the zfs backing pool operations being very slow. This is because my zfs on spinning rust pools have lots of snapshots. It's not really linstor's fault. It'd be nice to be able to adjust the timeout for storage operations somehow.
 
Actually I found a way to work around this that is satisfactory to me. Edit /usr/share/perl5/PVE/Cluster.pm on all the cluster nodes, change alarm(60) to alarm(240) or whatever timeout you want. Then restart pvedaemon.
 
Actually I found a way to work around this that is satisfactory to me. Edit /usr/share/perl5/PVE/Cluster.pm on all the cluster nodes, change alarm(60) to alarm(240) or whatever timeout you want. Then restart pvedaemon.
that's really dangerous and possibly causes corruption of files in /etc/pve (which are protected by that lock and timeout).
 