Hello,
I have multiple VMs running inside of my cluster which are link-cloned from Ceph templates.
If I destroy (via the API) several VMs at around the same time, all but the first operation will fail with "Error: unexpected status", with the task log showing:
Code:
trying to aquire lock... OK
trying to aquire cfs lock 'storage-ceph' ...TASK ERROR: got lock request timeout
I also get this if I try to delete several templates at the same time (usually these are also link-cloned from a parent template, though I'm not sure whether that's related).
Is there a way to make this work? Obviously I can build retry logic into my app, but that's a bit of a hack...
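For reference, the retry hack I'd have to bolt on would look roughly like this. This is just a minimal sketch: `destroy_vm` stands in for whatever call my app makes against the API, and I'm matching on the "got lock request timeout" string from the task log, which feels fragile.

```python
import time

def retry_on_lock_timeout(op, attempts=5, delay=1.0, backoff=2.0):
    """Retry `op` when it fails with the cfs lock timeout, backing off between tries.

    `op` is a zero-argument callable (e.g. a lambda wrapping the destroy call).
    Any other error, or exhausting all attempts, re-raises.
    """
    for attempt in range(attempts):
        try:
            return op()
        except RuntimeError as err:
            # Only retry the known transient lock error; anything else is real.
            if "got lock request timeout" not in str(err) or attempt == attempts - 1:
                raise
            time.sleep(delay * (backoff ** attempt))

# Hypothetical usage -- destroy_vm is a placeholder for my API wrapper:
# retry_on_lock_timeout(lambda: destroy_vm(vmid))
```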
pve-manager/3.3-1/a06c9f73 (running kernel: 3.16.2) - cluster (3 nodes with Ceph)
Thanks!
George