Problem deleting lxc templates Proxmox VE Ceph

Johan Blaauw

Member
Sep 1, 2018
Hi,

I created a template of an LXC container and subsequently created a container from this template.
After changing some settings in that container, I wanted to turn it into a new template and, once that was done, delete the first template. When I tried to delete the first template, I got the response that there are still images linked to it. So the simplest thing seemed to be to delete the second template first and then the first one.
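Roughly this sequence on the CLI, for reference (only a sketch; 104 is the template ID from the error below, 105 and the hostname are made-up examples):

Code:
pct template 104                  # convert the container into the first template
pct clone 104 105 --hostname ct2  # creates a linked clone by default
# ... change settings inside 105 ...
pct template 105                  # convert the clone into the second template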

When I try to delete the first template I get the following error:

TASK ERROR: error with cfs lock 'storage-CT_Storage': base volume 'base-104-disk-1' is still in use by linked clones

When I try to delete the second template I get the following error:

TASK ERROR: error with cfs lock 'storage-CT_Storage': rbd error: error setting snapshot context: (2) No such file or directory

I've googled several possible solutions, but none of them worked; they all ended in the errors mentioned above.
 
Do a full clone of the template, not a linked one (default).
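On the CLI that would be something like this (a sketch; 104 is the template from the error above, 106 a free VMID chosen as an example, CT_Storage the storage name from the error):

Code:
pct clone 104 106 --full --storage CT_Storage   # full copy, independent of the base image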
 
Hi Alwin,
Thanks for your reply.

There are no clones of the templates; there are only the two templates.
How can I delete the two templates?
 
TASK ERROR: error with cfs lock 'storage-CT_Storage': base volume 'base-104-disk-1' is still in use by linked clones
Once no CT is linked to the template, removal should be possible (More button).

TASK ERROR: error with cfs lock 'storage-CT_Storage': rbd error: error setting snapshot context: (2) No such file or directory
Does the RBD image for that template still exist?

If the images for the templates no longer exist and removal through the GUI/CLI is not possible, then you can remove the vmid.conf under /etc/pve/lxc/.
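On the CLI that would be roughly (a sketch; 104 taken from the error above):

Code:
pct destroy 104            # removal once nothing is linked to the template anymore
rm /etc/pve/lxc/104.conf   # last resort, only if the RBD image is already gone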
 
We are having the same issue when trying to delete container templates:

On the first attempt to delete (after all linked clones were removed):

Code:
2019-01-10 15:48:27.548015 7f56b1375100 -1 did not load config file, using default settings.
2019-01-10 15:48:27.653172 7f770032c100 -1 did not load config file, using default settings.
2019-01-10 15:48:27.756429 7f138504a100 -1 did not load config file, using default settings.
Removing all snapshots: 100% complete...
Removing all snapshots: 100% complete...done.
2019-01-10 15:48:28.078412 7f741fbdb100 -1 did not load config file, using default settings.
image has watchers - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
TASK ERROR: error with cfs lock 'storage-ceph_storage': rbd rm 'base-128-disk-2' error: rbd: error: image still has watchers

When checking RBD in the Ceph cluster, the PVE node is the only listed watcher. Does it not recognize itself?
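For reference, the watcher list can be shown with something like this (a sketch; assuming the image lives in the same "vm" pool as in the output further below):

Code:
rbd status vm/base-128-disk-2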

Then, on subsequent attempts to delete the template:

Code:
TASK ERROR: error with cfs lock 'storage-ceph_storage': rbd error: error setting snapshot context: (2) No such file or directory

The RBD image still exists in Ceph, and the Proxmox node where the template was originally created is listed as the only watcher:

Code:
root@x:~# rbd status vm/base-117-disk-1
Watchers:
        watcher=[pve-node]:0/3255163978 client.8081844 cookie=x

How can I have the Proxmox node cleanly unmount/unwatch the RBD image? I don't want to remove the files in "/etc/pve/local/lxc" until the actual backing RBD images are deleted.
 
How can I have the Proxmox node cleanly unmount/unwatch the RBD image? I don't want to remove the files in "/etc/pve/local/lxc" until the actual backing RBD images are deleted.
Check on the watching node whether the image is still mounted or mapped. If that doesn't help, reboot the node as a last resort.
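For example (a sketch; rbdX stands for the device number shown by showmapped):

Code:
rbd showmapped             # is an image of that CT still mapped on this node?
findmnt | grep /dev/rbd    # is any mapped rbd device still mounted?
rbd unmap /dev/rbdX        # unmap by device if nothing uses it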
 
I believe this may be inching close to the core issue.

VM 117 was an LXC container that was converted to a template through the Proxmox GUI. This causes the disk image to be renamed (or cloned, perhaps?)

The container template lists the following rootfs in its configuration:

Code:
root@p:~# pvesh get nodes/p/lxc/117/config | grep 117
200 OK
   "rootfs" : "ceph_storage:base-117-disk-1,size=8G",

However, the Proxmox node has the disk mapped by another name:

Code:
root@p:~# rbd showmapped | grep 117
5  vm   vm-117-disk-1   -    /dev/rbd5

In Ceph, "vm-117-disk-1" is not a valid name. It is actually named "base-117-disk-1" after the conversion to a template:

Code:
root@c:~# rbd list vm | grep 117
base-117-disk-1

My suspicion is that, when converting to a template, Proxmox renames/clones the RBD image on the Ceph storage but somehow fails to update the local RBD mapping? Then, when deleting the template, it attempts to unmap "base-117-disk-1", which does not exist as a mapping locally?
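For completeness, one can first verify that nothing on the node still uses the mapped device (device path taken from the showmapped output above):

Code:
findmnt /dev/rbd5   # no output = not mounted anywhere
lsof /dev/rbd5      # no output = no process has it open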

Unmapping "vm-117-disk-1" on the Proxmox node, then removing the RBD image in Ceph seems to work:

Code:
root@p:~# rbd unmap vm/vm-117-disk-1
Code:
root@c:~# rbd rm vm/base-117-disk-1
Removing image: 100% complete...done.

Is it now safe to simply delete "/etc/pve/lxc/117.conf"?
 
VM 117 was an LXC container that was converted to a template through the Proxmox GUI. This causes the disk image to be renamed (or cloned, perhaps?)
They are renamed when converted into a template.

My suspicion is that, when converting to a template, Proxmox renames/clones the RBD image on the Ceph storage, but somehow fails to update the local RBD mappings? Then, when attempting to delete the template, it then attempts to unmap "base-117-disk-1", which does not exist as a mapping locally?
I will try to reproduce this on my system.

Is it now safe to simply delete "/etc/pve/lxc/117.conf"?
If there are no linked clones referencing the template, then yes. And if there are, they would be missing their base image.
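A quick way to check that before removing the config (a sketch; the grep covers the container configs of all nodes in the cluster filesystem):

Code:
grep -r 'base-117-disk-1' /etc/pve/nodes/*/lxc/   # should return nothing
rm /etc/pve/lxc/117.conf                          # then the template config can go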
 
