[SOLVED] Unable to delete linked Templates

Brian Read

Well-Known Member
Jan 4, 2017
I have a whole series of VM templates that have been created by updating a linked clone and then converting to a template, etc.

I have deleted all the linked clone VMs, but am finding that the templates refuse to delete because the hard drive files are still "linked" (presumably to earlier clones which are now templates). I tried detaching the drives, but the delete still fails.

Anyone got any ideas?
 
Hi,
even if you convert a linked clone into a template, its disk will still reference the original one. So you can't just remove the original disks without breaking the linked clone/template!

What you could do is the following (an example command for step 1 is shown after the list):
  1. Find the templates that have a linked disk
  2. Make a full clone of these
  3. Convert those full clones to templates (intending to replace the other ones)
  4. If the templates with the linked disk don't have linked clones themselves, you can remove them
  5. If you can remove all such templates, you can remove the original template you wanted to remove
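As a rough sketch of step 1 (node pve and storage local-zfs below are just example names, adjust them to your setup), you could list a storage's content and keep only the entries that have a non-null parent:
Code:
# show only volumes that are linked clones, i.e. have a parent set
pvesh get /nodes/pve/storage/local-zfs/content --output-format json-pretty \
    | grep -B 3 '"parent" : "'
Each match prints the name of the cloned disk together with its parent, so you can see which templates still depend on another one.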
 
I think you may have misunderstood what I want to do. I have a set of templates that have been created one after another, each from a linked clone of the previous one (over about a year). I want to delete all the templates (I've already deleted all the linked clone VMs). I've detached the disks from each of the templates, but it still gives me an error message saying that a linked clone still exists when I try to delete them.

Your point (4) seems to imply that I ought to be able to delete them?
 
For each storage the template VMs use, please check the output of
Code:
pvesh get /nodes/<node>/storage/<storage>/content
replacing <node> and <storage> with the concrete values. You should see a parent column there, which indicates whether a disk is a linked clone of another one.

If there are still problems, please share that output (adding --output-format json-pretty to the command) and also the output of pveversion -v.
 
105 is the oldest in the series of templates, 106 is the next one in the series.


Code:
root@pve:~# pvesh get /nodes/pve/storage/local-zfs/content --output-format json-pretty | grep 105
"my" variable $node masks earlier declaration in same scope at /usr/share/perl5/PVE/API2/Disks/ZFS.pm line 345.
      "name" : "base-105-disk-0",
      "vmid" : 105,
      "volid" : "local-zfs:base-105-disk-0"
      "parent" : "base-105-disk-0@__base__",
      "volid" : "local-zfs:base-105-disk-0/base-106-disk-0"
root@pve:~# qm destroy 105
base volume 'local-zfs:base-105-disk-0' is still in use by linked cloned
root@pve:~# qm destroy 105 --purge
base volume 'local-zfs:base-105-disk-0' is still in use by linked cloned
root@pve:~#

Code:
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-105-disk-0",
      "parent" : null,
      "size" : 34359738368,
      "vmid" : 105,
      "volid" : "local-zfs:base-105-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-106-disk-0",
      "parent" : "base-105-disk-0@__base__",
      "size" : 34359738368,
      "vmid" : 106,
      "volid" : "local-zfs:base-105-disk-0/base-106-disk-0"
   }

Code:
root@pve:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.60-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-helper: 7.2-12
pve-kernel-5.15: 7.2-11
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-9
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.6-1
proxmox-backup-file-restore: 2.2.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-3
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
 
105 is the oldest in the series of templates, 106 is the next one in the series.

Code:
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-105-disk-0",
      "parent" : null,
      "size" : 34359738368,
      "vmid" : 105,
      "volid" : "local-zfs:base-105-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-106-disk-0",
      "parent" : "base-105-disk-0@__base__",
      "size" : 34359738368,
      "vmid" : 106,
      "volid" : "local-zfs:base-105-disk-0/base-106-disk-0"
   }
base-106-disk-0 is a linked clone of base-105-disk-0@__base__, as indicated by the parent property (and also encoded in the volid). Removing the parent would break the linked clone, so Proxmox VE complains.

Again, you would need to make a full clone of 106 and convert it to a template again, to get something independent of 105. But keep in mind that you can't remove 106 afterwards if it also has linked clones.
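For example (a sketch only - 200 is a hypothetical, unused VMID; pick any free one), the full clone plus re-templating would look like this:
Code:
# make a full, independent copy of template 106 and turn that copy into a template
qm clone 106 200 --full --name tmpl-106-full
qm template 200
# later, once nothing references 106's disk any more, 106 itself could be removed:
# qm destroy 106 --purge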
 
I am trying to just remove these templates entirely (all of them); it is beginning to sound as though this is not possible through the commands?
 
What happens if you remove 106 first?
 
Code:
root@pve:~# qm destroy 106 --purge
base volume 'local-zfs:base-105-disk-0/base-106-disk-0' is still in use by linked cloned
root@pve:~# qm destroy 106
base volume 'local-zfs:base-105-disk-0/base-106-disk-0' is still in use by linked cloned
root@pve:~#
 
... and if I try to delete the final template, I get the same error message:
Code:
root@pve:~# qm destroy 131
base volume 'rpool1:base-117-disk-0/base-131-disk-0' is still in use by linked cloned
root@pve:~# qm destroy 131 --purge
base volume 'rpool1:base-117-disk-0/base-131-disk-0' is still in use by linked cloned
root@pve:~#
 
Please share the full output of pvesh get /nodes/pve/storage/local-zfs/content --output-format json-pretty. You need to remove all linked clones of a template first to be able to remove the template. If you have a chain of templates and linked clones, that means removing the very end of that chain first (the entries at the end can be non-templates too, of course).
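As a sketch with hypothetical VMIDs: if template 300 is a linked clone of template 200, which is itself a linked clone of template 100, the removal has to start at the end of the chain:
Code:
# hypothetical chain: 100 -> 200 -> 300, each a linked clone of the previous one
qm destroy 300 --purge   # end of the chain, nothing references its disk
qm destroy 200 --purge   # possible now that 300 is gone
qm destroy 100 --purge   # finally the original template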
 
That is VM 131 (see above) - the last in the chain - and I have already deleted all the linked VMs.

The complete chain of templates has drives in two storage areas; I'll share those a little later.
 
Since the error message says that there's still a linked clone for the disk on rpool1, the output of pvesh get /nodes/pve/storage/rpool1/content --output-format json-pretty would be more relevant.
 
Code:
root@pve:~# pvesh get /nodes/pve/storage/rpool1/content --output-format json-pretty
"my" variable $node masks earlier declaration in same scope at /usr/share/perl5/PVE/API2/Disks/ZFS.pm line 345.
[
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-109-disk-0",
      "parent" : "base-107-disk-0@__base__",
      "size" : 34359738368,
      "vmid" : 109,
      "volid" : "rpool1:base-107-disk-0/base-109-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-117-disk-0",
      "parent" : null,
      "size" : 34359738368,
      "vmid" : 117,
      "volid" : "rpool1:base-117-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-131-disk-0",
      "parent" : "base-117-disk-0@__base__",
      "size" : 34359738368,
      "vmid" : 131,
      "volid" : "rpool1:base-117-disk-0/base-131-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-118-disk-0",
      "parent" : null,
      "size" : 34574696448,
      "vmid" : 118,
      "volid" : "rpool1:base-118-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-123-disk-0",
      "parent" : null,
      "size" : 34359738368,
      "vmid" : 123,
      "volid" : "rpool1:base-123-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-126-disk-0",
      "parent" : null,
      "size" : 64424509440,
      "vmid" : 126,
      "volid" : "rpool1:base-126-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-107-disk-0",
      "parent" : "base-131-disk-0@__base__",
      "size" : 34359738368,
      "vmid" : 107,
      "volid" : "rpool1:base-131-disk-0/base-107-disk-0"
   },
   {
      "content" : "rootdir",
      "format" : "subvol",
      "name" : "subvol-116-disk-0",
      "parent" : null,
      "size" : 6442450944,
      "vmid" : 116,
      "volid" : "rpool1:subvol-116-disk-0"
   },
   {
      "content" : "rootdir",
      "format" : "subvol",
      "name" : "subvol-119-disk-0",
      "parent" : null,
      "size" : 21474836480,
      "vmid" : 119,
      "volid" : "rpool1:subvol-119-disk-0"
   },
   {
      "content" : "rootdir",
      "format" : "subvol",
      "name" : "subvol-128-disk-0",
      "parent" : null,
      "size" : 21474836480,
      "vmid" : 128,
      "volid" : "rpool1:subvol-128-disk-0"
   },
   {
      "content" : "rootdir",
      "format" : "subvol",
      "name" : "subvol-129-disk-0",
      "parent" : null,
      "size" : 8589934592,
      "vmid" : 129,
      "volid" : "rpool1:subvol-129-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "vm-100-disk-0",
      "parent" : null,
      "size" : 34574696448,
      "vmid" : 100,
      "volid" : "rpool1:vm-100-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "vm-102-disk-0",
      "parent" : null,
      "size" : 1073741824000,
      "vmid" : 102,
      "volid" : "rpool1:vm-102-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "vm-103-disk-0",
      "parent" : null,
      "size" : 536870912000,
      "vmid" : 103,
      "volid" : "rpool1:vm-103-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "vm-104-disk-0",
      "parent" : null,
      "size" : 107660443648,
      "vmid" : 104,
      "volid" : "rpool1:vm-104-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "vm-115-disk-0",
      "parent" : null,
      "size" : 34359738368,
      "vmid" : 115,
      "volid" : "rpool1:vm-115-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "vm-120-disk-0",
      "parent" : null,
      "size" : 55834574848,
      "vmid" : 120,
      "volid" : "rpool1:vm-120-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "vm-121-disk-0",
      "parent" : null,
      "size" : 64424509440,
      "vmid" : 121,
      "volid" : "rpool1:vm-121-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "vm-122-disk-0",
      "parent" : null,
      "size" : 34359738368,
      "vmid" : 122,
      "volid" : "rpool1:vm-122-disk-0"
   },
   {
      "content" : "images",
      "format" : "raw",
      "name" : "vm-125-disk-0",
      "parent" : null,
      "size" : 26843545600,
      "vmid" : 125,
      "volid" : "rpool1:vm-125-disk-0"
   }
]
 
Code:
   {
      "content" : "images",
      "format" : "raw",
      "name" : "base-107-disk-0",
      "parent" : "base-131-disk-0@__base__",
      "size" : 34359738368,
      "vmid" : 107,
      "volid" : "rpool1:base-131-disk-0/base-107-disk-0"
   }
107 is a linked clone of 131
 
So it is, however:
Code:
root@pve:~# qm destroy 107 --purge
base volume 'rpool1:base-131-disk-0/base-107-disk-0' is still in use by linked cloned
root@pve:~# qm destroy 107
base volume 'rpool1:base-131-disk-0/base-107-disk-0' is still in use by linked cloned
root@pve:~#
 
You need to check who has base-107-disk-0 as a parent. The pattern should be clear now.
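A sketch of how to check that (adjust the node/storage names; the second command assumes the ZFS dataset backing the storage is also called rpool1):
Code:
# show entries whose parent is base-107-disk-0's base snapshot
pvesh get /nodes/pve/storage/rpool1/content --output-format json-pretty \
    | grep -B 3 'base-107-disk-0@__base__'

# or inspect the ZFS clone origins directly
zfs list -t volume -o name,origin -r rpool1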
 
OK, I've managed to delete them all now by working back from the latest to the earliest (the problem was that one of my serialised names was out of step).

Many thanks for your help and patience!
 
Great! Please mark the thread as SOLVED, so other users can find solutions more quickly. This can be done by editing the thread and selecting the appropriate prefix.
 
