When does "Destroy unreferenced disks" actually make a difference?

choidazzi
Mar 16, 2026

Hi,

I'm trying to understand the behavior of the "Destroy unreferenced disks owned by this guest" option when deleting a VM.

After looking into the source code (QemuServer.pm), I found that disks listed as unused0, unused1, etc. in the VM config are always deleted regardless of whether this checkbox is checked or not, since foreach_volume_full is called with include_unused => 1.
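For readers unfamiliar with how detached disks show up in the config, here is an illustrative excerpt of what an unusedN entry looks like (the VMID and volume names are made up for this example):

```
# /etc/pve/qemu-server/100.conf (illustrative excerpt)
scsi0: local-lvm:vm-100-disk-0,size=32G
unused0: local-lvm:vm-100-disk-1
```

Per the behavior described above, both volumes here would be removed on VM deletion, since unused0 is still referenced in the config.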

From what I understand, the checkbox only matters for disks that exist on storage but are NOT referenced in the VM config at all (e.g., created via pvesm alloc manually).

My questions are:
1. What are the realistic scenarios where VM disks remain on storage as orphaned/unreferenced disks after a VM deletion?
2. Is there any way to intentionally leave VM disks on storage after deleting a VM through normal operations (without manually editing the config file or using pvesm alloc)?
3. Are there known cases such as failed migrations, failed clones, or storage errors that can cause this?

Thanks in advance!
 
Hi @choidazzi , welcome to the forum.

1. What are the realistic scenarios where VM disks remain on storage as orphaned/unreferenced disks after a VM deletion?
One possibility is that someone used qm disk unlink (see man qm). This command can detach a disk from the VM while leaving the underlying volume on storage.

There are likely other situations as well, simply because users occasionally come to the forum asking how to clean up such disks. However, how they ended up in that state often remains unknown. Neither the user nor forum members usually find it worthwhile to go back through logs from days, weeks, or even months earlier to reconstruct the exact sequence of events.

2. Is there any way to intentionally leave VM disks on storage after deleting a VM through normal operations (without manually editing the config file or using pvesm alloc)?
Yes, using qm disk unlink.
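A minimal sketch of that workflow, assuming VMID 100 and disk slot scsi1 (both made up for this example; see man qm for the exact options on your PVE version):

```
# Detach scsi1 from VM 100; the volume stays on storage and is
# re-added to the config as an unusedN entry
qm disk unlink 100 --idlist scsi1

# With --force, the volume would be destroyed outright instead:
# qm disk unlink 100 --idlist scsi1 --force
```

Note that a volume left as unusedN is still referenced by the config, so it is subject to the include_unused behavior discussed above; fully dereferencing it from the config is what puts it into "unreferenced" territory.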
3. Are there known cases such as failed migrations, failed clones, or storage errors that can cause this?
There have been anecdotal reports from users suggesting that such situations can occur. Ideally, they should not happen, but confirming the cause requires detailed documentation and a proper bug report. I do not know whether those cases were ever formally reported or investigated.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 