[SOLVED] Removing unattached VM Disks

promoxer

Member
Apr 21, 2023
This one feels like a bit of a bug to me.

1. I have 2 ZFS pools, rpool and vpool
2. When I restore a VM, PVE likes to restore to rpool by default, so a VM disk gets created, e.g. vm-100-disk-0 under rpool's VM Disks
3. So I stop it and restore again to vpool, however the disk in rpool still exists
4. Now I have several disks in rpool's VM Disks, but they are not in use nor attached, yet I can't remove them
5. Clicking Remove gives: "Cannot remove image, a guest with VMID '100' exists! You can delete the image from the guest's hardware pane"
6. Any idea how I can remove them?
 
Hi,
do you see the disks as unattached disks in the VM's Hardware tab? If not, try running pvesm scan zfs.
 
No, the disks are not attached. I ran pvesm scan zfs, but they still did not appear in the VMs.
If you are sure the disks are unused and contain no valuable data, you can remove them via zfs destroy <pool-name>/<disk-name>
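As a sketch of that manual cleanup (the pool and dataset names here are hypothetical; substitute your own, and double-check before destroying anything):

```shell
#!/bin/sh
# Hypothetical names for illustration; substitute your own pool and dataset.
pool="rpool"
dataset="vm-100-disk-0"

# Derive the owning VMID from the standard vm-<VMID>-disk-<N> naming scheme,
# so you can double-check it against the guest you expect.
vmid="${dataset#vm-}"
vmid="${vmid%%-*}"
echo "dataset ${pool}/${dataset} belongs to VMID ${vmid}"

# Preview first (-n = dry run, -v = verbose), then destroy for real.
# CAUTION: zfs destroy is irreversible.
# zfs destroy -nv "${pool}/${dataset}"
# zfs destroy "${pool}/${dataset}"
```

The destructive commands are commented out on purpose; run the dry-run variant first and verify the output lists only the dataset you intend to remove.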
 
I see, let me try, thanks. But they should never get orphaned, right?

I only have a few VMs, so it was easy to spot; otherwise quite a lot of storage could be lost.
 
I have also encountered this issue. Will it be resolved? I don't think this ZFS object should remain associated with its parent VM once it has been detached.
 
What would you expect to happen to the detached disk? Note that it makes sense to keep it associated with the VMID, as some actions require detaching the disk (e.g. changing the bus). Once detached, the disk can also be removed if it is no longer required.

There is already the possibility to reassign ownership of a disk (attached or detached) to another VM via the Disk Actions menu, if that is required. It is also possible to remove unused disks associated with a VM when destroying it: just check the Destroy unreferenced disks owned by guest flag in the dialog.
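On the CLI, the two operations just mentioned might look roughly like this (a sketch with hypothetical VMIDs and disk slot; check man qm for the exact syntax on your PVE version):

```shell
#!/bin/sh
# Hypothetical VMIDs and disk slot for illustration.
src_vmid=100
dst_vmid=101
disk="scsi1"

# Reassign a disk to another guest; this should correspond to
# Disk Actions > Reassign Owner in the GUI.
# qm disk move "$src_vmid" "$disk" --target-vmid "$dst_vmid"

# Destroy a guest together with any unreferenced disks it still owns
# (the "Destroy unreferenced disks owned by guest" checkbox in the GUI).
# qm destroy "$src_vmid" --destroy-unreferenced-disks 1

echo "reassign: qm disk move $src_vmid $disk --target-vmid $dst_vmid"
```

The qm invocations are commented out since they need root on a PVE node; the echo just shows the assembled command.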
 
I would expect a detached disk to be available for deletion, it currently isn't. I don't expect to have to delete a VM to be able to delete a disk that I have already detached from it.

Regards,

John
 
As already stated above:
If the disk was detached, it can also be removed if it is no longer required.
It is possible to remove an unattached disk: select the detached disk in the VM's Hardware tab and click Remove.
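If you prefer the CLI, the same removal should be possible by deleting the unusedN config entry (a sketch with a hypothetical VMID; detached disks are listed as unused0, unused1, and so on):

```shell
#!/bin/sh
vmid=100        # hypothetical VMID
slot="unused0"  # detached disks show up as unused0, unused1, ...

# Show the current config to find the unusedN entry:
# qm config "$vmid" --current | grep '^unused'

# Deleting the entry also frees the underlying volume
# (equivalent to Remove on the unused disk in the Hardware tab).
# qm set "$vmid" --delete "$slot"

echo "remove: qm set $vmid --delete $slot"
```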
 
Hi,
I would do that, except that as soon as I detach the disk it is no longer visible in the VM's Hardware tab.
if that happens, it could be that the disk was already deleted by something/someone else or could not be found on the storage for some other reason. Please check the task and system log around the time this happened.
 
Hi Fiona,

It is 100% not the case that anything or anyone else has interfered; I am the only user and all the other hosted VMs are shut down.

If I create a new virtual HD and then detach it, the entry for the virtual HD immediately disappears from the hardware tab of the VM. If I then locate the detached virtual disk and try to delete it through the GUI, I am unable to do so because the interface reports that it is still associated with a VM. The only way I can delete the unattached disk is using zfs destroy.

I have monitored the task and system logs as I have created and then detached and tried to delete the detached HD and there is nothing untoward in either.

If you could explain the correct process for removing a virtual disk through the GUI that would be most appreciated.

Best regards,

John
 
Well, this is not the expected behavior and something is off. Can you please post the storage configuration (cat /etc/pve/storage.cfg) as well as the VM config (qm config <VMID> --current), before and after you detach the disk? Also please post the output of zfs list from before and after detaching the disk (if the disk is indeed located on the zpool).

Also, what is your Proxmox VE version? pveversion -v
 
Hi chris,

I've worked it out, I was being a bit stupid!

Cheers,

John
Thanks for your feedback, glad you could figure out the problem! May I ask you to share what the issue was in the end and how you solved it (you got me curious)? Also, it might help others to find a solution, thx!
 
Hi Chris,

Of course. When I clicked the detach button, I just didn't notice that the disk reappeared in the GUI at the bottom of the list of configured hardware, rather than staying where it was with its status changed.

Cheers,

John
 
I have the same problem. There are disks for the VM that are not visible in the Hardware tab, but they still exist on the storage and take up space. They cannot be deleted from the storage view (there is an error message), and they are not visible from the VM either.
Deleting via SSH is neither fun nor convenient.
 
You might want to try and rescan the storages so found disks get reattached as unused disks via qm disk rescan --vmid <VMID>. See man qm for details.

How did you end up with these disks to begin with? Were they left over from an old, deleted VM?
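The rescan suggested above might look like this on the CLI (hypothetical VMID; any volumes found on the storages that are missing from the config should reappear as unusedN entries):

```shell
#!/bin/sh
vmid=100   # hypothetical VMID

# Scan all configured storages for volumes owned by this VMID and
# re-add any that are missing from the config as unusedN entries:
# qm disk rescan --vmid "$vmid"

# Afterwards, check the config for recovered entries:
# qm config "$vmid" --current | grep '^unused'

echo "rescan: qm disk rescan --vmid $vmid"
```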
 
The problem occurs after a failed replication or migration (not finished because there was no free space on the target disk).
The target node is then left with a broken disk-0.

After we free up space on the target node, the HA migration finishes successfully, but with disk-1.

So the VM ends up with two disks, disk-0 and disk-1, but disk-0 is not visible in the Hardware tab.
A rescan does not help.
 
Similar issue here. Was doing some practice migrations. Aborted the process a couple of times (testing some 10 gig adapters, zfs stuff, etc). I now have two orphaned images `disk-0` and `disk-1`. Just interested to see why it happened in the first place.

[Screenshot: storage content listing the orphaned disk images]

When I migrate back to this server, it assigns disk-2, and I can see the other disks, but it won't let me delete them because they are assigned to a VM.

In the above screenshot, the VM does not exist on this server at the moment.

zfs destroy zfs-nvme/vm-223-disk-0
zfs destroy zfs-nvme/vm-223-disk-1

That does clear them off, but I'd expect an aborted or failed process to clean that up for you. Maybe worth a bug report.

A possible cause: moving from an old Proxmox LVM storage to a new Proxmox server with ZFS. I aborted that twice while testing transfer rates, which seems to line up with how many orphaned images I had.

Proxmox 8.2.2
 
