Removing Disk Images not returning space

Klas

New Member
Sep 2, 2014
Hello,

I've noticed that when removing disks from the web GUI or with rm, those files are still considered to be in use and therefore keep taking up space until a reboot of, I assume, every node in the cluster.
We are using NFS shares.

Is this intended?

Also, is there some nicer way to "clean up" than rebooting or manually clearing inodes?
Restarting all kvm processes should work as well, but that is also quite untidy, so to speak.
 
Hello Klas,

I've noticed that when removing disks from the web GUI or with rm, those files are still considered to be in use and therefore keep taking up space until a reboot of, I assume, every node in the cluster.

Not really. Note that - in order to avoid losing data by mistake - completely removing a disk takes two steps:

1. Select the disk and "Remove" => the disk remains in the storage, but is no longer connected to the VM; it is shown as "unused" in the GUI

2. Select the "Unused Disk" and "Remove" => the warning

Are you sure you want to remove entry 'Unused Disk 0'? This will permanently erase all image data.

will be prompted; after confirming, the virtual disk file is removed from storage, and free space increases by the amount the virtual disk actually occupied (usually less than the virtual disk size!)

Kind regards

Mr.Holmes
 
Sorry, that's not a relevant answer.
Removing the disks from usage is step one, but removing them altogether is what I am speaking of.

The problem is the classic inode issue: when you remove a file while something still has it open, the filesystem retains the data until the last handle is closed.

So, if I remove an unused disk, that space isn't reclaimed, since the NFS machine considers it to still be in use.
The lost disk space can only be regained by manually clearing the open inode references on the NFS server or by rebooting it.
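The effect is easy to reproduce locally, NFS aside. A minimal sketch, where a scratch file stands in for a disk image and the open descriptor stands in for a running kvm process:

```shell
#!/bin/sh
# Sketch of the deleted-but-open inode behaviour.
tmp=$(mktemp)
exec 3<"$tmp"                 # keep a handle open, like a running kvm process
rm "$tmp"                     # the directory entry is gone...
[ -e "$tmp" ] || echo "name removed"
# ...but the inode and its data blocks stay allocated until the
# last handle closes, which is why df shows no freed space.
exec 3<&-                     # closing the handle finally releases the inode
```

On Linux, `lsof +L1` lists such deleted-but-still-open files, which is a handy way to confirm what is pinning the space.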

To work around this, you can overwrite the VM file with nothing at all (echo > vmid...) before removing the unused disk, and then remove it, either through the web GUI or with rm.

Basically, the nicest thing would be if it first wrote something to the file and then removed it.
This would remove the inode issue and actually remove the data without any issues.
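The workaround above can be sketched as follows; a scratch file stands in for the real image, and the commented path layout is hypothetical, so adapt it to your storage:

```shell
#!/bin/sh
# Sketch of the workaround: drop the file's blocks before unlinking, so any
# process still holding the inode pins zero bytes of storage. A real path
# might look like /mnt/pve/<storage>/images/<vmid>/vm-<vmid>-disk-1.qcow2
# (hypothetical; check your own storage configuration).
disk=$(mktemp)
dd if=/dev/zero of="$disk" bs=1M count=4 status=none  # pretend it holds data
: > "$disk"               # truncate to zero bytes; "echo > file" leaves one newline
[ "$(wc -c < "$disk")" -eq 0 ] && echo "blocks released"
rm "$disk"                # now safe: a lingering open inode holds no data
```

Note that `: > file` truncates to exactly zero bytes, whereas `echo > file` leaves a single newline behind; either way, the bulk of the space is released even if the inode lingers.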
 
Seems to be a problem in your file system - Proxmox no longer attaches the released virtual disk after removing it from the "Unused" state.

I tried it out with an NFS storage formatted as ext4 - both "df -h" on the NFS fileserver and "Summary" in Proxmox increased the free space by the amount occupied by the released virtual disk.
 
Hmm, strange.
I've tried it with three separate NFS storage nodes.
And the behaviour is seen on all of them.

Oh well.
 
Hello Klas

After thinking about it I think I know what happened:

In a machine without hotplug you removed the disk but did not stop (really switch off, not just a reboot) the machine.

I would never try to remove a disk from a running machine - anyway, if you want to do it, set "Hotplug" to "Yes" in the machine's Options tab.

Kind regards

Mr.Holmes
 
Actually, no.
The first time I noticed the behaviour was when I moved disks to a different storage and then removed the old files, though.

But I replicated the error later on without that.
 