Storage space issues after moving my virtual machine disks

mmolmar

New Member
Jan 15, 2025
I moved my virtual machine disks from one volume to another on the same Proxmox server. After the move I deleted the original disks, which were listed as unused, but the storage space consumed during the migration hasn't been freed up. I've also checked for orphaned disks. Does anyone know what could be causing this?
 
I'm guessing you moved from ZFS or LVM-thin to Ceph? If so, run `fstrim` as root inside the guests (the `discard` option on the virtual drive will need to be enabled first).
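
If that's your situation, a minimal sketch of the two steps (the VMID `100`, the `scsi0` bus and the `local-zfs:vm-100-disk-0` volume name are placeholders for your own setup, and the disk option change generally only takes effect after the VM is powered off and started again):

```bash
# On the Proxmox host: re-declare the existing disk with discard enabled.
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on

# Inside the guest, as root: hand the unused blocks back to the storage.
fstrim -av
```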

You'll also want to install the qemu-guest-agent. Are you familiar with that process?
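
In case it helps, here's roughly what that looks like on a Debian/Ubuntu guest (package and service names differ on other distros, and `100` is again a placeholder VMID):

```bash
# Inside the guest:
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the Proxmox host: enable the agent for the VM. fstrim_cloned_disks
# additionally asks the guest to run fstrim after a disk is moved or cloned.
qm set 100 --agent enabled=1,fstrim_cloned_disks=1
```

The agent setting also needs a stop/start of the VM before it takes effect.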

If that doesn't help, let me know more details about your setup. If my guess is right and you'd like an explainer, I'm happy to go into more detail about why it happened that way.
 
The machines I moved were linked clones of a template, but when I run `lvs`, the template they were linked to no longer appears in the Origin column. Could it be that moving the disks turned them from linked clones into full clones, so they now take up more space?
(Attached: screenshot of the `lvs` output.)
I have moved all the machines except the first one.
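
For reference, this is roughly the check I'm running (`pve` stands in for the volume group name):

```bash
# List the logical volumes with their thin pool and origin; a linked clone's
# disk should show the template's base volume in the Origin column.
lvs -o lv_name,pool_lv,origin,data_percent pve
```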

Many thanks to didier199x and coolaj86 for responding.
 
Yes, that's related to the case I was mentioning - moving them to a different volume will delink them, and can also make them fat.

HOWEVER, in that case, having discard enabled and running `fstrim` will thin them out again. Enabling the qemu-guest-agent also helps, since it lets the host periodically request an fstrim inside the guest.
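
A quick sanity check from inside the guest, before relying on `fstrim`, is to make sure discard is actually plumbed through (the device name is just an example):

```bash
# Non-zero DISC-GRAN / DISC-MAX values mean the virtual disk advertises
# discard support, so fstrim has something to pass the trims to.
lsblk --discard /dev/sda
```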

You can't relink them, but unless your base image is quite large, or the changes you make from the base image are very small (which is the ideal case for a linked clone, for sure), the discard and trim operations will save you significant space.

If `discard` + `fstrim` doesn't make a difference, and you really need the linked-clone savings, you could do some trickery: move the template, create a new linked clone, manually mount both volumes, and use `rsync -avhP /mnt/{old-vm-disk}/ /mnt/{new-vm-disk}/` (or do the copy over ssh with both VMs booted).
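
For the ssh variant, here's a rough sketch of the copy. Run it from inside the new (freshly linked) clone with both VMs booted; the hostname and the exclusion list are assumptions you'd adapt:

```bash
# Pull the old VM's data into the new linked clone, skipping pseudo-
# filesystems and other volatile paths. Stop services (or boot a rescue
# environment) first so you aren't copying over files that are in use.
rsync -avhP --numeric-ids \
  --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/lost+found"} \
  root@old-vm:/ /
```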