qcow2 + snapshots over NFS

tlaramie

New Member
Nov 12, 2025
I'm using shared storage backed by NFS with VMs using qcow2, and I'm hitting a wall trying to track the size of a given snapshot. Using qemu-img I can see the snapshots, but the VM_SIZE values all show as 0B. I can definitely see the size of the disk growing from my tests (writing multiple 1GB files from /dev/urandom). Am I missing something, or is this just the way it is with qcow2 + NFS?
As a bonus question, once a snapshot is deleted I don't see the space released. For example, in my tests above, after creating a snapshot, writing 10GB of files and then rolling back, the VM disk size remains the same.
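For reference, this is roughly how I'm checking; the storage name and VM ID are placeholders:

```
# List the internal snapshots of the disk image on the NFS storage
# (adjust storage name and VM ID to your setup)
qemu-img snapshot -l /mnt/pve/<storage>/images/<vmid>/vm-<vmid>-disk-0.qcow2

# Meanwhile the image file itself keeps growing:
ls -lhs /mnt/pve/<storage>/images/<vmid>/vm-<vmid>-disk-0.qcow2
```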
 
Hi,
VM_SIZE is the size of the VM state/RAM included in the snapshot. In Proxmox VE, the state/RAM is saved to a dedicated volume, so that value will always be 0B in the qcow2-internal snapshot. I don't know a command to query the exact size of a qcow2 snapshot, but maybe somebody else does. It's not a cheap operation, as all clusters in the image would need to be iterated AFAIK, and it's not a static value, because it will grow the more new data you write after taking the snapshot.
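The closest readily available numbers are whole-image totals, not per-snapshot values; a minimal sketch (path is a placeholder):

```
IMG=/mnt/pve/<storage>/images/<vmid>/vm-<vmid>-disk-0.qcow2   # placeholder, adjust

# "disk size" is the space actually allocated on the share (all snapshots plus
# active data), "virtual size" is what the guest sees; for qcow2 the internal
# snapshot list is printed as well:
qemu-img info "$IMG"

# With the VM shut down, this additionally reports allocated clusters and the image end offset:
qemu-img check "$IMG"
```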

There's also the snapshot-as-volume-chain technology preview feature, where each snapshot has a separate volume associated with it. There, one can more easily check the space usage. But one needs to look at the image on top of the snapshot in the chain, which contains the delta. And again, it will grow the more new data is written afterwards.
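A rough sketch of how one could look at such a chain (file names are only illustrative, the actual snapshot volume names will differ):

```
# Print the whole backing chain, topmost (active) image first; each entry's
# "disk size" is the space that layer actually occupies on the storage:
qemu-img info --backing-chain /mnt/pve/<storage>/images/<vmid>/vm-<vmid>-disk-0.qcow2

# Or from the filesystem side, sizes of all images belonging to the VM:
du -h /mnt/pve/<storage>/images/<vmid>/
```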

Regarding the bonus question: QEMU should be smart enough to re-use the space next time it's needed. If the image does grow too large, see: https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files
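If you want to check whether freed clusters really are re-used, one way is to watch the allocated size of the image on the NFS host while the guest keeps writing after the snapshot was deleted; if re-use works, it should plateau instead of growing further. A minimal sketch (path is a placeholder):

```
IMG=/mnt/pve/<storage>/images/<vmid>/vm-<vmid>-disk-0.qcow2   # placeholder, adjust

# First column of `ls -s` is the space actually allocated, the usual size column
# is only the apparent size; refresh every 10 seconds while the guest writes:
watch -n 10 ls -lhs "$IMG"
```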
 
@fiona Thanks for the reply. I ran some more tests and it looks like, as it stands, the space isn't released until the qcow2 disk is either moved between storage backends or converted with qemu-img.
Environment:
- Running on Proxmox VE 9.1
- VM with a 100G thin-provisioned qcow2 with discard=on (config check sketched after this list)
- 2 NFS mounts connected over NFS 4.2 backed by NetApp.
- When the snapshot was taken, the qcow2 was 16GB
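How the relevant bits are configured (VM ID and storage names are placeholders):

```
# The disk line should show discard=on (the key depends on the bus, e.g. scsi0):
qm config <vmid> | grep -E '^(scsi|virtio|sata|ide)[0-9]'

# The two NFS storages as defined on the PVE side:
grep -A4 '^nfs:' /etc/pve/storage.cfg
```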

Test Method:
- created a TON of 10GB files using /dev/urandom, deleting them and then re-creating them until the qcow2 file had grown to 85GB (roughly the loop sketched below).
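What I ran inside the guest, roughly (file names and paths are arbitrary):

```
# Write 5 x 10GB of incompressible data, delete it, then run the loop again:
for i in $(seq 1 5); do
    dd if=/dev/urandom of=/root/junk-$i.bin bs=1M count=10240 status=progress
done
rm -f /root/junk-*.bin
```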

Initial State:
- power down the VM, roll back to the snapshot, delete the snapshot, power on the VM (qm commands sketched below).
- qcow2 disk is still 85GB
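In qm terms, roughly (VM ID and snapshot name are examples):

```
qm shutdown 100
qm rollback 100 pre-test
qm delsnapshot 100 pre-test
qm start 100
# the qcow2 on the NFS share is still 85GB at this point
```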

Method 1: fstrim -av as root from the guest OS.
Outcome: no change.
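What that looked like (path is a placeholder):

```
# Inside the guest (the disk has discard=on):
fstrim -av

# On the PVE host, the allocated size of the image afterwards was unchanged:
ls -lhs /mnt/pve/<storage>/images/<vmid>/vm-<vmid>-disk-0.qcow2
```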

Method 2: storage migrate
Outcome: qcow2 shrank to 16GB
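Roughly what I ran (storage names and the disk key are examples; on current PVE the subcommand is `qm disk move`, older releases call it `qm move-disk`):

```
# Move the disk to the second NFS storage and back, deleting the source each time:
qm disk move 100 scsi0 nfs-b --delete 1
qm disk move 100 scsi0 nfs-a --delete 1
```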

Method 3: qemu-img conversion
Outcome: qcow2 shrank to 16GB
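Roughly the offline conversion, with the VM stopped (paths and names are examples):

```
cd /mnt/pve/<storage>/images/100
qemu-img convert -O qcow2 vm-100-disk-0.qcow2 vm-100-disk-0.compact.qcow2
qemu-img info vm-100-disk-0.compact.qcow2      # sanity check before swapping files
mv vm-100-disk-0.qcow2 vm-100-disk-0.qcow2.bak
mv vm-100-disk-0.compact.qcow2 vm-100-disk-0.qcow2
```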

The concern I have is that, over time, space could get consumed without being tracked. There may also be performance implications for both the VM and the backend storage when running on top of snapshots.