I've been struggling to get discard/TRIM to work on my VMs, and I'm out of ideas, so I'm looking for some help.
Basic facts about my setup:
- Synology NAS sharing a volume which is mounted as NFS storage in Proxmox
- A VM with a 100GB qcow2 disk that resides on this NFS volume. The guest filesystem is only about 50% used, but the qcow2 file still consumes the full 100GB on the NAS
- The VM's SCSI controller is set to "VirtIO SCSI single"
- The qcow2 disk has the "discard" option checked (config excerpt below)
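For reference, here's roughly the relevant part of the VM config (the VMID 100 and the storage name "nas-nfs" are placeholders for my actual values):

```
# /etc/pve/qemu-server/100.conf (excerpt; VMID and storage name are placeholders)
scsihw: virtio-scsi-single
scsi0: nas-nfs:100/vm-100-disk-0.qcow2,discard=on,size=100G
```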
According to the Proxmox wiki, when the guest runs fstrim, it should free up the unused space on the backing storage. That doesn't seem to happen for me: fstrim in the guest reports that it trimmed the expected amount of space, but the qcow2 file is still 100GB.
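In case I'm measuring this wrong, here's what I'm checking (the image path is just an example of where it lives on the NFS mount):

```
# inside the guest
lsblk --discard     # DISC-GRAN / DISC-MAX are non-zero, so discard is advertised
fstrim -av          # reports ~50 GiB trimmed on /

# on the PVE host, looking at the image on the NFS mount
qemu-img info /mnt/pve/nas-nfs/images/100/vm-100-disk-0.qcow2
#   virtual size: 100 GiB
#   disk size:    100 GiB   <- I expected this to shrink after the trim
du -h /mnt/pve/nas-nfs/images/100/vm-100-disk-0.qcow2
```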
I did some searching and found that Trim/Deallocate commands are only supported on NFS 4.2. Synology's GUI only lets you configure up to NFS 4.1, even though the underlying kernel and nfsd appear to support 4.2. I manually edited the start script to enable 4.2 and verified on the PVE host that the share is now mounted as NFS v4.2, but there's still no change in behavior.
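This is how I checked the mount version on the PVE host (the server IP and export path are placeholders):

```
# on the PVE host
nfsstat -m
# /mnt/pve/nas-nfs from 192.168.1.10:/volume1/proxmox
#  Flags: rw,relatime,vers=4.2,rsize=131072,wsize=131072,...
findmnt -t nfs4 -o TARGET,SOURCE,OPTIONS
```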
Is what I'm doing even possible, or do I have to move to block storage like iSCSI to get proper thin provisioning?