Storage Thin Provisioning

zhoid · Member · Sep 4, 2019
Hi,

We have a Proxmox cluster setup connected to a SAN.

We have chosen to use the qcow2 disk format for our VM workloads to support thin provisioning.


I have 48 Linux VMs: 46 are exactly the same, running the same OS and application and using roughly 6 GB each including the OS; one 150 GB Linux VM using roughly 20 GB including the OS; and one 50 GB VM using roughly 10 GB including the OS.

When looking at the storage summary on my NFS mount I am using 1.08 TB, which makes sense taking into consideration 20 GB × 46 + 150 GB + 50 GB, but I know I am actually using much less than that.

How come the full allocated disk space is being used/allocated?

Thanks

Zaid
 
Maybe your SAN's filesystem doesn't support the hole punching needed for qcow2 and thin provisioning?
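
You could test that directly on the NFS mount. A minimal sketch, using the mount path from your setup and a made-up file name; note that hole punching over NFS needs v4.2 server support:

# create a 1 GiB file that allocates no blocks
truncate -s 1G /mnt/pve/nfsproxmox_customers02/sparse-test.img
# compare apparent size vs blocks actually allocated
du -h --apparent-size /mnt/pve/nfsproxmox_customers02/sparse-test.img
du -h /mnt/pve/nfsproxmox_customers02/sparse-test.img
# try to punch a hole; this fails if the filesystem/protocol lacks support
fallocate -p -o 0 -l 1M /mnt/pve/nfsproxmox_customers02/sparse-test.img
rm /mnt/pve/nfsproxmox_customers02/sparse-test.img

If the second du already reports about 1 GB, the mount is not preserving sparseness, and file-level thin provisioning can't work regardless of what the SAN does underneath.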
 
> Maybe your SAN's filesystem doesn't support the hole punching needed for qcow2 and thin provisioning?

The SAN does support thin provisioning.

Some qcow2 files show the full allocation:

root@pve-215:/mnt/pve/nfsproxmox_customers02/images/943# qemu-img info vm-943-disk-0.qcow2
image: vm-943-disk-0.qcow2
file format: qcow2
virtual size: 20 GiB (21474836480 bytes)
disk size: 20 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false

A different mount point, a different VM:

root@pve-215:/mnt/pve/pvs-fs-06/images/306# qemu-img info vm-306-disk-0.qcow2
image: vm-306-disk-0.qcow2
file format: qcow2
virtual size: 50 GiB (53687091200 bytes)
disk size: 2.06 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false

I even migrated VMID 943 to pvs-fs-06; the disk size still shows 20 GB even though it's only using 6 GB.
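
As a cross-check: qemu-img takes "disk size" from the block count the filesystem reports as allocated, so du on the host should agree with it. For example, in the image directory:

# apparent file size vs blocks actually allocated
du -h --apparent-size vm-943-disk-0.qcow2
du -h vm-943-disk-0.qcow2

If plain du also reports about 20 GB, the file really is fully allocated on the NFS server, not just misreported.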

Tried both NFSv3 and NFSv4; it does not make a difference. Basically, some VMs' disk space is being thin provisioned and others' is not.
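
One way to compact an image that has already become fully allocated is to rewrite it offline: qemu-img convert skips clusters that read as zero. A sketch, with the VM shut down and the output file name made up; this only reclaims space that is actually zeroed inside the guest:

# rewrite the image; zero clusters are not allocated in the copy
qemu-img convert -p -O qcow2 vm-943-disk-0.qcow2 vm-943-disk-0-compact.qcow2
qemu-img info vm-943-disk-0-compact.qcow2
# if the numbers look right, replace the original (keep a backup first)
mv vm-943-disk-0-compact.qcow2 vm-943-disk-0.qcow2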
 
Did you try running fstrim -a inside VM 943 to force freeing up space with a manual trim?
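
For the trim to actually shrink the qcow2 file, the discards also have to reach the image: the disk needs discard enabled in its Proxmox drive options. A sketch, assuming the disk is attached as scsi0 and the storage ID matches your mount path; adjust to your setup (this can also be set in the GUI disk options):

# on the host: enable discard pass-through for the disk
qm set 943 --scsi0 nfsproxmox_customers02:943/vm-943-disk-0.qcow2,discard=on
# inside the guest, once the change is active (e.g. after a restart):
fstrim -av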