Thin provision qcow2 image

Norberto Iannicelli

Good morning everyone. I have a question, maybe someone can help me.
My qcow2 images are not thin provisioned: the images on the NAS (NFS) are counted at the full size of the disk, not at the space actually used by the VM, so I always have to provision more space even though the clients' VMs do not use their full disk size.
Does anyone know how I can adjust this so that only the space actually in use inside the VM is reported? See an example:

root@node01:/mnt/pve/stor01/images/100# ls -alh vm-100-disk-0.qcow2
-rw-r-----+ 1 root root 41G jun 28 08:56 vm-100-disk-0.qcow2
root@node01:/mnt/pve/stor01/images/100# du -h vm-100-disk-0.qcow2
40G vm-100-disk-0.qcow2


But this VM only uses this:
[root@vps ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_vps-lv_root
35G 4,7G 29G 15% /
tmpfs 939M 0 939M 0% /dev/shm
/dev/sda1 477M 127M 325M 29% /boot
 

What does qemu-img info report?
Code:
qemu-img info vm-100-disk-0.qcow2
 
Thanks for the reply, Richard.
Code:
root@node01:/mnt/pve/stor01/images/100# qemu-img info vm-100-disk-0.qcow2
image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 40G (42949672960 bytes)
disk size: 5.0G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
root@node01:/mnt/pve/stor01/images/100#

In this image I zeroed out the free space.
But this other image I did not, see:
Code:
root@node01:/mnt/pve/stor01/images/104# qemu-img info vm-104-disk-0.qcow2
image: vm-104-disk-0.qcow2
file format: qcow2
virtual size: 25G (26843545600 bytes)
disk size: 25G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
root@node01:/mnt/pve/stor01/images/104# du -h vm-104-disk-0.qcow2
25G    vm-104-disk-0.qcow2
root@node01:/mnt/pve/stor01/images/104# ls -alh vm-104-disk-0.qcow2
-rw-r-----+ 1 root root 26G jul  1 17:03 vm-104-disk-0.qcow2
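In case it is useful, the usual way to "reset zeroes" is something along these lines (a sketch; the temporary zero file and the compacted output name are just examples, adapt them to your setup):
Code:
# inside the guest: overwrite the free space with zeros, then delete the file again
dd if=/dev/zero of=/zerofile bs=1M status=progress; rm -f /zerofile; sync
# with the VM shut down, on the node: rewriting the image drops the zeroed clusters
cd /mnt/pve/stor01/images/104
qemu-img convert -O qcow2 vm-104-disk-0.qcow2 vm-104-disk-0.compact.qcow2
qemu-img info vm-104-disk-0.compact.qcow2   # check the new "disk size" before swapping the files
The convert step needs the VM to be powered off.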
 
Thank you very much, Richard.
Is there any way to do this without downtime?

I tested fstrim, but it does not seem to work very well.
 
I believe you have issues with TRIM/Discard because you are working with qcow2 files on NFS.
I'm still not sure how to tell Proxmox to thin provision over NFS correctly.
P.S. If you move your virtual servers' storage to local storage (qcow2 or ZVOL) for a test, and the disks use SCSI/VirtIO SCSI, then "fstrim -av" will work for you.
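For example, something along these lines (a sketch assuming VM 100 with a SCSI disk on the stor01 storage; adjust the VM ID and volume name to your setup):
Code:
# on the Proxmox node: attach the disk with discard enabled (VirtIO SCSI controller recommended)
qm set 100 --scsi0 stor01:100/vm-100-disk-0.qcow2,discard=on
# inside the guest, after a reboot: trim all mounted filesystems and report what was discarded
fstrim -av
Whether the discarded blocks actually shrink the qcow2 file then depends on the storage underneath, which is exactly the open question with NFS.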

So @Richard, how can one thin provision (qcow2) over NFS? Because when you create a disk on NFS-backed storage (in Proxmox: VM > create disk), it is "full size" from the start.
 
No, it's not, the file is sparsely created as you can see here:

Code:
root@proxmox6 /mnt/pve/nfs/images/100 > ls -lh
total 2.2M
-rwxr-xr-x 1 root root 33G Aug  2 13:46 vm-100-disk-0.qcow2

root@proxmox6 /mnt/pve/nfs/images/100 > du vm-100-disk-0.qcow2
2228    vm-100-disk-0.qcow2

root@proxmox6 /mnt/pve/nfs/images/100 > qemu-img info vm-100-disk-0.qcow2
image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 32 GiB (34359738368 bytes)
disk size: 2.18 MiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
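If you want to double-check this on the share, comparing the apparent size with the allocated blocks shows the same picture (a quick sketch with standard coreutils):
Code:
# apparent size (what ls and the guest see) vs. the blocks actually allocated on the share
du -h --apparent-size vm-100-disk-0.qcow2
du -h vm-100-disk-0.qcow2
# stat shows both at once: %s is the apparent size in bytes, %b the 512-byte blocks allocated
stat -c '%s bytes apparent, %b blocks allocated' vm-100-disk-0.qcow2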
 
I'm afraid we are talking about different things.

You are talking about how much the file actually "weighs" on disk, and I'm talking about how the system sees it.
Now imagine you need to copy this file to another storage (with rsync, for example): do you think it will copy 32 GB or 2.18 MB over the network? Spoiler alert: it will copy the whole 32 GB of zeroes.
 
I'm afraid we are talking about different things.
That may be; I'm talking about thin provisioning on NFS, like the OP and all the previous commenters in 2019.

You are talking about how much the file actually "weighs" on disk, and I'm talking about how the system sees it.
Now imagine you need to copy this file to another storage (with rsync, for example): do you think it will copy 32 GB or 2.18 MB over the network? Spoiler alert: it will copy the whole 32 GB of zeroes.
It's a sparse file, of course you will copy the whole file, because sparse files read as zero where there is no data. So every program that reads the data will also read the zeros. Sparse files (thin provisioning) only matter for disk space, not for the data that is read or stored.

Depending on the network, you may have better luck with SSH compression enabled, but in most cases it will slow things down on modern hardware with >1 GbE.

If you rsync the file and do not specify the --sparse option, you will end up with a file that is no longer thin-provisioned and uses all the space.
The best transfer times I got were by creating the sparse file on the destination before rsyncing it. The rsync algorithm then compares both files, which will read zero most of the time, so you only transfer the changed blocks (see the sketch after the timings).

Although, in my experiments with an empty file, these are the (total) runtimes:
  • rsync --sparse with a precreated file took 00:00:59 and created a sparse file
  • rsync --sparse creating a new file took 00:01:24 and created a sparse file
  • plain rsync took 00:03:40 and created a thick file
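A sketch of the variants compared above (the destination path /mnt/backup is just a placeholder):
Code:
# variant 1: precreate an empty sparse file of the same size so rsync has a basis to compare against
truncate -s 32G /mnt/backup/vm-100-disk-0.qcow2
rsync -av --sparse /mnt/pve/nfs/images/100/vm-100-disk-0.qcow2 /mnt/backup/
# variant 2: let rsync create the destination itself, still writing holes instead of zeros
rsync -av --sparse /mnt/pve/nfs/images/100/vm-100-disk-0.qcow2 /mnt/backup/
# without --sparse the destination ends up fully allocated ("thick")
rsync -av /mnt/pve/nfs/images/100/vm-100-disk-0.qcow2 /mnt/backup/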
 
Ok, thank you for the clarification regarding "sparse files".
I think I mixed two different topics into one:
a) TRIM not working via the FreeBSD NFS 4.2 implementation
b) the file being "sparse"
 
That may be the case, yes. Have you tried Linux as a guest?
(I can say that trim over NFS works with Linux on both ends.)
You mean, have I tried Linux as the NFS server (since we are all talking about Proxmox here as the client, and that is Linux/Debian)? I know that Linux <> Linux (client <> server) works just fine.
This issue only occurs with FreeBSD 13.* acting as the NFS v4.2 server.
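For completeness, this is how one can check on the Proxmox node which NFS version a share was actually negotiated with; as far as I know, hole punching for discard needs vers=4.2 (the mount point below is just the one from the earlier example):
Code:
# show all NFS mounts with the options the client actually negotiated; look for vers=4.2
findmnt -t nfs4 -o TARGET,SOURCE,OPTIONS
# or only the share in question
findmnt /mnt/pve/nfs -o OPTIONS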
 
I also meant inside the guest OS: TRIM is used there, not on the host.
 
