[SOLVED] Question about qcow2 and thin provisioning

Valerio Pachera

Aug 19, 2016
I successfully tested thin provisioning on lvm-thin.
I converted a lv into a qcow2 file.

Code:
create full clone of drive scsi0 (vm:vm-107-disk-1)
Formatting '/mnt/pve/nas-ufficio/images/107/vm-107-disk-1.qcow2', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16

The first thing I find strange is that the file size equals the virtual size (like a sparse, non-allocated raw file).
Do you know why?

Code:
ls -lh
-rw-r----- 1 root root 11G ott 21 18:15 vm-107-disk-1.qcow2

du -sh vm-107-disk-1.qcow2
1,1G    vm-107-disk-1.qcow2

qemu-img info vm-107-disk-1.qcow2
image: vm-107-disk-1.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.1G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

What I'm trying to achieve is to shrink the qcow2 file after data has been removed.
I expect that to work because qcow2 supports discard.

Guest
dd if=/dev/urandom of=c bs=1M count=1024

Host
du -sh vm-107-disk-1.qcow2
2,1G vm-107-disk-1.qcow2

Guest
rm c
fstrim /

Host
du -sh vm-107-disk-1.qcow2
2,1G vm-107-disk-1.qcow2

As you can see, qcow2 doesn't shrink.
The storage is nfs.

I probably misunderstood something about thin-provisioning and qcow2.
What do you think about it?
 
The first thing I find strange is the file size that is equal to the virtual size (like a non allocated raw file).
the apparent file size is only due to metadata preallocation (preallocation=metadata); as you can see, the real size (du output) is smaller
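A quick way to see the same effect with a plain sparse file (nothing qcow2-specific here, and the path and size are just examples): `ls` reports the apparent size, while `du` reports only the blocks actually allocated.

```shell
# Create a 1 GiB sparse file: no data blocks are allocated yet.
truncate -s 1G /tmp/sparse-demo.img

# ls shows the apparent size (1.0G); du shows what is really on disk (~0).
ls -lh /tmp/sparse-demo.img
du -sh /tmp/sparse-demo.img

rm /tmp/sparse-demo.img
```

This is the same ls-vs-du mismatch seen above with the qcow2 file.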

As you can see, qcow2 doesn't shrink.
did you check the "discard" checkbox for this disk? (also, this works only for virtio-blk and virtio-scsi)
 
(also this works only for virtio-blk and virtio-scsi)
Actually you'll have to use virtio-scsi as virtio-blk doesn't currently support this.
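For reference, with the discard checkbox enabled and the VirtIO SCSI controller selected, the VM config ends up with lines roughly like these (VMID, storage name, and size here are placeholders, not taken from the setup above):

```
scsihw: virtio-scsi-pci
scsi0: local:107/vm-107-disk-1.qcow2,discard=on,size=10G
```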
 
The storage is nfs.
This is the key!
I moved the disk image to local storage (/var/lib/vz) and shrink/discard works right away.
I don't know why, but it doesn't work on NFS storage.

If I create or convert a qcow2 file by qemu-img, the size shown is this:
Code:
qemu-img create -f qcow2 test.qcow2 1G
ls -lh test.qcow2
-rw-r--r-- 1 root root 193K ott 23 16:35 test.qcow2
So I wonder a bit about the metadata mentioned by dcsapak.

Actually you'll have to use virtio-scsi as virtio-blk doesn't currently support this.
Sorry, I forgot to mention I was using virtio-scsi and discard was checked.
 
I find this confusing and I'm hoping someone can expand on the topic a bit. I too am using NFS for storage, on a Solaris server running ZFS with refreservation=none (thin provisioned), with qcow2 drives. Based on what I read here, NFS does support snapshots of the qcow2 format. I'm using VirtIO SCSI as the controller, with a SCSI device and Discard enabled as described here. I've tried running trim on both Windows and Linux VMs with no improvement. My disks are up to twice the size of the provisioned space. The only solution that does work is the manual process described here. I must be overlooking something, but for the life of me I can't find it.
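For anyone searching: the manual process referred to above is, as far as I understand it, the usual offline compaction of a qcow2 image. Zero out the free space inside the guest, shut the VM down, then re-copy the image so that zeroed/unallocated clusters are dropped. The paths and image names below are examples only:

```
# Inside the guest: fill free space with zeros, then delete the filler
# (fstrim, or sdelete -z on Windows, achieves the same).
dd if=/dev/zero of=/zerofill bs=1M; rm /zerofill

# On the host, with the VM shut down: re-copy the image.
# qemu-img convert does not copy zero/unallocated clusters.
qemu-img convert -O qcow2 vm-102-disk-1.qcow2 vm-102-disk-1-small.qcow2
mv vm-102-disk-1-small.qcow2 vm-102-disk-1.qcow2
```

Note this needs enough free space on the storage for the second copy while the conversion runs.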

Conf file content:
Code:
agent: 1
balloon: 512
boot: cdn
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 2048
name: Debian1
net0: virtio=E6:5C:5D:7F:66:1E,bridge=vmbr0
numa: 0
ostype: l26
parent: PreUpdate
scsi0: VM-Svr-HDs:102/vm-102-disk-1.qcow2,discard=on,size=15G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=9749d47b-bd19-4640-a28d-b9b0a2c125e5
sockets: 1
startup: order=1

qemu-img info:
Code:
file format: qcow2
virtual size: 15G (16106127360 bytes)
disk size: 18G
cluster_size: 65536
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
4         Preupdate                 0 2019-09-14 14:57:14   00:00:00.000
5         PreUpdate                 0 2019-10-01 07:03:21   00:00:00.000
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
 
Hi FreeAgent,

NFS up to v4.1 does not support trim, so you have to shrink manually.
Only with NFSv4.2 can the discard option trim the qcow2 disk when you delete files.

I was looking for an open-source storage appliance with NFSv4.2 and I found OviOS (based on ZFSonLinux).
OviOS seems to have problems with two of the bonding modes: with balance-tlb and balance-alb, disconnecting a NIC causes communication to be lost.

Now I use openSUSE 15.1 as storage software, with NFSv4.2 support and all bonding modes working correctly.

I'm using jumbo frames on 10GbE and the following NFS options:
NFS export options (on SAN) --> rw,async,no_root_squash,no_subtree_check
NFS mount options (on PVE) --> vers=4.2,async,hard,tcp,noatime,rsize=524288,wsize=524288

I hope this can help you.
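To confirm which NFS version a mount actually negotiated (the server may be v4.2-capable while the client still mounts an older version), something like this should work on the PVE host; the exact output depends on your mounts:

```
# Show mount options, including the negotiated vers=, for each NFS mount
nfsstat -m

# Or read it straight from the kernel's mount table
grep nfs /proc/mounts
```

Look for vers=4.2 in the output; anything lower means discard will not pass through to the backing file.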
 
Only with NFSv4.2 the discard option can trim the qcow2 disk when you delete files.
Hello

So you are saying that TRIM/discard is working for you in the NFS/qcow2 situation (openSUSE with NFSv4.2)?
 
