Trim virtual drives

Eliott G.

Aug 4, 2016
Hey everyone!

We were moving disks from one storage to another and noticed that, once they arrived on the new storage, the thin-provisioned disks had expanded to their full size.

Before, when we only had a few VMs, we could use the old method to empty the disks (using dd to fill the free space with zeros and deleting the file afterwards), but now we have dozens of huge drives and it takes a long time to zero an entire disk and then convert it.
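For reference, the old method was roughly this (file names and paths are just examples):

# inside the guest: fill the free space with zeros, then delete the file
# (dd stops with "No space left on device", which is expected here)
dd if=/dev/zero of=/zerofile bs=1M
sync
rm /zerofile
# on the host afterwards: rewrite the image so the zeroed clusters are dropped
qemu-img convert -O qcow2 vm-20002-disk-1.qcow2 vm-20002-disk-1-small.qcow2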

We found that the wiki page about this problem was updated two months ago: https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files

So we tried the "Recommended Solution" but couldn't make it work.

We have:
  • A guest running Debian 8.7.1
  • A qcow2 drive
  • A SCSI controller with discard enabled
Then we run fstrim in the guest:

fstrim -av

/: 189,4 GiB (203339558912 bytes) trimmed

And even after a reboot, the VM disk, which should take up ~2 GB at most:

df -h

/dev/sda1 191G 1,5G 180G 1% /
udev 10M 0 10M 0% /dev
tmpfs 1,2G 8,3M 1,2G 1% /run
tmpfs 3,0G 0 3,0G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 3,0G 0 3,0G 0% /sys/fs/cgroup
is almost 5 GB on the storage:

du -h vm-20002-disk-1.qcow2

4,7G vm-20002-disk-1.qcow2
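For completeness, qemu-img info on the host shows the same thing, if that helps:

qemu-img info vm-20002-disk-1.qcow2
# compare "virtual size" (the provisioned 200G) with "disk size" (space actually allocated on the storage)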


I really don't know what we are missing, so any idea is welcome!
Thanks in advance!
 
Hi,

can you send the output of

qm config 20002
 
Thank you for the answer!

Here is the output of qm config 20002:

boot: cdn
bootdisk: scsi0
cores: 2
description: ############
ide2: none,media=cdrom
memory: 6144
name: ###########
net0: e1000=C6:96:11:76:32:64,bridge=vmbr2
numa: 1
onboot: 1
ostype: l26
scsi0: NASVMRBX01:20002/vm-20002-disk-1.qcow2,discard=on,size=200G
scsihw: virtio-scsi-pci
smbios1: uuid=8dd3deec-03b6-40a5-a1ca-d1dba45f05e2
sockets: 1
 
I don't know if this is related, but on my VMs where I do manual trim I have discard=off. discard=on simply means automatic trim and has nothing to do with whether you can trim or not.
 
Discard=on simply means automatic trim
This means that discard commands will be passed on to the storage backend.

What is your storage backend?
 
BTW, if you have a systemd-based distro you can enable a timer-based fstrim by issuing: systemctl enable fstrim.timer

In Debian Jessie you need to do this first:
cp /usr/share/doc/util-linux/examples/fstrim.service /etc/systemd/system
cp /usr/share/doc/util-linux/examples/fstrim.timer /etc/systemd/system
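Afterwards you can check that the timer is actually scheduled with something like:

systemctl start fstrim.timer
systemctl list-timers fstrim.timer
# fstrim.timer should show up with a NEXT run time; it then triggers fstrim.service periodically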
 
Our storage is an NFS share, and even when doing the fstrim manually, it doesn't shrink the virtual drive.
 
We are already using qcow2 for the virtual drives, so it should work with NFS, I guess? So far we meet every requirement.
 
I can see you're using virtio-scsi because of "scsi0".

How big was the qcow2 file before you ran the fstrim command?
 
Trim on NFS does not work.
The protocol does not support it.
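For what it's worth, you can still check in the guest whether the virtual disk advertises discard support at all (this only confirms the controller/disk side is set up, not that the storage backend can actually free the space):

lsblk --discard /dev/sda
# non-zero DISC-GRAN / DISC-MAX values mean the device accepts discard/TRIM requests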
 
Aww, too bad, thanks for the answer :)

Maybe it would be a good idea to list which protocols support trim in the wiki?

https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files

Have a good day!
 
Also, the wiki you refer to needs to be updated. It is missing -o preallocation=metadata in the convert command, which is needed to create qcow2 files the way Proxmox does, so the convert command should be:
qemu-img convert -O qcow2 -o preallocation=metadata image.qcow2_backup image.qcow2

NB: using preallocation precludes the use of -c (compression).
 
We are currently using the command given in the wiki to shrink the disks and are having no problems. Should we worry about that? And is compression incompatible with "preallocation=metadata"?
 
We are currently using the command given in the wiki to shrink the disks and are having no problems. Should we worry about that? And is compression incompatible with "preallocation=metadata"?
preallocation=metadata ensures that you don't over-provision your storage (the full size is reserved on the file system but not actually used).
Compare:
ls -lh vm-109-disk-1.qcow2
4.1G vm-109-disk-1.qcow2
ls -sh vm-109-disk-1.qcow2
2.8G vm-109-disk-1.qcow2
 
If you want to use NFS, only with NFSv4.2 can the discard option trim the qcow2 disk when you delete files.
I was looking for an open source storage appliance with NFSv4.2 and I found OviOS (based on ZFSonLinux).
Now I'm testing OviOS and it seems to reduce the used disk space!

I'm using jumbo frames on 10GbE and the following options on NFS protocol:
NFS export options (on OviOS) --> rw,async,no_root_squash,no_subtree_check
NFS mount options (on PVE) -----> vers=4.2,async,hard,tcp,noatime,rsize=1048576,wsize=1048576
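In case it helps, the matching storage definition in /etc/pve/storage.cfg can pass those mount options through the options field (the storage name, server and export path below are just examples):

nfs: ovios-nfs
        server 10.0.0.10
        export /exports/vmstore
        path /mnt/pve/ovios-nfs
        content images
        options vers=4.2,async,hard,tcp,noatime,rsize=1048576,wsize=1048576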
 
Nice! Thanks for the feedback!
We created our own Ceph cluster for the disk storage and it just works great!
 
