[SOLVED] Trimming VM Storage, cannot get it to work: Linux Mirrored ZFS -> raw disk -> Linux VM

Oct 21, 2020
Hi all.
I know there are many threads on this topic; unfortunately, none have given me the hints I need to resolve my issue.

I am running a Proxmox server with the following:
  1. Two physical SSDs configured as a ZFS mirror, creating a zpool called "zpool2".
  2. The zpool "vmpool2" has both compression and encryption enabled.
  3. A Proxmox storage target configured to use the ZFS dataset "vmpool2/data", with thin storage ticked.
  4. An imported drive, "vm-109-disk-0", which shows as "raw" storage in the Proxmox GUI for this storage target.
  5. "vm-109-disk-0" attached to a Linux VM as "scsi0" with "discard=on", using a "VirtIO SCSI" controller.
  6. "vm-109-disk-0" is 100 GB, of which approximately 20 GB is used inside the VM.
With all this configured, I assumed that running "fstrim -av" both in the VM and on the Proxmox host would cause the drive to show only 20 GB of disk space used; however, the Proxmox GUI still shows 100 GB used.
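For reference, the same numbers can be read directly at the ZFS layer with something like this (dataset path taken from my setup above; Proxmox names zvols vm-<id>-disk-<n> under the configured dataset):

zfs list -o name,volsize,used,refreservation vmpool2/data/vm-109-disk-0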

It's worth noting:
  1. I forgot to flag the storage as "thin" before importing the drive.
  2. As this is ZFS, I cannot use qcow2; it must be raw storage for the VM drive, right?
  3. "discard" was enabled on the imported drive only after it was imported.
I'm hoping someone might have a suggestion as to why this setup does not allow the VM drive to use only 20 GB of disk space, even with all the trim/discard/thin-storage configuration in place.

My best guess at the moment is that "raw" storage on ZFS simply doesn't allow thin storage. Is this true?

Bonus question: ZFS compression is enabled, so even without thin storage this drive only occupies about 10 GB on the pool ("zpool list" shows this), but in the Proxmox GUI the available storage is 100 GB less than the total disk size. I also know from another install that Proxmox does not base its figures on the zpool's free space. How is ZFS compression supposed to help in Proxmox if the compressed size isn't taken into account?
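For what it's worth, the effect of compression can also be read from ZFS itself, something like:

zfs get compressratio,logicalused,used vmpool2

(logicalused is the uncompressed size; used is what it actually occupies on the pool)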


Thanks
Jamie
 
Hi Jamie,

Generally, using ZFS with thin provisioning works as designed: only occupied blocks on the pool count as "allocated" storage.
That means, in general, that you are able to overprovision your storage.
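To illustrate: when creating a zvol by hand, the -s flag makes it sparse, i.e. no refreservation is set, so only written blocks count against the pool. The dataset name here is just an example:

zfs create -s -V 100G vmpool2/data/vm-example-disk-0   # -s = sparse/thin, name is made up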

First of all though: I don't get step 1. What did you do with zpool2? Is it used for anything? What's the connection?

To your "worth noting" points:
1. Well, that seems to be your problem. If it's not thin provisioned, a 100G disk will occupy 100G regardless of how much data in the VM is actually used.
2. This is not true. You can mount your ZFS pool and configure Proxmox to use a directory storage pointing at the zpool's mountpoint (rough sketch below). This is just not the way it's usually done, and you'll have fewer features if you use that method.
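A minimal sketch of that directory approach, assuming the pool is mounted under /vmpool2 and using a made-up storage name:

zfs create -o mountpoint=/vmpool2/dirstore vmpool2/dirstore   # dataset name is an example
pvesm add dir local-zfsdir -path /vmpool2/dirstore -content images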

How did you "import" the drive?
With the Proxmox "Move disk" option?
 
Hi AKA,
Thanks for replying. To answer your questions.

1. I explicitly created a zpool and assigned it in PVE using:

zpool create -f -o ashift=12 vmpool2 mirror /dev/sda /dev/sdb
zfs create -o encryption=on -o keyformat=passphrase vmpool2/data
pvesm add zfspool local-vmpool2 -pool vmpool2/data

The VM disks are then stored to this pool (local-vmpool2).
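Side note for anyone reading later: the "thin provision" checkbox corresponds to the storage's sparse flag, so it can apparently also be toggled afterwards, something like the line below. If I understand correctly, that only affects disks created after the change.

pvesm set local-vmpool2 -sparse 1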

To import, I used:

qm importdisk 109 fromqemu.qcow2 local-vmpool2
qm set 109 --scsi0 local-vmpool2:vm-109-disk-0
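Discard was enabled on the disk only afterwards; if I remember right, it could also have been set as part of the same command, something like:

qm set 109 --scsi0 local-vmpool2:vm-109-disk-0,discard=on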

Cheers
 
raw storage on ZFS can be thin, but it needs to be selected before creating the disk. However, in the Contents view, Proxmox will show the maximum size, not the actual usage.
Use zfs get all vmpool2/data/vm-109-disk-0 to get the actual usage (usedbydataset), and you might be able to change it to thin by setting refreservation to none, IIRC.
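If you don't want to wade through "get all", a narrower query along these lines should show just the relevant properties:

zfs get volsize,usedbydataset,refreservation vmpool2/data/vm-109-disk-0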
 
Hi avw,

Ah right, your comment about refreservation was enough to help. Importing the VM drive(s) from other servers without the thin-provisioning flag set on the storage location in Proxmox meant that refreservation was set to the size of the disk.

(actually, I have no idea whether it was due to the storage config in Proxmox, the import, or the source file)

By running:

zfs set refreservation=none vmpool2/data/vm-109-disk-0

I saw an immediate decrease in disk usage as shown in Proxmox's GUI. Usage is now in line with the disk usage within the VMs, which is what I wanted.

Testing, I can confirm that creating a 10 GB file inside the VM increases the disk usage both in the Proxmox GUI and in what "zfs get all" shows. Deleting the 10 GB file and running "fstrim -av" within the VM shows an immediate decrease in disk usage again at all layers.
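For completeness, the test inside the VM was along these lines (urandom rather than zeros, since zeros would largely compress away on a compressed pool; the file path is arbitrary):

dd if=/dev/urandom of=/root/testfile bs=1M count=10240 status=progress
rm /root/testfile
fstrim -av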

Thanks heaps! That was the subtle flag I was missing.
 