How to get trim/UNMAP working in this Proxmox setup?

Jago

Member
May 9, 2019
I have a Proxmox 6 host with two ZFS pools: a mirror of 2 NVMe drives, and 4 spinners in a "raid10-like" configuration (a stripe of 2 mirrors).
It runs a single Linux server VM acting as an iSCSI target. The VM has an OS drive, a data drive on the NVMe pool, and a data drive on the spinner pool. All of this VM's drives have the "discard" option enabled in Proxmox. If I write data to the OS drive, I see usage grow in the underlying ZFS volume on the Proxmox host. If I delete data on the VM's OS drive, the space is freed up on the host, not just inside the VM.
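For reference, the discard option can also be set from the Proxmox CLI with qm; the VMID and disk name below are placeholders for whatever your setup uses:

```shell
# Enable discard (and SSD emulation) on an existing SCSI disk of VM 101.
# "local-zfs:vm-101-disk-1" is a placeholder volume name.
qm set 101 --scsi1 local-zfs:vm-101-disk-1,discard=on,ssd=1
```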

Now the tricky part:

The data drives of this VM are exported via iSCSI to a second bare-metal Linux server. The LUNs use the raw block devices directly, with no partition table or filesystem on the target side. The exported drives are formatted as XFS and mounted on the second server. Before I set emulate_tpu=1 on both LUNs, running fstrim on the second server failed with an "operation not supported" error. After enabling it, it looks as if it's working:
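For anyone following along, this is roughly how I enabled it with targetcli (the backstore name "fastlun" is a placeholder for your own):

```shell
# Advertise thin-provisioning UNMAP support on a block backstore,
# then persist the configuration.
targetcli /backstores/block/fastlun set attribute emulate_tpu=1
targetcli saveconfig
```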

fstrim -v /mnt/iscsi-podman-fast
/mnt/iscsi-podman-fast: 255.9 GiB (274743492608 bytes) trimmed

But neither simply deleting the data nor running fstrim actually reduces data usage on the Proxmox host. I can't run fstrim inside the Linux VM hosting the iSCSI target either, since fstrim wants to be run against a mounted filesystem, not a raw block device. Would this work if iSCSI were running on top of a file instead of a block device? Is "iSCSI on top of sparse files" even a thing?
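As a side note, for a raw block device with no filesystem, the rough equivalent of fstrim is blkdiscard from util-linux; unlike fstrim it discards the whole device, so it is only useful on a device you intend to wipe:

```shell
# DESTRUCTIVE: discards every block on the device, not just free space.
# /dev/sdX is a placeholder device name.
blkdiscard /dev/sdX
```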
 
Figured it out and finally got it to work the way I wanted.

Destroyed everything iSCSI-related first, including the virtual disks on the VM running the targets. Then:

1. Created new NVMe-backed and SATA-backed virtual disks, making sure discard=on and ssd=1 were set for both in the disk settings on Proxmox.
2. Created XFS filesystems on both virtual disks inside the VM running the iSCSI targets, and made sure fstab has the "discard" option enabled on both.
3. Created a sparse file on each drive.
4. Configured iSCSI with targetcli to use the sparse files as the backend for the published targets.
5. Re-ran iSCSI discovery on the bare-metal Linux client machine and logged in.
6. Formatted the discovered iSCSI targets as XFS and enabled "discard" in fstab.
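The steps above can be sketched roughly like this; all names, paths, IQNs, and addresses are placeholders for my setup, so adapt them to yours:

```shell
# Inside the VM: create a sparse backing file on the mounted XFS data disk.
truncate -s 256G /mnt/fast/iscsi-fast.img

# Publish it with targetcli as a fileio backstore and attach it as a LUN.
# fileio backstores on an existing file are sparse-aware by default.
targetcli /backstores/fileio create name=fast file_or_dev=/mnt/fast/iscsi-fast.img
targetcli /iscsi create iqn.2019-05.local.lab:fast
targetcli /iscsi/iqn.2019-05.local.lab:fast/tpg1/luns create /backstores/fileio/fast
targetcli saveconfig

# On the bare-metal client: discover the target, log in, format, and
# mount with the discard option so deletes propagate back.
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2019-05.local.lab:fast -p 192.0.2.10 --login
mkfs.xfs /dev/sdX          # the newly attached iSCSI disk (placeholder)
mount -o discard /dev/sdX /mnt/iscsi-podman-fast
```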

Voila! Now the entire chain works: the bare-metal Linux machine writes data to disk and later deletes it; a few seconds later I see the used disk space freed up in the VM running the iSCSI targets, and a few seconds after that the same happens on the Proxmox host's ZFS volume backing that VM. Yay.
 
