ZFS TRIM on Proxmox

norsemangrey

Active Member
Feb 8, 2021
Could someone please explain ZFS TRIM on Proxmox? Particularly the difference between setting autotrim=on and using zpool trim, and best practices for use on SSD zpools on a Proxmox server (simple home lab server).
 
The man page for zpool trim [1] provides a good explanation. Basically, TRIM discards storage blocks that the filesystem no longer uses. This improves SSD performance and lifespan, because the drive does not need to erase stale blocks during every write. It also allows thinly provisioned storage to reclaim space that is unused by the guest operating system.

Particularly the difference between setting autotrim=on and using zpool trim
There are two main differences here. The first and most obvious is that autotrim=on lets trim operations run automatically in the background, while zpool trim must be invoked manually. The second is that zpool trim reclaims every free block it finds, while autotrim, as an optimization, waits until a large enough range of blocks has become free before trimming it [2].
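
For reference, the two approaches look like this on the command line (just a sketch; "tank" is a placeholder pool name):

Code:
# let ZFS trim large freed ranges automatically in the background
zpool set autotrim=on tank
# or run a one-off, full trim of all free space on the pool's devices
zpool trim tank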


best practices for use on SSD zpools on a Proxmox server (simple home lab server)
The zpoolprops man page [2] also mentions that autotrim can put significant stress on the underlying storage devices, depending on their quality. This implies that for lower-end (non-enterprise-grade) devices, running zpool trim periodically may be the better option. You could, for example, set up a cron job that runs zpool trim as often as you feel is appropriate for your system, at times of lower activity.
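
As a sketch of that last suggestion (the pool name and schedule are only examples, adjust them to your system), a drop-in cron file could look like this:

Code:
# /etc/cron.d/zpool-trim-example
PATH=/usr/sbin:/usr/bin:/sbin:/bin
# 03:30 every Sunday: manually trim the pool "rpool"
30 3 * * 0 root zpool trim rpool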


[1] https://openzfs.github.io/openzfs-docs/man/8/zpool-trim.8.html
[2] https://openzfs.github.io/openzfs-docs/man/8/zpoolprops.8.html?highlight=autotrim
 

Thank you for the informative answer!
 
Interesting. I was not aware of zpool trim and used fstrim -a instead, but I was not sure if it works, since the ZVOLs are not mounted on the host.
It would trim the root filesystem if you are using ZFS root. But it would not trim the ZVOLs that are the backing store for the VMs' drives.
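
If the goal is to trim those ZVOLs as well, one common approach (a sketch; the VM ID, bus and storage name are assumptions for illustration) is to enable the Discard option on the virtual disk and then run the trim from inside the guest:

Code:
# on the PVE host: re-specify the disk with discard enabled (VM 100, first SCSI disk)
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on
# inside the guest: trim all mounted filesystems and report what was discarded
fstrim -av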
 
How is trim or autotrim used or set in Proxmox, via the GUI or the command line? I have a PVE host with 4x 8 TB in RAIDZ1 and only ONE large CT of 12 TB.
rpool shows 31.99 TB total but only 320 GB free, with 31.66 TB allocated. I wonder where the "missing" ~20 TB went (o.k., ~20% for RAIDZ parity, the OS, metadata, logs, etc., but still...). There are no ZFS snapshots, no vzdumps or anything else that uses up more space than it should.

So I looked into trim and this is all I got:

Code:
:~# zfs get autotrim
bad property list: invalid property 'autotrim'
:~# zfs set autotrim=on rpool
cannot set property for 'rpool': invalid property 'autotrim'
:~# zfs get autotrim rpool
bad property list: invalid property 'autotrim'
:~# zpool get trim rpool
bad property list: invalid property 'trim'
:~# zfs version
zfs-2.2.8-pve1
zfs-kmod-2.2.8-pve1
:~# modinfo zfs | grep version
version:        2.2.8-pve1
srcversion:     571935691D8EEAF8FF853F9
vermagic:       6.8.12-15-pve SMP preempt mod_unload modversions
:~# zpool get all rpool | grep -i trim
rpool  autotrim                       off                            default
:~# zfs set autotrim=on rpool
cannot set property for 'rpool': invalid property 'autotrim'

What am I doing wrong? Was the host initially wrongly configured? How can I get autotrim to run and automatically free some space so the disks won't fill themselves up all the time?

Cheers,
~R.
 
RTFM.
Code:
# autotrim is a zpool property, not a zfs (dataset) property
zpool get autotrim <pool-name>
zpool set autotrim=[on|off] <pool-name>

You can set up a cron job with something like:
Code:
zpool trim <zfs-pool-name>
see: man zpool-trim
 
Are you sure you need autotrim and that the behaviour you see is related to it? Also, there's already a cron job doing a zpool trim for you.
 
Are you sure you need autotrim and that the behaviour you see is related to it? Also, there's already a cron job doing a zpool trim for you.
I am absolutely NOT sure, but I have now done zpool set autotrim=on rpool (cheers @news :°) and hope it will free up some space :)
Where can I see this cron job? It doesn't show up in the normal crontab.
What other issue could be filling up the disk, or why is it so full if trim is run regularly?
 
This is the content of /etc/cron.d/zfsutils-linux
Bash:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# TRIM the first Sunday of every month.
24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi

# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi

As far as I know, trimming, or lack thereof, should not affect the available space of the pool. This is before and after for one of my (non-RAID) pools
Bash:
# before trim:
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bigdata   6.98T  4.20T  2.78T        -         -    34%    60%  1.00x    ONLINE  -
# after trim (pve-storage, Fri Dec 12 10:24:56 2025):
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bigdata   6.98T  4.20T  2.78T        -         -    21%    60%  1.00x    ONLINE  -

I'd recommend you create a new post for your space usage issue as I don't think it's related. If you do, please make sure to share at least this
Bash:
zpool status -v
zfs list -t all -ospace,refreservation
No need to quote, my message is directly above yours.
 
trim doesn't free up space - it just lets the SSD's flash controller know which flash blocks no longer hold data, so it can pre-erase them and have them ready for use when needed.
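
If you just want to confirm that trims are actually happening on a pool, the -t flag is handy (the pool name is an example):

Code:
# show per-vdev TRIM state: untrimmed, trim in progress, or date of the last completed trim
zpool status -t rpool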
 
you can also trim from timer/cron

Code:
    # enable and start the monthly trim timer for every imported pool
    for ZPOOL_NAME_I in $(zpool list -H -o name); do
        systemctl enable --now zfs-trim-monthly@"$ZPOOL_NAME_I".timer
    done
 
Why is this better than simply calling /usr/lib/zfs-linux/trim as seen above?
 
Why is this better than simply calling /usr/lib/zfs-linux/trim as seen above?

/usr/lib/zfs-linux/trim is for cron.
From its code, it looks like it:
  • trims all healthy pools,
  • only if no trim is already running on them,
  • honours the ZFS user property "org.debian:periodic-trim" (auto|on), default=auto,
  • and, on auto, only trims pools that contain an NVMe disk (see the sketch after this list).
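
To see what your own copy of the script actually checks, and whether that property is set on a pool (the pool name is an example), something like this works:

Code:
# inspect how the Debian helper script handles the property
grep -n "periodic-trim" /usr/lib/zfs-linux/trim
# query the user property on the pool's root dataset ("-" means unset, i.e. the auto default)
zfs get org.debian:periodic-trim rpool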


The timer is for systemd.
The systemd service looks like this:

Code:
# cat /usr/lib/systemd/system/zfs-trim@.service
[Unit]
Description=zpool trim on %i
Documentation=man:zpool-trim(8)
Requires=zfs.target
After=zfs.target
ConditionACPower=true
ConditionPathIsDirectory=/sys/module/zfs

[Service]
EnvironmentFile=-/etc/default/zfs
ExecStart=/bin/sh -c '\
if /usr/sbin/zpool status %i | grep -q "(trimming)"; then\
exec /usr/sbin/zpool wait -t trim %i;\
else exec /usr/sbin/zpool trim -w %i; fi'
ExecStop=-/bin/sh -c '/usr/sbin/zpool trim -s %i 2>/dev/null || true'

Check the code and correct me if I'm wrong.
You can judge on your own, but each has its purpose.

If you activate the systemd service, you presumably know you have an NVMe (or other TRIM-capable) drive there and want to trim it on a specific schedule (weekly, monthly, ...).
It looks like this service does not check the health status of the pool; that seems to be the only drawback.
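
To verify which trim timers are actually active after enabling them (the pool name in the second command is an example):

Code:
# list scheduled ZFS trim timers and their next run time
systemctl list-timers 'zfs-trim-*'
# check one specific instance
systemctl status zfs-trim-monthly@rpool.timer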
 
I also noticed that space issue with my Proxmox lab environment. I had never seen it with Proxmox 8, only after upgrading to Proxmox 9. Furthermore, I have 800 GB of total space shown where I should have over 1.5 TB. I've read that reinstalling root from the ISO might fix the issue?

I tried trimming the Proxmox root, zpool trim, and trimming inside the VMs. I searched root for huge files without success, and destroyed and recovered VM disks without improvement. It seems ZFS is allocating zombie space.
 
Please share

Bash:
root@pve:~# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:44:47 with 0 errors on Wed Dec 31 16:45:20 2025
config:

        NAME                                     STATE     READ WRITE CKSUM
        rpool                                    ONLINE       0     0     0
          nvme-CT2000P3PSSD8_2411E89FF323-part3  ONLINE       0     0     0

errors: No known data errors

root@pve:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.81T  1.51T   306G        -         -    42%    83%  1.00x    ONLINE  -

root@pve:~# zfs list -tall -ospace
NAME                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                      248G  1.51T        0B    104K             0B      1.51T
rpool/ROOT                 248G  1.07T        0B     96K             0B      1.07T
rpool/ROOT/pve-1           248G  1.07T        0B   1.07T             0B         0B
rpool/data                 248G   449G        0B     96K             0B       449G
rpool/data/vm-100-disk-0   248G  4.39G        0B   4.39G             0B         0B
rpool/data/vm-101-disk-0   248G  3.99G        0B   3.99G             0B         0B
rpool/data/vm-102-disk-0   248G   283G        0B    283G             0B         0B
rpool/data/vm-103-disk-0   248G  6.67G        0B   6.67G             0B         0B
rpool/data/vm-104-disk-0   248G  3.99G        0B   3.99G             0B         0B
rpool/data/vm-105-disk-0   248G  4.50G        0B   4.50G             0B         0B
rpool/data/vm-107-disk-0   248G   142G        0B    142G             0B         0B
rpool/var-lib-vz           248G   619M        0B    619M             0B         0B


root@pve:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               24G     0   24G   0% /dev
tmpfs             4,7G  5,3M  4,7G   1% /run
rpool/ROOT/pve-1  1,4T  1,1T  249G  82% /
tmpfs              24G   46M   24G   1% /dev/shm
efivarfs          192K  121K   67K  65% /sys/firmware/efi/efivars
tmpfs              24G  8,0K   24G   1% /tmp
tmpfs             5,0M     0  5,0M   0% /run/lock
tmpfs             1,0M     0  1,0M   0% /run/credentials/systemd-journald.service
rpool             249G  128K  249G   1% /rpool
rpool/var-lib-vz  249G  620M  249G   1% /var/lib/vz
rpool/ROOT        249G  128K  249G   1% /rpool/ROOT
rpool/data        249G  128K  249G   1% /rpool/data
/dev/sda2         3,6T  2,6T  879G  75% /mnt/backup
/dev/fuse         128M   28K  128M   1% /etc/pve
tmpfs             1,0M     0  1,0M   0% /run/credentials/getty@tty1.service
tmpfs             4,7G  8,0K  4,7G   1% /run/user/0

EDIT: I found out what happened. I have an external drive mounted at /mnt/backup for backups. At some point the mount must have failed, and the system wrote the automatic backups to my root filesystem instead of the external drive. I now have 1.9 TB free to use :D After unmounting the drive, I could see the zombie files sitting in /mnt/backup on the root filesystem.
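
One way to guard against a failed mount silently redirecting backups onto the root filesystem (a sketch, assuming the backup target is defined as a "dir" storage in /etc/pve/storage.cfg; the storage name is an example) is the is_mountpoint option, which makes PVE refuse to activate the storage unless something is actually mounted at that path:

Code:
# /etc/pve/storage.cfg (excerpt)
dir: backup
        path /mnt/backup
        content backup
        is_mountpoint yes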
 