Particularly the difference between setting autotrim=on and using zpool trim, and best practices for use on SSD zpools on a Proxmox server (simple home lab server).

The man page for zpool trim [1] provides a good explanation. Basically TRIM is used to discard unused storage blocks in the filesystem. This improves SSD performance and lifespan, as the drive does not need to continually erase old blocks on every write. It also allows thinly provisioned storage to reclaim space that is unused by the guest operating system.
There are two main differences here. The first and most obvious is that autotrim=on allows trim operations to run automatically in the background, while zpool trim must be called manually. The second is that zpool trim reclaims every free block it finds, while the autotrim feature waits until a large enough range of blocks has become free before it trims, for the sake of optimization [2].
The zpoolprops man page [2] also mentions that autotrim can put significant stress on the underlying storage devices, depending on their quality. This implies that for lower-end (non-enterprise-grade) devices, running zpool trim periodically could be the better option. You could, for example, set up a cronjob to run zpool trim as often as you feel is appropriate for your system, at times of lower activity.
[1] https://openzfs.github.io/openzfs-docs/man/8/zpool-trim.8.html
[2] https://openzfs.github.io/openzfs-docs/man/8/zpoolprops.8.html?highlight=autotrim
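To illustrate the cronjob idea from above: an entry along these lines in /etc/cron.d/ would run a manual trim once a week at a quiet hour (the file name, schedule, and the pool name rpool are placeholders to adapt):

# /etc/cron.d/zpool-trim (hypothetical example)
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Trim the pool "rpool" every Sunday at 03:30
30 3 * * 0 root zpool trim rpool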
Interesting. I was not aware of zpool trim; I used fstrim -a instead, but I was not sure if it works, since the ZVOLs are not mounted on the host.

It would trim the root filesystem, if you are using ZFS root. But it would not trim the ZVOLs that are the backing store for the VMs' drives.
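Note that fstrim -a on the host only reaches mounted filesystems. For guest filesystems on ZVOLs, one approach (a sketch; the VM ID, storage name, and volume name here are assumptions) is to enable the discard option on the VM's disks, so that fstrim inside the guest is passed down to the ZVOL:

# On the Proxmox host: re-attach VM 100's disk with discard enabled
# (ssd=1 additionally makes the guest see the disk as an SSD)
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on,ssd=1
# Afterwards, inside the guest:
fstrim -av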
autotrim is a pool property, not a dataset property, so it is read and set with zpool rather than zfs:

# zpool get/set autotrim
zpool get autotrim
zpool set autotrim=[on,off]
zpool trim <zfs-pool-name>
man zpool trim

I am absolutely NOT sure, but I now did zpool set autotrim=on rpool (cheers @news :°) and hope it will free up some space.

Are you sure you need autotrim and that the behaviour you see is related to it? Also there's already a cronjob doing a ZFS trim for you:

/etc/cron.d/zfsutils-linux:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# TRIM the first Sunday of every month.
24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi
# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi
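Whether the trim comes from this cronjob, autotrim, or a manual run, you can check that it actually happened: zpool status -t shows the per-vdev TRIM state (rpool stands in for your pool name):

# Show per-vdev TRIM progress and completion time
zpool status -t rpool
# Start a manual trim and block until it finishes
zpool trim -w rpool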
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
bigdata 6.98T 4.20T 2.78T - - 34% 60% 1.00x ONLINE -
pve-storage: Fri Dec 12 10:24:56 2025
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
bigdata 6.98T 4.20T 2.78T - - 21% 60% 1.00x ONLINE -
zpool status -v
zfs list -t all -o space,refreservation
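Those two commands are useful because snapshots and refreservations are the usual hiding places for "missing" space on ZVOL-backed pools. To narrow it down further (a sketch, with rpool as the example pool):

# Space reserved by thick-provisioned ZVOLs shows up as USEDREFRESERV
zfs list -t volume -o name,used,usedbyrefreservation,refreservation -r rpool
# Any leftover snapshots
zfs list -t snapshot -r rpool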
Why is this better than simply calling /usr/lib/zfs-linux/trim as seen above?
# cat /usr/lib/systemd/system/zfs-trim@.service
[Unit]
Description=zpool trim on %i
Documentation=man:zpool-trim(8)
Requires=zfs.target
After=zfs.target
ConditionACPower=true
ConditionPathIsDirectory=/sys/module/zfs
[Service]
EnvironmentFile=-/etc/default/zfs
ExecStart=/bin/sh -c '\
if /usr/sbin/zpool status %i | grep -q "(trimming)"; then\
exec /usr/sbin/zpool wait -t trim %i;\
else exec /usr/sbin/zpool trim -w %i; fi'
ExecStop=-/bin/sh -c '/usr/sbin/zpool trim -s %i 2>/dev/null || true'
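Compared with the plain cron script, this unit only runs on AC power (ConditionACPower=true), waits for an already-running trim instead of starting a second one, and cancels a trim cleanly when stopped. It has no schedule of its own, though; a matching timer has to drive it. A minimal sketch, assuming the service is installed as zfs-trim@.service (check whether your packaging already ships a timer before adding your own):

# /etc/systemd/system/zfs-trim@.timer (hypothetical)
[Unit]
Description=Monthly zpool trim on %i

[Timer]
OnCalendar=monthly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

Enable it per pool, e.g. systemctl enable --now zfs-trim@rpool.timer.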
I also noticed that issue with my Proxmox lab environment. I had never seen that with Proxmox 8, only after upgrading to Proxmox 9. Furthermore, I have 800 GB total space shown, where I should have over 1.5 TB. I've read that reinstalling root from the ISO might fix the issue?

How is trim or autotrim used or set in Proxmox, GUI or command line? I have a PVE host with 4x8TB RAIDZ1 and only ONE large CT of 12 TB.
rpool has 31.99 TB but only 320 GB free, 31.66 TB allocated. I wonder where the "missing" 20 TB went (o.k., ~20% for the RAID blocks, the OS, metadata, logs, etc., but still...). There are no ZFS snapshots, no vzdumps or anything else that uses up more space than it should.
So I looked into trim and only got this:
:~# zfs get autotrim
bad property list: invalid property 'autotrim'
:~# zfs set autotrim=on rpool
cannot set property for 'rpool': invalid property 'autotrim'
:~# zfs get autotrim rpool
bad property list: invalid property 'autotrim'
:~# zpool get trim rpool
bad property list: invalid property 'trim'
:~# zfs version
zfs-2.2.8-pve1
zfs-kmod-2.2.8-pve1
:~# modinfo zfs | grep version
version:        2.2.8-pve1
srcversion:     571935691D8EEAF8FF853F9
vermagic:       6.8.12-15-pve SMP preempt mod_unload modversions
:~# zpool get all rpool | grep -i trim
rpool  autotrim  off  default
:~# zfs set autotrim=on rpool
cannot set property for 'rpool': invalid property 'autotrim'
What am I doing wrong? Was the host initially wrongly configured? How can I get autotrim to run and automatically free some space so the disks won't fill themselves up all the time?
Cheers,
~R.
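Part of the gap described above is expected if the 31.99 TB figure comes from zpool list, which on RAIDZ counts raw capacity including parity, while zfs list reports usable space. A rough accounting, assuming a single RAIDZ1 vdev of 4x8 TB:

# raw pool size:   4 x 8 TB = 32 TB   (zpool list includes parity)
# RAIDZ1 parity:   one disk's worth, ~8 TB
# usable for data: ~24 TB before metadata and allocation overhead
# => ~8 TB of the "missing" space is parity, not lost data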
Please share
root@pve:~# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:44:47 with 0 errors on Wed Dec 31 16:45:20 2025
config:

        NAME                                     STATE     READ WRITE CKSUM
        rpool                                    ONLINE       0     0     0
          nvme-CT2000P3PSSD8_2411E89FF323-part3  ONLINE       0     0     0

errors: No known data errors
root@pve:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.81T  1.51T   306G        -         -    42%    83%  1.00x    ONLINE  -
root@pve:~# zfs list -tall -ospace
NAME                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                      248G  1.51T        0B    104K             0B      1.51T
rpool/ROOT                 248G  1.07T        0B     96K             0B      1.07T
rpool/ROOT/pve-1           248G  1.07T        0B   1.07T             0B         0B
rpool/data                 248G   449G        0B     96K             0B       449G
rpool/data/vm-100-disk-0   248G  4.39G        0B   4.39G             0B         0B
rpool/data/vm-101-disk-0   248G  3.99G        0B   3.99G             0B         0B
rpool/data/vm-102-disk-0   248G   283G        0B    283G             0B         0B
rpool/data/vm-103-disk-0   248G  6.67G        0B   6.67G             0B         0B
rpool/data/vm-104-disk-0   248G  3.99G        0B   3.99G             0B         0B
rpool/data/vm-105-disk-0   248G  4.50G        0B   4.50G             0B         0B
rpool/data/vm-107-disk-0   248G   142G        0B    142G             0B         0B
rpool/var-lib-vz           248G   619M        0B    619M             0B         0B
root@pve:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               24G     0   24G   0% /dev
tmpfs             4,7G  5,3M  4,7G   1% /run
rpool/ROOT/pve-1  1,4T  1,1T  249G  82% /
tmpfs              24G   46M   24G   1% /dev/shm
efivarfs          192K  121K   67K  65% /sys/firmware/efi/efivars
tmpfs              24G  8,0K   24G   1% /tmp
tmpfs             5,0M     0  5,0M   0% /run/lock
tmpfs             1,0M     0  1,0M   0% /run/credentials/systemd-journald.service
rpool             249G  128K  249G   1% /rpool
rpool/var-lib-vz  249G  620M  249G   1% /var/lib/vz
rpool/ROOT        249G  128K  249G   1% /rpool/ROOT
rpool/data        249G  128K  249G   1% /rpool/data
/dev/sda2         3,6T  2,6T  879G  75% /mnt/backup
/dev/fuse         128M   28K  128M   1% /etc/pve
tmpfs             1,0M     0  1,0M   0% /run/credentials/getty@tty1.service
tmpfs             4,7G  8,0K  4,7G   1% /run/user/0