ZFS 2.3.0 has been released, how long until it's available?

Given what's mentioned a few posts above [1], it could be released with PVE 9, which will be based on Debian Trixie. Trixie has not reached full freeze yet [2], but that may happen soon, so we could expect a PVE 9 release around Q3 this year (I think I've seen some info posted by Proxmox staff about an estimated release date, but I can't find it right now).

[1] https://forum.proxmox.com/threads/z...w-long-until-its-available.160639/post-772358
[2] https://release.debian.org/trixie/freeze_policy.html
Debian Trixie is scheduled to be published as the new stable release on 9 August 2025 (ref https://lists.debian.org/debian-devel-announce/2025/07/msg00003.html).
 
That bug seems to have been there quite a bit earlier than 2.3 (or it's potentially a different issue; the last comment doesn't really have any details).
 
We looked into this and found that if you end up updating your kernel and ZFS at the same time, while the running kernel is somewhat older than the one installed by the update, you should:

1) make sure you have headers installed for the new kernel version, and
2) have dkms build the module for the new kernel version.
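Roughly, that could look like this (the header metapackage name and kernel version are only examples; adapt them to your setup):

```
# make sure headers matching the newly installed kernel are present
apt install proxmox-default-headers
# (re)build and install the DKMS modules (including ZFS) for that kernel, then verify
dkms autoinstall -k <new-kernel-version>
dkms status
```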

However, it is much more convenient to start with a fully updated, stable and cleanly booted system.

Let me know if you need any further help on this.
Now that Proxmox 9 is out (and includes 2.3+!!!), I assume there will be steps needed to remove the manually added ZFS module before upgrading. Could you provide steps for those of us who used your wonderful repo to follow for a successful upgrade? :)
 
I'm on Proxmox 9 and can confirm it does work; with some tweaks and sync=standard, I am seeing a huge performance difference.

Also, I would back up your VMs/CTs and /etc (probably your whole drive too if you can, just to be sure you have everything in case you need a file; it's not like Proxmox is that big anyway) and reinstall, or at least be prepared to.

I went the upgrade route and it just nuked my system and said it was finished. It basically deleted everything related to Proxmox and stopped, which I guess happened to a few other people too. So I personally would just reinstall.
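For the backup part, something like this should cover the important bits (the dump directory is just an example path):

```
# back up all VMs/CTs plus the host config to an external target before reinstalling
vzdump --all --mode snapshot --dumpdir /mnt/usb-backup
tar czf /mnt/usb-backup/pve-etc.tar.gz /etc
```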
 
What tweaks exactly?
 
Now that Proxmox 9 is out (and includes 2.3+!!!), I assume there will be steps needed to remove the manually added ZFS module before upgrading. Could you provide steps for those of us who used your wonderful repo to follow for a successful upgrade? :)
I had ZFS 2.3 installed on PVE 8 using the DKMS modules from Debian backports.

Updating to PVE 9 and its included ZFS 2.3 modules is very natural. The pve8to9 upgrade check script does warn about DKMS modules being installed, but as long as you don't have kernel headers installed for the PVE 9 kernel, the upgrade does not build DKMS modules for it. So after the reboot you run the PVE 9 kernel with its own ZFS modules, and you can then uninstall the old kernel and anything zfs-dkms related.

I suppose this works the same with previously installed DKMS modules from any other repo.
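The cleanup afterwards is basically just checking that the bundled module is in use and then removing the DKMS leftovers (package names depend on what exactly you had installed, so double check with dkms status first):

```
# after rebooting into the PVE 9 kernel, which ships its own ZFS module
cat /sys/module/zfs/version   # should report the bundled 2.3.x
dkms status                   # make sure no zfs module is still built against the running kernel
apt purge zfs-dkms            # then remove the old DKMS package(s) and, if you like, the old kernel
```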
 
What tweaks exactly?
Mostly sync=standard; it seems to make a huge difference now, whereas before it didn't seem to make much of one at all.

I recreated my small pool and used zpool upgrade <poolname> to upgrade my 12 TB drive. If you upgrade an old pool and want to use fast dedup instead of the old dedup, you need to create new datasets with a different checksum algorithm (I ended up using checksum=skein and dedup=skein,verify on my backups dataset after some testing), otherwise it won't create a new table and use fast dedup (assuming you use dedup on any of your datasets).
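Roughly what that looked like for me (pool name twelve as below; the dataset name here is just a placeholder, not my exact commands):

```
# the sync setting that made the big difference
zfs set sync=standard twelve
# enable the new feature flags (including fast dedup) on the existing pool
zpool upgrade twelve
# a new dataset with a non-default checksum so a fresh (fast) dedup table gets used
zfs create -o checksum=skein -o dedup=skein,verify twelve/newdata
```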

I also set these in /etc/modprobe.d/zfs.conf:
Code:
# maximum ARC size in bytes (~9.4 GiB here)
options zfs zfs_arc_max=10087301120
# per-vdev I/O queue depth (older tunables; newer OpenZFS may ignore these in favour of the zfs_vdev_*_max_active parameters)
options zfs zfs_vdev_min_pending=1
options zfs zfs_vdev_max_pending=32
# seconds between forced transaction group syncs (default is 5)
options zfs zfs_txg_timeout=40
# legacy write throttle switch from older ZFS releases; may not exist in current OpenZFS
options zfs zfs_no_write_throttle=1
# dirty data caps as a percentage of RAM
options zfs zfs_dirty_data_max_max_percent=50
options zfs zfs_dirty_data_max_percent=50
# percentage of the dirty data cap at which write delays start
options zfs zfs_delay_min_dirty_percent=80
For zfs_vdev_*_pending (I got the idea from "ZFS Slow Performance Fix"), the value depends on the drive type: 1-8 for SATA drives and 32 for SAS drives, I believe.
But I am not entirely sure about everything I set; I'm just experimenting here.

I ended up using these on my backups dataset (and on most of my other datasets, just with lower zstd compression and different record sizes depending on content):
Code:
zfs set primarycache=metadata twelve/backupz
zfs set compression=zstd-19 twelve/backupz
zfs set recordsize=128k twelve/backupz
zfs set checksum=skein twelve/backupz
primarycache=metadata: for data that isn't frequently accessed, so it doesn't waste the ARC; this twelve pool is all backups and files, so basically nothing on it will be frequently re-read.
compression=zstd-19: I usually use zstd for files that mostly don't compress well but might include one here and there that will. I use off for media, 3-7 for basic files / fast access, 11-13 for files that are almost never accessed but that I don't want to be slow, and 19 for the highest (and slowest) compression.
recordsize=128k: I usually use 128k for compressible / small files, backups, etc., 256-512k for mixed files, 1M-2M for media, and 4M-8M for AI models.
For the dedup= / checksum settings (see "Checksums and Their Use in ZFS"): skein / blake3 seem to be best unless you want to be extremely sure there is absolutely no risk of hash collisions, in which case I think sha512/sha256 are best (for no risk of hash collisions and data integrity); but sha256 is the default for dedup, so you need a different algorithm if you are upgrading a pool rather than creating a new one.
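To double-check what a dataset actually ended up with, something like this works (using my twelve/backupz as the example):

```
# show the effective properties on the dataset
zfs get checksum,dedup,compression,recordsize,primarycache twelve/backupz
# dedup table (DDT) statistics for the whole pool
zpool status -D twelve
```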

There may be more experienced people who can offer more in-depth / better advice on this, though; again, I'm just experimenting here, going off everything I have been reading about ZFS.
 
I had ZFS 2.3 installed on PVE 8 using the DKMS modules from Debian backports.

Updating to PVE 9 and its included ZFS 2.3 modules is very natural. The pve8to9 upgrade check script does warn about DKMS modules being installed, but as long as you don't have kernel headers installed for the PVE 9 kernel, the upgrade does not build DKMS modules for it. So after the reboot you run the PVE 9 kernel with its own ZFS modules, and you can then uninstall the old kernel and anything zfs-dkms related.

I suppose this works the same with previously installed DKMS modules from any other repo.
Well... my upgrade did not go so smoothly. dist-upgrade crashed and started throwing errors about the ZFS module not compiling:
`Autoinstall on 6.14.8-2-pve failed for module(s) zfs(6)`

```
Errors were encountered while processing:
zfs-dkms
proxmox-kernel-6.14.8-2-pve-signed
proxmox-kernel-6.14
proxmox-default-kernel
proxmox-ve
Error: Sub-process /usr/bin/dpkg returned an error code (1)
```

I fumbled around a bit and got it working. I basically undid the steps from https://github.com/MEIT-REPO/proxmox-zfs: I removed the entry from `/etc/apt/sources.list.d/my_list_file.list` and did `apt remove` for all the packages it had installed, then re-ran apt update and dist-upgrade. It didn't show any ZFS packages installed, which made me nervous, but after rebooting the standard Proxmox ZFS 2.3.3 was in place, and it detected my existing pool without issue.
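In case it helps anyone else, the recovery boiled down to roughly this (the list file name is the repo's placeholder and the exact package set depends on what was installed, so adjust accordingly):

```
# drop the third-party ZFS repo entry and the packages it installed (names assumed)
rm /etc/apt/sources.list.d/my_list_file.list
apt remove zfs-dkms zfsutils-linux zfs-zed
# re-run the interrupted upgrade, then reboot into the PVE 9 kernel with its bundled ZFS 2.3
apt update && apt dist-upgrade
reboot
```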