ZFS 2.3.0 has been released; how long until it's available?

We use Linstor on top of ZFS and provision VMs with Linstor as the storage backend. It does not matter what filesystem your guest is using.
 
In our testing with ZFS 2.3 and Linstor on NVMe devices, we've achieved impressive results, particularly when leveraging NVMe-oF (RDMA) over a 100 Gbit ring topology in a three-node cluster: a 2x mirror across nodes 1 and 2, accessed from diskless node 3. Our primary storage configurations include NVMe and SAS SSDs, with the most significant performance gains coming from Gen4 and Gen5 NVMe drives - an expected but still remarkable result: up to 2x, reliably holding a constant 100 Gbit wire speed, and even more locally.
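For reference, here is a minimal sketch of how such a setup can be expressed with the LINSTOR CLI. The node, pool and resource names are placeholders, exact flag names may differ by LINSTOR version, and the NVMe-oF/RDMA transport configuration itself is outside the scope of this sketch:

Code:
# Register the ZFS pools on the two disk-bearing nodes (names are placeholders)
linstor storage-pool create zfs node1 pool_nvme tank
linstor storage-pool create zfs node2 pool_nvme tank

# Resource group that places two replicas (the 2x mirror on nodes 1 and 2)
linstor resource-group create rg_nvme --storage-pool pool_nvme --place-count 2
linstor volume-group create rg_nvme

# Spawn a resource; node 3 then accesses it disklessly over the 100 Gbit fabric
# (the exact diskless flag may vary by LINSTOR version)
linstor resource-group spawn-resources rg_nvme vm-disk-1 100G
linstor resource create node3 vm-disk-1 --diskless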

Key Takeaways from Our Testing

  • Diskless Node Access: The real game-changer was accessing storage from a diskless node via NVMe-oF (RDMA), maximizing throughput and minimizing latency.
  • Network Efficiency: Running on a 100 Gbit ring topology ensured ultra-low-latency data access across the cluster and is very affordable.
  • Stability & Reliability: Since the beginning of our tests with ZFS 2.3-rc1, we have encountered zero issues - a testament to its robustness.
  • Dataset-Level Control: One of ZFS's strengths is the ability to control storage behavior dynamically at the dataset level at runtime (see the sketch below).
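To illustrate that last point, here is a minimal sketch of per-dataset tuning at runtime. The pool/dataset names are placeholders, and the `direct` property is the Direct I/O control introduced with OpenZFS 2.3:

Code:
# Per-dataset tuning at runtime (pool/dataset names are placeholders)
zfs set recordsize=16K tank/vm-disks     # match the typical I/O size of the workload
zfs set compression=lz4 tank/vm-disks    # cheap inline compression
zfs set direct=always tank/vm-disks      # OpenZFS 2.3: force Direct I/O, bypassing the ARC
zfs get recordsize,compression,direct tank/vm-disks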

Ready for Production?

Based on our experience, we confidently recommend using ZFS 2.3 in production environments. The combination of high-speed networking, NVMe-oF, and Gen5 NVMe drives delivers exceptional performance while maintaining stability. However, this is our personal experience, not a paid endorsement - just a genuine recommendation from the field.
Very interesting. How many IOPS did you get in a guest on node 1 and on node 3?
 
Sorry not to push this further - in my experience, those numbers don't mean anything without context. They depend heavily on the rest of the hardware, like the CPU and, in this specific case, the networking. I don't want to post figures here that might get wrongly interpreted. What exactly are you looking to compare those numbers with?
 
I want to compare IOPS on 2 local NVMe drives vs. a raidz1/mirror on these NVMes with ZFS 2.3 Direct I/O.
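Roughly along these lines - a hedged sketch only, where device paths and dataset names are placeholders and the raw-device run is destructive on that disk:

Code:
# Raw local NVMe (DESTRUCTIVE on the target device - placeholder path)
fio --name=raw-nvme --filename=/dev/nvme0n1 --direct=1 --ioengine=io_uring \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

# File on a ZFS 2.3 dataset with Direct I/O enabled beforehand, e.g.:
#   zfs set recordsize=4K tank/bench
#   zfs set direct=always tank/bench
fio --name=zfs-direct --directory=/tank/bench --size=10G --direct=1 --ioengine=io_uring \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting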
 
We don't run raid-z in our production environments; we run Linstor with HA and replication. The whole idea was to look at the possible Direct I/O improvements in ZFS 2.3, which are significant (1.3x - 2x depending on hardware) in relevant write scenarios.
ZFS 2.3 adds many more improvements and bug fixes, including adding disks to raid-z arrays live (RAID-Z expansion), but that was not my focus here.
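For anyone curious, that live raid-z expansion comes down to one command per added disk. Pool, vdev and device names below are placeholders:

Code:
# Check the current layout and note the raidz vdev name (e.g. raidz1-0)
zpool status tank

# OpenZFS 2.3 RAID-Z expansion: attach another disk to an existing raidz vdev
zpool attach tank raidz1-0 /dev/disk/by-id/nvme-NEWDISK

# The expansion runs in the background; progress shows up in zpool status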

We use CD8P-V series Gen5 NVMes on a Zen 4 platform with DDR5-4800; we now get a consistent 10 GB/s throughput from diskless nodes and close to 12 GB/s on the local system.

Since we upgraded all our servers to ZFS 2.3, we built easy-to-use ZFS modules (for kernel 6.11.11, with dkms support), as all our servers in production boot from ZFS as well.

For anybody looking to get this running: again, feel free to use the modules. If anybody is looking for help, we are happy to support this as well as we can, in an open-source-ish manner!

If you want to deep-dive more into benchmarks etc., I think it's better to spin off a new topic and/or PM me, and we can run some tests together ;-)
 
> If you want to deep-dive more into benchmarks etc., I think it's better to spin off a new topic and/or PM me, and we can run some tests together ;-)

I will be happy to start testing in a new topic. My hardware is 4x Micron 7400 1.92 TB, 2x Micron 7400 3.84 TB, and 2x DL380 Gen10 connected via Mellanox cards with 100G RDMA.
 
> For anybody looking to get this running: again, feel free to use the modules. If anybody is looking for help, we are happy to support this as well as we can, in an open-source-ish manner!

I followed the instructions on the webpage, but it's not working for me in a VM that boots from an rpool:

- There is no valid GPG data
- The new file does not get downloaded to /etc/apt/sources.list.d/

Code:
dkms is already the newest version (3.0.10-8+deb12u1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
gpg: no valid OpenPGP data found.
Hit:1 http://security.debian.org bookworm-security InRelease
Hit:2 http://download.proxmox.com/debian/pve bookworm InRelease                                      
Hit:3 http://ftp.us.debian.org/debian bookworm InRelease                                             
Hit:4 http://ftp.us.debian.org/debian bookworm-updates InRelease               
Ign:5 https://download.webmin.com/download/newkey/repository stable InRelease
Hit:6 https://download.webmin.com/download/newkey/repository stable Release
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package libnvpair3 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  libnvpair3linux
Package libuutil3 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  libuutil3linux
Package zfs is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'zfs' has no installation candidate
E: Unable to locate package libzfs6
E: Package 'libnvpair3' has no installation candidate
E: Package 'libuutil3' has no installation candidate
E: Unable to locate package libzfs6-devel
E: Unable to locate package libzpool6
E: Unable to locate package pam-zfs-key
update-initramfs: Generating /boot/initrd.img-6.8.12-8-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/6D82-EDB6
        Copying kernel and creating boot-entry for 6.8.12-4-pve
        Copying kernel and creating boot-entry for 6.8.12-8-pve

I downloaded the GitHub .zip and tried manually copying the files, but apt says it doesn't trust the GPG key.
Probably just going to wait for the test release at this point - too much trouble.
 
Hi, this is exactly the reason we made that package and the instructions. If you follow the instructions, it's super easy. Without them, it can be really tricky, though!

Looking at the output above, it looks to me like you did not add the repo properly.
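In general, on Debian bookworm the repo setup comes down to something like the following. The key URL, repo URL and suite below are placeholders, not the real values from our instructions, so please take those from there:

Code:
# Placeholders only - substitute key URL, repo URL and suite from the actual instructions
curl -fsSL https://example.com/zfs23/archive-key.gpg \
  | gpg --dearmor -o /usr/share/keyrings/zfs23-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/zfs23-archive-keyring.gpg] https://example.com/zfs23 bookworm main" \
  > /etc/apt/sources.list.d/zfs23.list

apt update

"gpg: no valid OpenPGP data found" typically means the key download itself failed (e.g. an HTML error page came back instead of a key), which would also explain why the list file never showed up in /etc/apt/sources.list.d/.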
 
Okay, I took the leap. Using @marcellinus' repo I was able to upgrade ZFS to 2.3.0, including my zpool. Everything seems to be working. The next step is adding additional hard disks to the RAIDZ.

I encountered some issues on the way, though, caused by the fact that I had a pending kernel update on my system and had neglected to reboot. My running (uname -r) kernel was still the pre-update one, while `update-initramfs -u` builds the initrd for the newly installed kernel version. I solved the resulting version clash by rebooting the server into the newest kernel and basically starting over, using `apt install ... --reinstall` on the ZFS-specific packages.
 
We looked into this and found that if you end up updating your kernel and ZFS at the same time, while the running kernel is older than the newly installed one, you can:

1) make sure you have the headers installed for the new kernel version
2) have dkms build the ZFS module for the new kernel version

However, it is much more convenient to start from a fully updated, stable and cleanly booted system.
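Roughly, those two steps look like this on Proxmox/Debian. The kernel and module versions below are placeholders - use whatever your system actually has installed (on older setups the headers package is named pve-headers-<version> instead):

Code:
# 1) headers for the newly installed (not yet running) kernel - version is a placeholder
apt install proxmox-headers-6.8.12-8-pve

# 2) build and install the ZFS module via dkms for exactly that kernel
dkms build zfs/2.3.0 -k 6.8.12-8-pve
dkms install zfs/2.3.0 -k 6.8.12-8-pve

# Rebuild the initrd for that kernel and verify the module is registered
update-initramfs -u -k 6.8.12-8-pve
dkms status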

Let me know if you need any further help on this.