ZFS 2.3.0 has been released; how long until it's available?

We have it on our radar, and will soon run some preliminary internal tests.

As always, I cannot provide an explicit ETA, but it's planned.
Thank you for the details.

Is there any place to follow progress, perhaps a bug report tracking these internal tests and the eventual release?
 
Hey, how's it going? I'm really looking forward to getting my hands on 2.3, as I have a number of hard disks in my system that I would like to add to my ZFS pool soon. Are there any noteworthy updates on the progress yet?
 
Ah, yeah, homelab. Is it risky? I mean, compiling (i.e., executing the scripts linked above) on a system where there is an existing ZFS pool?
 
Everything is risky when you're updating software. Just don't do a 'zpool upgrade', have everything backed up, and then go forward if you want to accept the risk.
 
Sure.

What does `zpool upgrade` do, and why should I avoid it? Can I still use the pool expansion feature without it?
 
Use 2.3.0 for a week to gauge stability. Do a scrub, move files around, and everything else you would normally do.

Then create a file-backed test pool and try out the expand feature to get used to the process. (You can also do this in a VM.)

Demo scripts here:

https://github.com/kneutron/ansitest/tree/master/ZFS

When you're satisfied that nothing is going to break, and IF you're not using ZFS for boot/root, do a 'zpool upgrade' on the physical pool you want to expand; that will enable the feature.

See 'man zpool-upgrade'.
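
For reference, a rough sketch of what such a file-backed dry run could look like (the file paths and the pool name 'testpool' are made up for illustration; the linked demo scripts cover this more thoroughly). A pool freshly created under 2.3 should already have the expansion feature enabled, so no 'zpool upgrade' is needed on the throwaway pool:

# create sparse backing files for a throwaway pool (never put real data on it)
truncate -s 1G /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4
zpool create testpool raidz1 /tmp/zd1 /tmp/zd2 /tmp/zd3

# copy some files in, then attach a fourth "disk" to the raidz vdev (the 2.3 expansion)
zpool attach testpool raidz1-0 /tmp/zd4
zpool status testpool        # watch the expansion progress

# clean up when done
zpool destroy testpool
rm /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4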
 
Hi

We have developed and intensively tested packages with ZFS 2.3, since we needed this in our deployment. :-)

You are welcome to use them:

https://github.com/MEIT-REPO/proxmox-zfs

This is specifically compatible with systems booting from ZFS, which was the most challenging part!

PS: we have been following and testing direct I/O since the first release candidate, and the performance benefits are great!
 
we have been following and testing direct I/O since the first release candidate, and the performance benefits are great!
I'd be very curious about your observations on where, when, and how much of a difference this makes. Would you be willing to share what you found? E.g.:

- what class and count of devices show a benefit, and at what point does it have a detrimental effect?
- can direct I/O be turned on/off? (I know there's a mount switch, but what about zvols?)
- how does it behave for local (host) datasets, QEMU, and LXC?

Thank you for any insight!
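
For context, my (possibly wrong) understanding is that on 2.3 this can be toggled per dataset at runtime via the 'direct' property; a minimal sketch below, with the dataset name 'tank/data' purely hypothetical. How this maps onto zvols is exactly the part I'm unsure about:

# check the current setting (standard = honour O_DIRECT requests from applications, the default)
zfs get direct tank/data

# force direct I/O for everything on this dataset, or disable it entirely
zfs set direct=always tank/data
zfs set direct=disabled tank/data

# back to the default behaviour
zfs set direct=standard tank/data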
 
In our testing with ZFS 2.3 and Linstor on NVMe devices, we've achieved impressive results, particularly when leveraging NVMe-oF (RDMA) over a 100 Gbit ring topology in a three-node cluster: a 2x mirror on nodes 1 and 2, accessed from diskless node 3. Our primary storage configurations include NVMe and SAS SSDs, with the most significant performance gains coming from Gen4 and Gen5 NVMe drives - a result that was expected but still remarkable: up to 2x throughput and reliably constant 100 Gbit wire speed, and even more locally.

Key Takeaways from Our Testing

  • Diskless Node Access: The real game-changer was accessing storage from a diskless node via NVMe-oF (RDMA), maximizing throughput and minimizing latency.
  • Network Efficiency: Running on a 100 Gbit ring topology ensured ultra-low-latency data access across the cluster and is very affordable.
  • Stability & Reliability: Since the beginning of our tests with ZFS 2.3-rc1, we have encountered zero issues - a testament to its robustness.
  • Dataset-Level Control: One of ZFS's strengths is the ability to control storage behavior dynamically at the dataset level during runtime (a short sketch follows below).
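
To illustrate that last point, a hedged sketch of the kind of runtime tuning we mean; the dataset name 'tank/vmdata' and the values are examples, not recommendations:

# per-dataset tuning, applied live without unmounting or restarting anything
zfs set compression=lz4 tank/vmdata
zfs set recordsize=16K tank/vmdata       # affects newly written blocks only
zfs set sync=standard tank/vmdata
zfs set direct=always tank/vmdata        # direct I/O behaviour, new in 2.3

# verify what is in effect and where each value is inherited from
zfs get compression,recordsize,sync,direct tank/vmdata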

Ready for Production?

Based on our experience, we confidently recommend using ZFS 2.3 in production environments. The combination of high-speed networking, NVMe-oF, and Gen5 NVMe drives delivers exceptional performance while maintaining stability. However, this is our personal experience, not a paid endorsement - just a genuine recommendation from the field.