ZFS 2.3.0

plato79

Member
Nov 24, 2020
Well,

It was released today, and it brings a lot of good features:
  • RAIDZ Expansion (#15022): Add new devices to an existing RAIDZ pool, increasing storage capacity without downtime.
  • Fast Dedup (#15896): A major performance upgrade to the original OpenZFS deduplication functionality.
  • Direct IO (#10018): Allows bypassing the ARC for reads/writes, improving performance in scenarios like NVMe devices where caching may hinder efficiency.
  • JSON (#16217): Optional JSON output for the most used commands.
  • Long names (#15921): Support for file and directory names up to 1023 characters.
  • Bug Fixes: A series of critical bug fixes addressing issues reported in previous versions.
  • Numerous performance improvements throughout the code base.
Hope it's implemented in Proxmox soon.
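For anyone curious what these look like in practice, here is a rough sketch of the new CLI surface, going by the upstream release notes and man pages (pool, vdev and device names below are just placeholders; double-check the syntax against man zpool-attach and man zfsprops before relying on it):

  # RAIDZ expansion: attach an extra disk to an existing raidz vdev
  zpool attach tank raidz1-0 /dev/disk/by-id/ata-NEWDISK

  # JSON output for the common status/list/get commands
  zpool status -j
  zfs list -j

  # Long names are a per-dataset property (and need the matching pool feature)
  zfs set longname=on tank/share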
 
Still not implemented:
* Reliable, user-friendly pool import after power failures (without the "cannot..." errors).
* Better IOPS handling for RAIDZ/dRAID configs in use cases like hypervisor image stores (without mirrors being the only useful configuration).
* Consumer-friendly disk handling, so that PVE doesn't eat disks in ZFS-root installations.
 
I'm really excited to see how Proxmox integrates the Direct I/O stuff.

I assume (with little practical experience and less understanding than I'd like) that Direct IO should passively improve the performance of guests stored on NVMe pools without much, if any, tuning needed from the user beyond just turning it on.
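From what I can tell from the 2.3 docs, it isn't a pool-wide switch but a per-dataset property called direct (default "standard", which honors O_DIRECT requests from applications), so "turning it on" would look roughly like this (dataset names are placeholders):

  # see what a dataset currently does with O_DIRECT requests
  zfs get direct tank/vm-disks

  # force direct IO for everything on this dataset (bypasses the ARC)
  zfs set direct=always tank/vm-disks

  # or opt a dataset out of direct IO entirely
  zfs set direct=disabled tank/backups

Whether guests actually benefit without further tuning is the part I'd want to benchmark.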
 
  • Like
Reactions: shavenne
Does anyone know where the upgrade is on Proxmox's roadmap? I'm hesitant to install it manually, as it is such a core component. I could make good use of the RAIDZ expansion feature.
 
  • Like
Reactions: nett_hier
The Proxmox devs are usually pretty good about incorporating the newest openzfs into the stack. It may already be in the test repo; in any case it should be relatively soon.
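A quick way to check whether it has shown up yet is to ask apt what the enabled repositories offer (the userland is packaged as zfsutils-linux on PVE, and the kernel module ships inside the Proxmox kernel packages; for PVE 8 on bookworm the test repository is the pvetest component):

  # optional: enable the test repository first
  # deb http://download.proxmox.com/debian/pve bookworm pvetest

  apt update
  apt policy zfsutils-linux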

I could make good use of the RAIDZ expansion feature.
I wouldn't if I were you. This is a version 1.0 release for this feature. Let others get bit by dragons.
 
  • Like
Reactions: Johannes S
The Proxmox devs are usually pretty good about incorporating the newest openzfs into the stack. It may already be in the test repo; in any case it should be relatively soon.


I wouldn't if I were you. This is a version 1.0 release for this feature. Let others get bit by dragons.
It's a home lab, not a production environment. Everything on the RAIDZ is, let's say, stuff I can get back one way or another if it is destroyed. It would be a bunch of work, but none of it is really irretrievable. I just have two spare 1.2 TB HDDs that I would like to add to an existing 3-disk RAIDZ.
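For whenever it does land: in that scenario the expansion itself should be a fairly short exercise, something along these lines (pool, vdev and device names are placeholders; the one-disk-at-a-time rule is from the upstream docs, so each expansion has to finish before the next disk can be attached):

  # make sure the pool is healthy first, and let any scrub finish before expanding
  zpool status tank
  zpool scrub tank

  # add the first spare disk to the existing raidz vdev
  zpool attach tank raidz1-0 /dev/disk/by-id/ata-SPARE1

  # expansion progress shows up here; attach the second disk only after
  # this expansion has completed
  zpool status tank

One documented caveat worth knowing: blocks written before the expansion keep their old data-to-parity ratio until they are rewritten, so the capacity gain on existing data is smaller than you might expect.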
 
As much as I think that @alexskysilk is joking, this is not a horrible suggestion. Building the kernel and newer ZFS modules from source isn't hugely difficult; it's usually just tedious. It could be a good weekend project, but by the time it's done, it's likely a bit anti-climactic. You are then stuck maintaining your own private branch until the official Proxmox kernel catches up, and in the meantime most of what you gain is incremental rather than ground-breaking.

Personally, I have better things to do with my time and can wait for a few weeks. But if I had time to burn and I wanted to tinker, then this isn't the worst idea.
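For anyone who does take it on as that weekend project, the rough shape of it (following the upstream OpenZFS "Building ZFS" docs, not any Proxmox-supported path; the dependency list below is abbreviated and the headers package name differs between PVE releases) looks like:

  # build dependencies plus headers for the running kernel
  apt install build-essential autoconf automake libtool gawk dkms \
      libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev \
      libattr1-dev libelf-dev python3 python3-dev pve-headers-$(uname -r)

  git clone https://github.com/openzfs/zfs
  cd zfs
  git checkout zfs-2.3.0

  sh autogen.sh
  ./configure
  make -s -j$(nproc)
  make install && ldconfig && depmod

Note that make install stomps on the packaged ZFS userland, which is exactly the private-branch maintenance problem described above.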
 
  • Like
Reactions: Johannes S
As much as I think that @alexskysilk is joking, this is not a horrible suggestion. Building the kernel and newer ZFS modules from source isn't hugely difficult; it's usually just tedious. It could be a good weekend project, but by the time it's done, it's likely a bit anti-climactic. You are then stuck maintaining your own private branch until the official Proxmox kernel catches up, and in the meantime most of what you gain is incremental rather than ground-breaking.

Personally, I have better things to do with my time and can wait for a few weeks. But if I had time to burn and I wanted to tinker, then this isn't the worst idea.
Right, that's why I'm asking where it is on the roadmap, whether we're talking days, weeks, months... At the moment the two additional HDDs sit idle, and I hesitate to use them for anything because I know I want to add them to the zpool once the feature is available.
 
Right, that's why I'm asking where it is on the roadmap, whether we're talking days, weeks, months... At the moment the two additional HDDs sit idle, and I hesitate to use them for anything because I know I want to add them to the zpool once the feature is available.
Proxmox uses a custom Ubuntu kernel, the Debian repos (with, again, some customized userland packages), and, I think, a version of ZFS with patches manually integrated from upstream (so even if Proxmox ships an older ZFS version, it may already contain security fixes from newer ZFS releases).

They just upgraded ZFS 2.2.y to the latest .y release … a few weeks ago? ZFS 2.3 was only officially released on January 13, about 1.5 months ago. The PVE dev team tends to need a bit more time than that for a non-trivial upgrade of a critical system component.

There's no supported mechanism for dropping a vanilla ZFS update into a Proxmox install. I would never try to do it on a system where I had any data I cared about.

You might want to ask if it's planned for Proxmox 8.3 over here: https://forum.proxmox.com/threads/proxmox-ve-8-3-released.157793/page-5
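If you want to see exactly what you are running today, including whether Proxmox has folded newer fixes into an older version string, the running module and the package changelog are the places to look (paths assume the standard PVE packaging):

  # userland and kernel-module versions as the running system sees them
  zfs version
  cat /sys/module/zfs/version

  # the package changelog shows which upstream fixes Proxmox actually pulled in
  zcat /usr/share/doc/zfsutils-linux/changelog.Debian.gz | head -n 40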
 
  • Like
Reactions: Johannes S
I'm also excited to see how DirectIO will perform in high-performance NVMe environments, especially with PCIe 5.0 enterprise disks like those from Kioxia. Given the potential for faster data throughput and lower latency, I’m curious about its impact on demanding I/O workloads. We all know that ZFS isn’t primarily a performance-oriented filesystem, so any potential gains will definitely be welcome.
 
I'm guessing this will be a Proxmox 9 feature, just based on how much new storage plumbing would be needed to implement all the new ZFS 2.3 storage features.
It takes zero plumbing to update ZFS now. The relevant functionality can be utilized from the CLI (which is where you should be managing ZFS). Future upgrades of plumbing in the GUI shouldn’t delay introduction of the functionality.
 
It takes zero plumbing to update ZFS now. The relevant functionality can be utilized from the CLI (which is where you should be managing ZFS). Future upgrades of plumbing in the GUI shouldn’t delay introduction of the functionality.
Consider also the changes to the documentation to cover new ZFS features like DirectIO for NVME, and changes to the UI to adjust, e.g., DirectIO settings when you create a new ZFS storage in Proxmox. Or perhaps you want to adjust whether specific VMs use DirectIO or not? Is that even a thing? I have no idea.

And that's not considering, again, that Proxmox itself does some custom things with how it implements and uses ZFS. They can't just drop vanilla ZFS updates in without testing them for stability and function. At that point, it's arguably wasted effort, because they'll need to develop, test, and deploy their customized ZFS implementation eventually.

TrueNAS also uses a custom version of ZFS that means you can't just drop the latest vanilla ZFS release into it, though they go about it in a very different way.
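On the per-VM question: as far as I can tell from the 2.3 docs, direct is an ordinary dataset property, so per-VM control wouldn't need new GUI plumbing at all; it would come down to setting the property on whatever dataset backs the VM's disks (or on the individual guest volume, if the property is honored there, which I haven't verified). Purely as a hypothetical sketch, assuming the usual PVE zfspool layout:

  # set a default for everything under the storage's parent dataset
  zfs set direct=always rpool/data

  # and see what an individual guest volume ends up with
  zfs get direct rpool/data/vm-100-disk-0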
 
Consider also the changes to the documentation to cover new ZFS features like DirectIO for NVME, and changes to the UI to adjust, e.g., DirectIO settings when you create a new ZFS storage in Proxmox. Or perhaps you want to adjust whether specific VMs use DirectIO or not? Is that even a thing? I have no idea.

And that's not considering, again, that Proxmox itself does some custom things with how it implements and uses ZFS. They can't just drop vanilla ZFS updates in without testing them for stability and function. At that point, it's arguably wasted effort, because they'll need to develop, test, and deploy their customized ZFS implementation eventually.

TrueNAS also uses a custom version of ZFS that means you can't just drop the latest vanilla ZFS release into it, though they go about it in a very different way.
I respectfully disagree. First, none of those changes need to be addressed in the GUI or in manuals to simply update the build to 2.3.0 or 2.3.1. There is no reason functionality can’t be added to the GUI, and documented, in later releases.

Further, they can just drop in code without extensive testing: assuming the kernel and modules compile, there is an entire repository aptly named "testing". ZFS 2.3 hasn't even made its way to the testing repository.
 
Further, they can just drop in code without extensive testing: assuming the kernel and modules compile, there is an entire repository aptly named "testing". ZFS 2.3 hasn't even made its way to the testing repository.

Well, I respectfully disagree ;) In the end, a hypervisor and a file system are infrastructure code par excellence, and nobody likes an upgrade which leads to data loss. The chance might be quite small, and there is the "test" repository, but in my book the conservative approach of the Proxmox developers is a good thing. I can live without new features for a while; I can't live with data loss. So I agree with @SInisterPisces. And it's not that there won't be a new ZFS at some point; I guess the PVE developers just want to get an idea of the changes and potential problems before even bringing them to the test repository. There is nothing wrong with being "better safe than sorry", especially if we are talking about a file system.
 
  • Like
Reactions: waltar and UdoB