ZFS 2.3.0 has been released, how long until it's available?

Any chance any of the Proxmox staff could give us a very rough ETA of when ZFS 2.3/2.3.1 will make it into the test branch? I am a bit hesitant to install the one from the above repo, and I would rather not if it is only a week or so away from making it into testing. If it is months away I might give it a shot, but I'd rather not; I don't want to be stuck with a random third-party ZFS build or unknown changes that may complicate future updates.
 
If it were me, and my production PVE server, I'd never install a non-Proxmox ZFS on it. Even if it works at first, it might break the next time Proxmox releases a point release (e.g., 8.4.2) with bugfixes that expects ZFS 2.2.x and proceeds to go boom.
 
That is part of my concern. While it is just a little homelab server, I would like fast dedup and Direct I/O on it as soon as possible; I am just using a few single disks with VMs and files, and it seems fast dedup / Direct I/O would make a world of difference there. But I do not want to mess it up either.

Plus, since it's a GitHub repo with only that one thing posted to it, requiring Python and a bunch of other dependencies, it just seems like something I shouldn't be testing on my main server. But it certainly is tempting, not knowing when the official one will be released and all.
 
Take any hardware you have lying around and give a manual ZFS 2.3 install a try; some hands-on experience will help you make future decisions for your regular PVE host(s).
Proxmox is still testing the new version(s) and will perhaps ship it soon with a further 8.4.* or 8.5 release.
 
I understand wanting new features, but one of the things I love about Proxmox is that, left to itself, it's incredibly stable. Even in my home/home-office server environment, where I'm teaching myself as I go, Proxmox is solid enough to go for weeks or months without a single issue.

It's reassuring: when something goes wrong, it's because I've tweaked it too hard (usually trying to get PCIe passthrough to work). That makes it easy to tell when I've done something wrong.

ZFS 2.3 hasn't come to Proxmox yet, and the best explanation for that is that the PVE devs don't think it's ready yet. PVE is complicated. ZFS is complicated. Meshing them together is non-trivial. It's not like just dropping a new desktop environment on top of Linux when you want to try something different.

There'll always be some new feature coming that sounds great. But try to focus on whether your system is actually working well enough to do what you need it to. When the time comes, ZFS 2.3 will drop and you'll get a rock-solid performance boost.
 

I certainly agree there; I appreciate the stability a lot. I have a tiny little PC running my firewall and other things, aside from my main Proxmox server, and I am up to almost 200 days without so much as a reboot.

Yes, that is a huge part of my hesitancy. ZFS is not a simple thing, and I thought the PVE version was custom to Proxmox with its own differences, so just installing stock seemed like a bad idea in itself.

I do run into issues currently with ZFS. Using single disks it is very slow and stalls out, and the few datasets I use dedup on are so very, very slow that it can take hours to write data that would be done in a few minutes on another drive. It causes a lot of headaches, but I do not want to destroy my setup to try new features, haha.
 
Yes, both ZFS and a modified Ubuntu Linux kernel are custom in Proxmox ... the PVE devs often backport security fixes and other features from newer ZFS and kernel versions. Those upstream backports show up as the -x revisions of the Proxmox kernel (e.g., 6.8.12-4, -5, etc.).

It definitely sounds like your ZFS setup needs some help. You might also want to ask about optimizing your ZFS setup over on PracticalZFS.com and/or in a separate thread here. :)
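If you want to see exactly what you are running right now, a quick check looks something like this (commands only, no invented output; all three tools ship with a stock PVE install):

```
# ZFS userland tools and kernel module versions shipped by Proxmox
# (both typically report a matching -pve build):
zfs version

# Running Proxmox kernel, including the -N backport revision:
uname -r

# Full package overview, including the pinned zfsutils version:
pveversion -v
```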
 
Yeah, I really would rather not mess with that and risk data loss, haha.

Thank you for the suggestion; my ZFS could definitely use some work, I am just kind of learning as I go. I think it is the combination of using dedup on those datasets plus a single disk. While it is an enterprise HDD that hits 250 MB/s, ZFS in general just stalls here and there and drops to 2-5 MB/s or less, and it stalls especially on the dedup-enabled datasets; that is where it really becomes a headache. Maybe someone over there could help. I've made a few posts here about it, but the consensus was basically that ZFS is bad on a single disk, it's probably buffer bloat, etc., haha.
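In case it helps anyone hitting the same wall, this is a minimal sketch of how to check whether the dedup table (DDT) even fits in RAM; the pool name tank is just a placeholder, and zdb can take a while on large pools:

```
# Summary of dedup table entries and their in-core size for the pool:
zpool status -D tank

# More detailed DDT histogram (read-only, but can be slow on large pools):
zdb -DD tank

# How big the ARC actually is right now, for comparison:
arc_summary | head -n 40
```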
 
I suspect the deduplication is eating your performance. But I'm not sure.

Good luck!
 
It certainly is on some of my datasets, but at the same time hard drives are expensive and I only have case space for so many, so I need dedup on those datasets in the end. I am hoping fast dedup and Direct I/O solve my problems, since they all seem to come down to dedup and buffers not working well with my single drive no matter how I configure them. That is part of why I was asking earlier whether that repo would be removable once 2.3-pve is released, but it is probably best to just wait; hopefully it won't be too much longer before it hits the test branch at least.

Thank you, and thank you for the site suggestion above; I should really see if they can help.
 
Be aware that after an update to ZFS 2.3.* with the fast dedup capability, your already-deduped (single disk) pool will not perform any better on its own: to use the new algorithms you need to enable the feature and rewrite your data onto it.
Yes, that won't be fun, but I am definitely willing to do it to make the switch to 2.3. I planned to make a new dataset, transfer everything over to it, and delete the old one.

I think I have the commands I'll need to make sure it's enabled down (roughly the sketch below); now to wait for 2.3. Haha, I really hope it's at least in the test branch soon, this thread is already a few months old.

Thank you for the heads-up too; if I hadn't known, I surely would have wanted to, and I'm sure many others don't know either. That's a pretty important piece of information.
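For what it's worth, the rewrite I have in mind looks roughly like this. It's just a sketch with placeholder names (tank, tank/old, tank/new), and it assumes the feature flag is called fast_dedup as in upstream OpenZFS, so double-check with zpool upgrade -v once 2.3-pve actually lands:

```
# Enable the new feature flag on the pool (assumes the upstream name fast_dedup;
# confirm with `zpool upgrade -v` on the actual 2.3 build):
zpool set feature@fast_dedup=enabled tank

# Fresh dataset with dedup on; new writes go through the new-style DDT:
zfs create -o dedup=on tank/new

# File-level copy so every block is actually rewritten (placeholder paths):
rsync -aHAX /tank/old/ /tank/new/

# After verifying the copy, drop the old dataset:
zfs destroy tank/old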
 
I am yet to see a dataset with a reasonable dedup ratio, where all that hassle really helps more than it hurts.
Care to show your zpool list -v?
Yeah, sure. Mine is not too great right now; I have been moving files around and didn't write everything into the dedup datasets, because I just didn't want to move too much at 2-5 MB/s since it's literally terabytes, lol.


Code:
NAME                                         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
onetb                                        928G   600G   328G        -         -    54%    64%  1.15x    ONLINE  -
  ata-WDC_WD10JPVX-22JC3T0_WD-WX31A356HN8D   932G   600G   328G        -         -    54%  64.6%      -    ONLINE
twelve                                      10.9T  8.96T  1.95T        -         -    52%    82%  1.05x    ONLINE  -
  wwn-0x5000cca253c2e8ae-part2              10.9T  8.96T  1.95T        -         -    52%  82.2%      -    ONLINE

Across both drives it's saving me about 340 GB, but with what I plan to use it for it will end up saving me a lot more than that. I just haven't done it all yet, between how slow it is and the fact that I will have to rewrite it all once we get 2.3; I'm kind of waiting on that rather than wasting my time writing 3 TB+ at 2-5 MB/s ...
 
A question for people already using fast deduplication: as far as I know, the previous deduplication feature is the main culprit for the myth that ZFS eats huge amounts of RAM and might have performance problems. I already know that the new fast deduplication feature should fix the performance issues, but I didn't read anything on its effect on RAM usage. Can anybody shed some light on it?
 
I am pretty sure, from what I have read, that recent versions have already decreased the RAM usage of deduplication; it used to be much worse than it is now. I'm not sure about the impact of fast dedup on RAM usage, of course, but I think it had already gotten slightly better there before fast dedup was introduced.

It would be really nice to see what the RAM usage looks like, though. I'd also like to hear whether Direct I/O seems to make a noticeable impact on performance/RAM usage; it seems ZFS has a lot of buffer issues that become more and more obvious with fewer disks.

I am currently working with 96 GB and I still have to limit it, and it does cause some issues in general trying to configure all the timings for flushing buffers, etc. around my specific setup to eliminate stalls.
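For anyone else juggling the same thing, capping the ARC on a PVE host is done with the zfs_arc_max module parameter; the 16 GiB value below is just an example:

```
# /etc/modprobe.d/zfs.conf -- cap the ARC at 16 GiB (value in bytes, example only)
options zfs zfs_arc_max=17179869184
```

Apply it with update-initramfs -u -k all and a reboot, or change it on the fly by writing the same value to /sys/module/zfs/parameters/zfs_arc_max.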
 
Just reading this while looking for information about the RAM usage.

https://github.com/openzfs/zfs/discussions/15896

Apparently RAM usage has been reduced yet again, just not dramatically (but seeing as it was already down to something like 1-2 GB per TB, that is decent enough, it seems).

The fast dedup feature also implements dedup table quotas, so you can now set a memory usage quota for the table, preventing it from just eating up all your RAM.

And of course, as Waltar said, fast dedup will coexist with the original DDT setups, so you can have fast dedup and classic dedup on the same pool; it will require copying data to a new dataset to add it to a fast dedup table rather than a standard dedup table.

There seem to be a number of improvements just within the dedup feature, and they come with some new ZFS settings to manage.
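For reference, here is a minimal sketch of what that quota knob looks like, assuming the pool property is named dedup_table_quota as in the upstream OpenZFS 2.3 release notes (tank is a placeholder pool name; confirm the property names with the zpoolprops man page on your build):

```
# Cap the dedup table at a fixed size (placeholder pool name "tank"):
zpool set dedup_table_quota=10G tank

# Or let ZFS size it from a dedicated dedup vdev, if the pool has one:
zpool set dedup_table_quota=auto tank

# See the quota and what the DDT currently occupies:
zpool get dedup_table_quota,dedup_table_size tank
```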
 
In our testing with ZFS 2.3 and Linstor on NVMe devices, we've achieved impressive results, particularly when leveraging NVMe-oF (RDMA) over a 100 Gbit ring topology in a three-node cluster: a 2x mirror on nodes 1 and 2, accessed from diskless node 3. Our primary storage configurations include NVMe and SAS SSDs, with the most significant performance gains coming from Gen4 and Gen5 NVMe drives - a result that was expected but still remarkable: up to 2x, with reliably constant 100 Gbit wire speed, and even more locally.

Key Takeaways from Our Testing

  • Diskless Node Access: The real game-changer was accessing storage from a diskless node via NVMe-oF (RDMA), maximizing throughput and minimizing latency.
  • Network Efficiency: Running on a 100 Gbit ring topology ensured ultra-low-latency data access across the cluster and is very affordable.
  • Stability & Reliability: Since the beginning of our tests with ZFS 2.3-rc1, we have encountered zero issues - a testament to its robustness.
  • Dataset-Level Control: One of ZFS's strengths is the ability to control storage behavior dynamically at the dataset level during runtime (see the sketch after this list).
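To illustrate that last point, a minimal sketch of the kind of per-dataset, at-runtime tuning we mean (the dataset name tank/vmstore is a placeholder, and the direct property assumes the Direct I/O support introduced with OpenZFS 2.3):

```
# Dataset properties can be changed on a live pool, no downtime needed
# ("tank/vmstore" is a placeholder dataset name):
zfs set compression=zstd tank/vmstore
zfs set recordsize=16K tank/vmstore

# With ZFS 2.3, Direct I/O can likewise be toggled per dataset
# (assumes the upstream property name: direct=standard|always|disabled):
zfs set direct=always tank/vmstore

# Verify the current values:
zfs get compression,recordsize,direct tank/vmstore
```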

Ready for Production?

Based on our experience, we confidently recommend using ZFS 2.3 in production environments. The combination of high-speed networking, NVMe-oF, and Gen5 NVMe drives delivers exceptional performance while maintaining stability. However, this is our personal experience, not a paid endorsement - just a genuine and personal recommendation from the field.
It's not often that blindly following instructions nets me a win; I had already accepted that I'd be redoing my pool from scratch. But this worked perfectly, thank you!
 
How is it working so far for you? Notice any differences or anything? And are you trying out fast dedup? I'm really interested in Direct I/O and fast dedup; I really want to upgrade to ZFS 2.3, but I cannot afford the risk of data corruption since I have no backups of my current pool; it partly is the backup, lol.
 
Hi!

Is it compatible with PBS? I would use it to expand my RAIDZ volume...

Have a great day!
Hi, I have briefly tested it on PBS; it works with the 6.12 and 6.14 kernels, just follow the same guide as for PVE. Make sure you have updated everything and done a clean reboot (first step in the guide).

```
root@pbs:~# dkms status
zfs/2.3.1, 6.14.0-2-pve, x86_64: installed
zfs/2.3.1, 6.8.12-10-pve, x86_64: installed
```

@Devian242 please let me know if you need help on this.

M.