ZFS 2.3.0 has been released, how long until it's available?

How is it working so far for you? Notice any differences or anything? And are you trying out fast dedup? I'm really interested in direct I/O and fast dedup, and I really want to upgrade to ZFS 2.3, but I cannot afford the risk of data corruption; I have no backups of my current pool, it partly is the backup. lol
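From what I understand, just installing the 2.3 packages doesn't touch the on-disk format until you run zpool upgrade, which is the one-way part, so I figure I'd check something like this first (a rough sketch; "tank" is just a placeholder for my pool, and the feature names are the ones from the OpenZFS 2.3 release notes):

```
# After installing ZFS 2.3.x, the pool keeps running with its old feature set
# until the new flags are explicitly enabled, so the tools can be tested first.
# "tank" is a placeholder pool name.

zpool status tank          # notes if supported features are not yet enabled
zpool upgrade              # lists pools with new features available (read-only)

# Check individual 2.3 feature flags before committing
# (names as in the OpenZFS 2.3 release notes):
zpool get feature@fast_dedup,feature@raidz_expansion tank

# Only when comfortable (and ideally with a backup) comes the one-way step:
# zpool upgrade tank
```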
So far we have upgraded more than 70 instances using our repo and have also upgraded all previous 2.3.0 installs to 2.3.1. We have also made it compatible with PVE 8.4 and kernel 6.14; about 20 instances are already running PVE 8.4 + kernel 6.14.

We are happy to help if anyone needs support with this repo.

M.
 
Come on already Proxmox devs, people be getting ready to riot over ZFS 2.3.1 not even landing in Testing yet. It should not take more than 1 business quarter to implement a feature upgrade like this when it's eagerly anticipated. Drop it on us already!
 
I could bet ZFS 2.3.2 will be released this week ... maybe that one will then end up being the first 2.3 edition for PVE ... :)
 
Come on already Proxmox devs, people be getting ready to riot over ZFS 2.3.1 not even landing in Testing yet. It should not take more than 1 business quarter to implement a feature upgrade like this when it's eagerly anticipated. Drop it on us already!
Right, we are all impatiently waiting. I keep checking the updates daily, sometimes multiple times; it would be nice to see it in the test branch even if it isn't certain for production yet. I am still very tempted to install the ZFS from this repo, haha. If it takes much longer I will probably do it.
I could bet ZFS 2.3.2 will be released this week ... maybe that one will then end up being the first 2.3 edition for PVE ... :)
Hopefully, haha. We totally need those features; they will solve a lot of problems, and there will probably be a lot fewer people in the forum complaining about ZFS being slow. I know I've contributed to that while trying to fix it... haha
 
Hopefully, haha. We totally need those features; they will solve a lot of problems, and there will probably be a lot fewer people in the forum complaining about ZFS being slow. I know I've contributed to that while trying to fix it... haha
Haha, yeah, we will see ... I'm curious about any perf boosts to "normal" ZFS usage (hopefully some will report improvements), as I don't think there are many guys and girls who use deduped zvols today and will switch to fast dedup ... And it brings new problems too, like raidz expansion, which takes very long, a problem/feature that was simply missing until now ... :)
 
Haha, yeah, we will see ... I'm curious about any perf boosts to "normal" ZFS usage (hopefully some will report improvements), as I don't think there are many guys and girls who use deduped zvols today and will switch to fast dedup ... And it brings new problems too, like raidz expansion, which takes very long, a problem/feature that was simply missing until now ... :)
True, many avoid dedup because it's so painfully slow and usually not worth it, but more people will probably try fast dedup; from the benchmarks I've seen, the performance loss is very minimal compared to the old method, which almost completely destroys performance. But I think direct I/O will also really help many, especially NVMe users (I am hoping it helps with the buffer issues and stall-outs I get when writing a large number of files directly to the drive). For me at least, if dedup didn't have such an impact and the stall-outs stopped, ZFS would be far more usable. I could definitely see raidz expansion being potentially problematic and also taking forever to test properly, but it seems like a really great/needed feature, even if it comes with some bugs at first.
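Once it lands I'm planning to try something like this for direct I/O; as I understand it, it's a per-dataset property rather than a pool feature, so it should be easy to flip back if it misbehaves (a sketch, assuming the property is called "direct" as in the OpenZFS 2.3 release notes; "tank/vmdata" is just a placeholder dataset):

```
# Direct I/O in 2.3 is a per-dataset property ("direct"), not a pool feature,
# so it can be reverted if it misbehaves. "tank/vmdata" is a placeholder.
zfs get direct tank/vmdata            # default is "standard" (honour O_DIRECT requests)
zfs set direct=always tank/vmdata     # force direct I/O even without O_DIRECT
zfs set direct=standard tank/vmdata   # revert to the default behaviour
```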
 
If y'all are gonna riot over ZFS, you all have more energy than me by an order of magnitude.

Please share your energizing secrets.
Definitely just a joke; we're all, I'm sure, quite happy with and thankful for the wonderful work the Proxmox devs do for us all. We're just excited for the new shiny, so to speak, haha.

But my energizing secret is practicing various forms of meditation, it's a lifesaver.
 
Maybe ... the new faster direct I/O is going to make cheap consumer SSDs/NVMes nearly completely unusable while toasting them instantly ... ??
For fast dedup you need more memory + CPU resources, which are gladly kept small on PVE hosts so they stay available for the VMs, which could then accidentally turn out the opposite of what was expected if those resources aren't provided ... try and test extensively :)
 
Maybe ... the new faster direct I/O is going to make cheap consumer SSDs/NVMes nearly completely unusable while toasting them instantly ... ??
For fast dedup you need more memory + CPU resources, which are gladly kept small on PVE hosts so they stay available for the VMs, which could then accidentally turn out the opposite of what was expected if those resources aren't provided ... try and test extensively :)
Oh yeah, I wouldn't use it on consumer NVMes for sure; it's part of why I still am not using it as a boot disk, mine would be toast. But it's holding up well as an LVM-Thin volume, only at 8%, and I have removed all swap files from it and tried to minimize logging to it. It was at 6%, I believe, when I installed Proxmox, and I've run a number of VMs on it for well over a year now, so pretty good at least.

You are right there for sure, it's a delicate balance; thankfully there's a quota limit for the DDT with the new fast dedup that will help. I already limit ARC, etc., to give as much as possible to the VMs. Thankfully I'm working with 96GB, but with trying to use AIs I often need 24-48GB+ in VMs, haha, kind of hard to balance out with ZFS at times. I have for sure suffered the results of ZFS RAM consumption being at war with my VMs.
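A rough sketch of what I mean, under a couple of assumptions: that the DDT quota is the "dedup_table_quota" pool property that arrived together with fast dedup in 2.3, and that the ARC cap is the usual zfs_arc_max module parameter; the pool name, file name, and sizes are just placeholders for my setup:

```
# Cap how much the fast-dedup table (DDT) may grow; "tank" and the sizes are
# placeholders, and dedup_table_quota should accept auto, none, or a size
# (property added alongside fast dedup in 2.3).
zpool set dedup_table_quota=4G tank

# Cap ARC so the VMs keep their memory (example: 16 GiB = 17179869184 bytes).
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs-arc.conf
update-initramfs -u -k all     # so the limit also applies at early boot

# Apply immediately without a reboot:
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
```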
 
Hi, I have briefly tested on PBS; it works with the 6.12 and 6.14 kernels, just follow the same guide as for PVE. Make sure you have updated everything and done a clean reboot (first step in the guide).

```
root@pbs:~# dkms status
zfs/2.3.1, 6.14.0-2-pve, x86_64: installed
zfs/2.3.1, 6.8.12-10-pve, x86_64: installed
```
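
One extra check that may be worth doing after the reboot (assuming a stock PVE/PBS module layout): confirm that the DKMS-built module is the one the kernel actually loaded and that the userland tools match it, for example:

```
cat /sys/module/zfs/version   # version of the ZFS kernel module actually loaded
zfs version                   # prints both the userland and zfs-kmod versions
```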

@Devian242 please let me know if you need help on this.

M.
Hi!

I just upgraded my PBS and right now it's expanding my pool... So far it works like a charm :)

Thanks for this!
 
You know, I would never ever use this on any of my PVE systems until it is officially released (I simply love stability), but thank you to all of you with more courage than me for testing it before I ever think of using it. Real-world benchmarks with the same hardware and PVE workloads would help everyone get a glimpse of how helpful ZFS 2.3.x could be.