Upgrading ZFS to 2.2.0-rc1?

magnayn

New Member
Oct 20, 2021
Has anyone tried to upgrade to the latest bleeding-edge ZFS?

I get quite far, but hit problems when trying to install:

Code:
Preparing to unpack libnvpair3_2.2.0-0_amd64.deb ...
Unpacking libnvpair3 (2.2.0-0) ...
Replaced by files in installed package libnvpair3linux (2.1.12-pve1) ...
dpkg: dependency problems prevent configuration of libnvpair3:
libnvpair3linux (2.1.12-pve1) breaks libnvpair3 and is installed.

Suggestions welcome..
 
It's an impossible task.
How do you expect to upgrade to zfs 2.2-rc1 without recompiling the kernel?

What I mean is, it's not just a matter of updating a few packages and that's it :)
 
I know it's not just packages that need updating - I'm following the ZFS compilation guide, building from the tar.gz (which, incidentally, isn't a kernel recompile - but it wouldn't be an issue as far as I'm concerned even if it were).
 
> I know it's not just packages that need updating - I'm following the ZFS compilation guide, building from the tar.gz (which, incidentally, isn't a kernel recompile - but it wouldn't be an issue as far as I'm concerned even if it were).
Okay, then for your error...
It just says that you need to remove libnvpair3linux before installing your own libnvpair3 2.2.0.

But I guess that is probably going to remove the proxmox-ve metapackage as well, and apt will ask you to uninstall Proxmox...
I guess that will happen.

However, proxmox-ve is just a metapackage - but that also means you'd need to manually install everything back again :-(
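For completeness, if someone really wants to push through anyway, the rough shape would be: remove the conflicting *linux userland packages, install the self-built debs, then pull the metapackage back in. This is a sketch only - the package names are from the 2.1 stack, the .deb path is illustrative, and this will temporarily break the dependency tree:

```shell
# Remove the Proxmox-built ZFS userland packages that conflict with the
# self-built 2.2 debs (this also drags out the proxmox-ve metapackage):
apt remove libnvpair3linux libuutil3linux libzfs4linux libzpool5linux

# Install the self-built packages (path and names illustrative)...
dpkg -i ./libnvpair3_2.2.0-0_amd64.deb   # ...plus the other built .debs

# ...then reinstall the metapackage so the system is a proper PVE again:
apt install proxmox-ve
```

Whether apt will accept proxmox-ve alongside the replacement libraries depends on the dependency declarations in the self-built packages, so treat this as a starting point, not a recipe.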
 
I tried to compile the default 6.2.16-5 kernel with ZFS 2.2-rc2...
And failed myself, too.

Code:
# Clone the Proxmox kernel build tree
git clone https://github.com/proxmox/pve-kernel.git
cd pve-kernel

# Point the submodules at the Proxmox mirrors and force-update the ZFS one
git submodule set-url submodules/zfsonlinux https://git.proxmox.com/git/zfsonlinux.git
git submodule set-url submodules/ubuntu-kernel https://git.proxmox.com/git/mirror_ubuntu-kernels.git
git submodule update --force --init --remote submodules/zfsonlinux

# Make the build's own submodule update step force-update as well
sed -i 's|git submodule update --init --recursive $(ZFSONLINUX_SUBMODULE)|git submodule update --force --recursive --init --remote $(ZFSONLINUX_SUBMODULE)|' Makefile

# Swap the ZFS upstream to the openzfs 2.2 release branch
cd submodules/zfsonlinux/
git submodule set-url zfs/upstream https://github.com/openzfs/zfs.git
git submodule set-branch --branch zfs-2.2-release zfs/upstream
git submodule update --force --init --recursive --remote upstream

cd ~/pve-kernel
make

In the end it fails to apply the zed patches, which is tbh an easy fix, but the main issue is that my "upstream" ZFS 2.2 gets compiled as a zfs 2.1.12 package.
Though it actually is 2.2-rc2.

The funniest thing, though: it looks to me like Proxmox uses zfs 0.8 as upstream...
I don't even understand how zfs 0.8 turns into zfs 2.1.12 during compilation; it makes no sense, since there is nothing that grabs the sources from openzfs again.
If I had to put it bluntly, it looks to me like we are using zfs 0.8 that is just versioned as zfs 2.1.12 - otherwise I don't get it.

I can't find out where that 2.1.12 versioning comes from, and have given up for now xD
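One guess at where that version string comes from (an assumption I haven't verified against this exact tree): the ZFS build system reads its package version from the META file at the top of the zfs source, so a stale or wrong submodule checkout would keep stamping the packages as 2.1.12 regardless of which branch you think you checked out. A minimal sketch of the check, run against a sample META file so it's self-contained:

```shell
# The ZFS build takes its version from the META file in the source root.
# Create a sample META (contents modeled on the real file's layout):
cat > /tmp/META.sample <<'EOF'
Meta:          1
Name:          zfs
Version:       2.1.12
Release:       1
EOF

# Extract the version the build system would stamp onto the packages;
# against a real checkout this would be: awk '/^Version:/ {print $2}' META
awk '/^Version:/ {print $2}' /tmp/META.sample
# -> 2.1.12
```

If the submodule really is on the 2.2 branch but META still says 2.1.12, then the checkout (or a cached source tarball) is the thing to chase.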

Cheers :)
 
It's a pity as there seem to be some very useful features in 2.2 - but I also drew a blank.

Hopefully it'll get a full release soon and supported packages should start appearing.
 
> It's a pity as there seem to be some very useful features in 2.2
The "killer" feature for Proxmox is native container support, which will require some code changes in Proxmox to make use of. The other benefit, fully adaptive ARC eviction, I wouldn't touch for at least three minor point releases past the major. It rarely pays to be a guinea pig.
 
> The "killer" feature for Proxmox is native container support, which will require some code changes in Proxmox to make use of. The other benefit, fully adaptive ARC eviction, I wouldn't touch for at least three minor point releases past the major. It rarely pays to be a guinea pig.
I would say the same, but I finally want to use my docker LXC containers on ZFS :)

I don't want to wait forever for that; every point release takes around 1-3 months.

Until OpenZFS calls it stable will probably take until rc8, and we are at rc2 at the moment.

That means you'd have to be patient enough to wait roughly another year from now :)

I am not xD
 
I really would like to use ZFS on LXC - even at the risk of it being unstable.
ZFS is already fully functional when used as a zvol for LXC. What functionality are you missing that would justify "the risk of it being unstable"?

Just because you'd update your ZFS stack doesn't mean the LXC toolset is ready yet, so it's not really an option anyway.
> Don't want to wait for that forever, every point release takes around 1-3 months.
Since it doesn't actually offer you anything that you can't do now, what's the hurry?
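For anyone unsure what their stack is doing right now, checking which storage driver docker actually selected is a one-liner (guarded so it degrades gracefully where docker or its daemon is unavailable):

```shell
# Print docker's active storage driver (overlay2, zfs, vfs, ...):
if command -v docker >/dev/null 2>&1; then
    docker info --format '{{.Driver}}' 2>/dev/null \
        || echo "docker installed, but daemon not reachable"
else
    echo "docker not installed"
fi
```

On a zfs-backed root without working overlayfs support, docker commonly ends up on the slow `vfs` fallback driver, which is how people usually notice the problem.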
 
> The Linux container support for OpenZFS 2.2 includes IDMAPPED mounts in the user name-space, OverlayFS support, and Linux namespace delegation support.
 
Please explain how you intend to use those functions.
Won't the overlayfs driver that docker uses use those functions automatically?
The driver already tries to use them on zfs 2.1, but fails, and you actually get errors...
 
> Won't the overlayfs driver that docker uses use those functions automatically?
Yes, it would (or could, if you have the right docker version). My point is that you don't actually need this with docker on a zvol; it does have some benefits, but unless you know what those are for your use case, you're not likely to benefit from them.

Not everything new and shiny is a must-have, and new often has rough edges.
> I think if you peruse this thread you'll see why people are eager for this.
It's just you and @Ramalama, and neither of you has shown any actual use for it.
 
> Yes, it would (or could, if you have the right docker version). My point is that you don't actually need this with docker on a zvol; it does have some benefits, but unless you know what those are for your use case, you're not likely to benefit from them.
>
> Not everything new and shiny is a must-have, and new often has rough edges.
>
> It's just you and @Ramalama, and neither of you has shown any actual use for it.
I don't even understand what you're trying to achieve or talking about.
Other than telling us it's useless to want zfs 2.2?

I don't get what you don't understand.
Atm, with zfs 2.1, there are docker images that simply won't work inside LXC containers, if the docker images have files with extremely long names or too many nested subdirectories.

The best example in this case is something like Speedtest-tracker, which will simply error out even during image extraction.

Meanwhile other docker images, like Heimdall etc., work fine inside unprivileged containers.

That's finally fixed with zfs 2.2 - what's there not to understand?
You're slowly starting to annoy me, because I simply don't see any useful content in your posts.

What are you trying to say? Bind-mount a zfs directory to /var/lib/docker inside the container, or what?

Or are you saying to create an ext4-formatted zvol to host the container's storage on it?

Or what? What's the point of your posts?
 
Maybe I was too harsh; I'll try it differently now.

ZFS 2.2 brings support for:
- RENAME_WHITEOUT (the most important part)
- xattr fixes (second most important) (xattrs are already supported, just buggy)
- zfs delegation to user namespaces

Basically everything that @magnayn said.

Most importantly, RENAME_WHITEOUT will enable native overlay diff support in docker, automatically.

The overlayfs driver ALREADY supports it!
That's why you get errors like:
Code:
overlayfs: upper fs does not support RENAME_WHITEOUT.
ON YOUR PROXMOX HOST...

Or errors like:
Code:
overlayfs: fs on '/var/lib/docker/overlay2/l/HI6FZUS3BDJWMW4MG27LBE2BKU' does not support file handles, falling back to xino=off.
This just indicates that ZFS lacks full overlayfs support.
That's where all the other fixes come in.
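To see whether a host is hitting exactly these messages, grepping the kernel log is enough; a guarded one-liner (dmesg can be restricted to root on some systems, hence the fallback):

```shell
# Show any overlayfs complaints from the kernel ring buffer:
dmesg 2>/dev/null | grep -i 'overlayfs' \
    || echo "no overlayfs messages found (or dmesg is restricted)"
```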

And the overlay driver is already here and working!
And you don't need any special docker version!
And you don't need any LXC container support or whatever you said.

All we need is zfs 2.2!
No work from the Proxmox team on LXC or anything else - just a kernel with zfs 2.2 and the updated zfs utils packages, which (usually) get autocompiled during kernel compilation anyway.
At least that was the case with pve-edge.

Once I have a lot of time, I will try again to compile the ZFS kernel from the Proxmox kernel sources.
- It's just not that easy: the pve-kernel tree appears to be based on openzfs 0.8, and I don't understand at the moment where openzfs 2.1.12 gets pulled in during kernel compilation. At least there is nothing in the Makefile!
- Additionally, the compilation defaults to zfs 2.1.12 no matter what source code was compiled. I couldn't figure out where that version number comes from either.
Once I have time and have solved those two mysteries, nothing will be in the way of a small autocompile script, like I made earlier.
However, that's not important.

What's important is only that you hopefully finally understand that we don't need anything other than zfs 2.2.
And we obviously don't care whether it ends up being stable or not. No one is telling or forcing you to use that kernel with upstream openzfs.

Have a good day.
 
ZFS 2.2 will be included in our repo once it has been released (it's still at RC3!) and tested internally, like always. If you experiment yourself, please include that information in all threads where you ask for help with problems, so that we know not to waste our time debugging broken builds.
 
> ZFS 2.2 will be included in our repo once it has been released (it's still at RC3!) and tested internally, like always. If you experiment yourself, please include that information in all threads where you ask for help with problems, so that we know not to waste our time debugging broken builds.
That's actually an amazing clarification, thx Fabian.
For me that means a timeframe of roughly 3-5 months.

OpenZFS seems to have shorter intervals between RC steps on 2.2 compared to 2.1, but who knows, it's pretty unpredictable :)
Not like Linux kernels, at least, where the interval between RC steps is almost exactly one week xD

However, thanks for the clarification. Atm zfs 2.2 is surely not a priority for the Proxmox team, because of the kernel 6.2 issues people are having with KSM or io_uring, which are hard to debug.

And yes! If we test zfs 2.2 here at some point, no one should report issues anywhere other than in this thread tbh.
If we want to play guinea pigs, we shouldn't steal other people's time, at least :)

Cheers
 
Hello Ramalama and all,

Effectively, ZFS 2.2.0 is an important feature for me for the same reasons (IDMAPPED mounts).
However, I now need to create Docker containers for a pre-production infrastructure.

I would like to ask a question:
While waiting for the release of ZFS 2.2.0, do you think it makes sense to work with privileged LXC "pods" (Docker) today?
And when ZFS 2.2.0 becomes available in PVE, will it be simple (i.e. without internal changes to the Docker containers) to migrate them to new unprivileged LXC containers in order to go into production?
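From what I have read so far, the usual migration route would be backup-and-restore with the unprivileged flag; is a sketch like this (container IDs, storage name, and dump path are hypothetical) the right idea?

```shell
# Back up the privileged container (ID 101 is hypothetical):
vzdump 101 --storage local --mode stop

# Restore it under a new ID as an unprivileged container:
pct restore 102 /var/lib/vz/dump/vzdump-lxc-101-*.tar.zst --unprivileged 1
```

My understanding is that file ownership inside the container gets shifted on restore, so services pinned to specific UIDs may need attention - corrections welcome.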

I am a beginner on the subject, and I thank you in advance for your opinions.
 
