ZFS bug causing kernel panic when accessing snapshots - fix in OpenZFS v2.4.0, is there a simple upgrade?

yourfate
Jan 2, 2026
Since Proxmox upgraded to ZFS 2.3.4 I'm having problems accessing snapshots: it causes kernel panics, and the access hangs indefinitely.
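
In case it helps anyone confirm they're affected, this is roughly how I check (the dataset path is an example from my setup; listing the snapshot directory is what hangs for me):
Bash:
# affected hosts report zfs-2.3.4-pve*
zfs --version
# snapshots are exposed under the hidden .zfs directory of each dataset;
# on an affected kernel, listing one hangs indefinitely
ls /rpool/data/subvol-100-disk-0/.zfs/snapshot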

Part of my backup strategy is automated backups of container snapshots, and this ZFS bug breaks that.

The fix: https://github.com/openzfs/zfs/pull/17943

The fix was merged into ZFS main in November, and is part of the OpenZFS v2.4.0 release.

Is there a simple way to upgrade to the newer ZFS (testing repo or something)?

Otherwise, I might have to build my own kernel with the newer ZFS for a while. Is there already a good guide for this? I have run Linux with custom kernels before, but not with root on ZFS, and not on Proxmox.

I have also submitted a bug report in your Bugzilla: https://bugzilla.proxmox.com/show_bug.cgi?id=7199
 
Also, this fix has been merged into 2.4.0:

https://github.com/openzfs/zfs/issues/16101

This also allows ZVOL blocks to be placed on the special device, based on the volblocksize and the special_small_blocks property. The code no longer ignores blocks belonging to ZVOLs.
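
For context, a rough sketch of how this would be used once 2.4.0 is available (pool and dataset names are examples, and the pool must already have a special vdev):
Bash:
# let small blocks up to 16K go to the special vdev
zfs set special_small_blocks=16K rpool/data
# a new ZVOL whose volblocksize is <= special_small_blocks will now
# have its blocks stored on the special device
zfs create -V 32G -o volblocksize=16K rpool/data/vm-100-disk-1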

Can't wait for 2.4.0 to be available for Proxmox 9.1.4+
Can we have any release dates? :)
 
EDIT: 2026-01-14: Removed my OpenZFS repo, since the official Proxmox repo now has 2.4.0 on it. Updated the instructions to compile the current or past kernels, with instructions on how to update OpenZFS to 2.4.0 on past kernel versions.

NOTE: This is for testing purposes only. It is not official Proxmox, but it is as close to the official way as possible.

Proxmox has kernel 6.17.9-1-pve in their Git repo, but it isn't in any of their package repos yet, not even pve-test. So for OpenZFS 2.4.0, you have 2 options:
  1. Compile the currently released 6.17.4-2-pve and add OpenZFS 2.4.0 support. -- This will keep all your current hardware support.
  2. Compile the next version, 6.17.9-1-pve, which has some new hardware support and a bunch of kernel bug fixes. The Proxmox team is probably still testing it, and will release it when they deem it stable.

To boot the compiled Proxmox kernel, you will need to disable Secure Boot in your BIOS: the signing keys for the kernel are locked down, so this procedure does not produce a signed kernel.
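
If you are not sure whether Secure Boot is currently enabled, you can check from the Proxmox shell:
Bash:
# reports "SecureBoot enabled" or "SecureBoot disabled" on UEFI systems
mokutil --sb-state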


Step 1:
Make a Debian 13 container with 50 GB of storage, at least 4 GB of RAM and 2 GB of swap, and give it as many cores as you can spare, as the build will use all of them.
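
If you prefer doing this on the command line, something like the following on the Proxmox host should be roughly equivalent (container ID, template name, storage, and network settings are examples; adjust them to your setup):
Bash:
# hypothetical template and storage names; use whatever you have locally
pct create 121 local:vztmpl/debian-13-standard_13.1-1_amd64.tar.zst \
  --hostname zfs-build --memory 4096 --swap 2048 --cores 8 \
  --rootfs local-zfs:50 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp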

Then on Proxmox, I recommend that you make a shared folder and map it into your container. Change CONTAINERID to match your container number:
Bash:
mkdir -p /shared/deb
chmod 777 /shared/deb
CONTAINERID=121
pct set $CONTAINERID -mp0 /shared/deb,mp=/shared/deb

Then ssh into your Debian 13 container, or do a pct enter $CONTAINERID.
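
Before starting a long build, it is worth confirming that the bind mount is visible and writable from inside the container:
Bash:
df -h /shared/deb
touch /shared/deb/.write-test && rm /shared/deb/.write-test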

Step 2:
Set up the correct repositories:
Bash:
cat > /etc/apt/sources.list.d/debian.sources << 'EOL'
Types: deb deb-src
URIs: http://deb.debian.org/debian
Suites: trixie trixie-updates
Components: contrib main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg

Types: deb
URIs: http://security.debian.org
Suites: trixie-security
Components: contrib main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOL

cat > /etc/apt/sources.list.d/proxmox.sources << 'EOL'
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg

Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-test
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOL

wget https://enterprise.proxmox.com/debian/proxmox-archive-keyring-trixie.gpg -O /usr/share/keyrings/proxmox-archive-keyring.gpg
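
As an optional sanity check (assuming gpg is installed in the container), you can confirm the keyring downloaded correctly before updating:
Bash:
# should print the Proxmox archive key, not an error about invalid data
gpg --show-keys /usr/share/keyrings/proxmox-archive-keyring.gpg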

Step 3:
Update and install needed tools:
Bash:
apt update && apt upgrade
# git is needed to clone the kernel repo in the next step
apt install devscripts sudo git

Clone the Proxmox Kernel (Latest):
Bash:
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel

Step 4:
If you are compiling the latest version of the kernel from the Proxmox Git (Option 2), skip this step!!

If you are compiling an older version of the kernel, then run this:
Bash:
# Pick one of these for an older version of the pve kernel. You can browse the commits for other kernel versions.
git checkout f69902f1a94f47ba64146751939ce82f0efff077  # For 6.14.11-3-pve
git checkout fba0ef932bdb87a2acc5ffb0466bcf0ce81bf98a  # For 6.17.4-1-pve
git checkout 6e6197515ceed69bdbaf63a004acd70531a6665d  # For 6.17.4-2-pve

# This updates OpenZFS to 2.4.0 for kernels older than 6.17.9-1-pve.
git submodule update --init submodules/zfsonlinux
cd submodules/zfsonlinux/
git checkout 58b05906972f7f89967c8b6556c15353edda0991
cd ../../
git add -A
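
If you need the hash for a kernel version not listed above, the release commits are usually easy to spot in the log (the exact message wording may vary, so browse the output):
Bash:
# list recent version-bump commits with their hashes
git log --oneline --grep='bump version'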

Step 5:
Now we install the Dependencies:
Bash:
# initialize all submodules first so the ZFS packaging files exist
git submodule update --init --recursive
mk-build-deps -ir submodules/zfsonlinux/debian/control
make build-dir-fresh
mk-build-deps -ir proxmox-kernel-*/debian/control

Step 6:
Now we build the kernel and move the .deb packages to the shared folder after it finishes. You may want to grab something to eat; this will take a while.
Code:
make deb && mv *.deb /shared/deb/
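
When the build finishes, a quick check that the packages actually landed in the shared folder (file names will vary with the version you built):
Bash:
ls -lh /shared/deb/*.deb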

Step 7:
Now we need to compile the OpenZFS tools to update them, and move the debs.
Bash:
cd submodules/zfsonlinux/
make deb && mv *.deb /shared/deb/
rm /shared/deb/*dbgsym*

Step 8:
Install the kernel and OpenZFS. On your Proxmox server, run the following to update the kernel and OpenZFS:
Code:
dpkg --remove --force-depends libzfs6linux
dpkg --remove --force-depends libzpool6linux
apt install /shared/deb/{libnvpair3linux,libzfs7linux,libzpool7linux,libuutil3linux,zfs-initramfs,zfs-zed,zfsutils-linux}*.deb /shared/deb/proxmox-{headers,kernel}-6.17.*-pve_6.17.*_amd64.deb

# If you compiled a specific kernel, you will want to hold it here, or apt will want to replace it with the official version that does not have OpenZFS 2.4.0 in it. Otherwise, skip this, and when the current kernel is publicly released, it will update to the official build.
apt-mark hold proxmox-{headers,kernel}-[Current Version here]
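
You can confirm the hold took effect with:
Bash:
apt-mark showhold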

Step 9: Finished!
Reboot! You will have a working OpenZFS 2.4.0. You should see this when you run zfs --version or zpool --version:
Bash:
root@proxmox:~# zfs --version
zfs-2.4.0-pve1
zfs-kmod-2.4.0-pve1
root@proxmox:~# zpool --version
zfs-2.4.0-pve1
zfs-kmod-2.4.0-pve1
 
Well, my post has already been made obsolete in less than 24 hours!!!! The latest update for 6.17.9-1-pve on the Proxmox Git has OpenZFS 2.4.0 on it now! Just skip the step of replacing the OpenZFS part. It should be in the testing repo soon! But for now, you can compile it yourself and have official ZFS 2.4.0 support!
 
Hmm, I don't really want to switch to testing on a production machine tho.

Is there a sane, clean, reversible way to install just that kernel + the userspace tools that come with 2.4.0?
 
You don't need to switch to the pve-test repo; just compile the kernel you want. I updated the instructions for the yet-to-be-released kernel with 2.4.0. Alternatively, roll the pve-kernel repo back to the commit for the kernel version you want, and update the zfsonlinux submodule to the commit from the official Proxmox Git repo that brings OpenZFS 2.4.0.
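
As for making it reversible: I have not had to roll back myself, but the rough outline would be to clear the holds, reboot into the stock kernel, and let apt take over again (untested sketch):
Bash:
# see what is currently held, then clear the holds
apt-mark showhold
apt-mark showhold | xargs -r apt-mark unhold
# once official packages with a newer version hit the repo,
# they will replace the custom builds
apt update && apt full-upgrade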

EDIT: Just tested it with 6.14.11-3. Here is some proof:
Code:
root@pve:~# zfs --version
zfs-2.4.0-pve1
zfs-kmod-2.4.0-pve1
root@pve:~# uname -a
Linux pve 6.14.11-3-pve #1 SMP PREEMPT_DYNAMIC PMX 6.14.11-3 (2025-09-22T10:13Z) x86_64 GNU/Linux
root@pve:~# zpool --version
zfs-2.4.0-pve1
zfs-kmod-2.4.0-pve1
root@pve:~#
 