Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

Just spent 8 hours dealing with a failed 8 to 9 upgrade, and then a 9.1 clean reinstall that would not boot no matter what I did. Maybe related to the posts above on the iommu/etc. changes -- I did try those, but still no luck. Right after the bootloader, I get a black screen and nothing else. I ended up downgrading to 8.4 and restoring from backups / copied-off VM data, since it was only the rootfs that was hosed.

Using the 9.1 ISO:
  1. Upgrade process resulted in an unbootable system.
    1. Unlike most other posts, the bootloader was not clobbered / it worked just fine -- the kernel was dying on boot before it could even output anything to the console.
  2. Rescue Boot from the install ISO does not work (it can't find rpool / use ZFS); worked around it by using the installer w/ debug mode to get a shell (rough commands sketched below).
    1. Backed up VM data / ZFS volumes from that debug shell, using the installer as a live environment.
  3. Clean install of Proxmox 9.1 from ISO via Ventoy resulted in an unbootable system, trying the ext4, XFS and ZFS rootfs options.
  4. Stripped the machine down to minimal HW (removed NVMe, all PCIe add-in cards, all but one SSD) to see if there was any effect -- no luck.
  5. Toggled BIOS SR-IOV settings, no effect.
  6. Attempted to mount the install and update the kernel cmdline + module config to try the iommu / huge_pages settings per above, no luck (see the cmdline sketch below).
  7. Gave up and reinstalled using the Proxmox 8.2 ISO from Ventoy, no issues.
  8. Was able to restore LXC backups from archive + zfs send/recv quite quickly (see the restore sketch below)!
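
For anyone in the same spot, the step 2 workaround boils down to something like this -- a sketch only; the dataset name (rpool/data/vm-100-disk-0) and backup mount point (/media/backup) are just examples from a default PVE ZFS layout, not my exact ones:

  # import the pool under a temporary root, without mounting anything over /
  zpool import -N -R /mnt rpool
  zfs list -r rpool/data

  # snapshot and stream a guest volume out to an external disk
  zfs snapshot rpool/data/vm-100-disk-0@rescue
  zfs send rpool/data/vm-100-disk-0@rescue > /media/backup/vm-100-disk-0.zfs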
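
The step 6 attempt, for reference -- a sketch assuming a ZFS-root install that boots via proxmox-boot-tool (a GRUB-based install would edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and run update-grub instead), continuing from the rescue shell above:

  # mount the root dataset (default name on a PVE ZFS install) and chroot into it
  zfs mount rpool/ROOT/pve-1
  mount --rbind /dev /mnt/dev; mount --rbind /proc /mnt/proc; mount --rbind /sys /mnt/sys
  chroot /mnt

  # add e.g. intel_iommu=on iommu=pt to the kernel command line, then re-sync the boot entries
  nano /etc/kernel/cmdline
  proxmox-boot-tool refresh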
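
And the step 8 restore side is basically just stock tooling (the VMIDs, archive name and storage are examples):

  # restore a container from a vzdump archive onto ZFS storage
  pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-backup.tar.zst --storage local-zfs

  # push a saved volume stream back into the pool
  zfs receive rpool/data/vm-100-disk-0 < /media/backup/vm-100-disk-0.zfs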
Functional install on Proxmox 8 is at:

Linux myrkr 6.8.12-17-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-17 (2025-11-21T11:16Z) x86_64 GNU/Linux

The server is an X11SSH-CF, so Kaby Lake-era (E3-1285 v6). I use PCI passthrough for my firewall VM (which made this outage more difficult...) but nothing else. As that functionality is critical to me, I'll keep an eye on bug reports / how people do with passthrough on the newer kernels on legacy hardware.

The 8 to 9 upgrade worked without issue on my Lenovo USFF box with a Rocket Lake CPU (i9-11900T). Both machines use ZFS root pools. The functional install on the newer box, after 8 to 9, is:

Linux dagobah 6.17.4-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.4-1 (2025-12-03T15:42Z) x86_64 GNU/Linux

A few hours before this, I had installed vanilla Debian 13 onto another machine (X11SSL-CF, so same-generation chipset/CPU) without any issues, to use as a NAS. That machine is on kernel:

Linux hoth 6.12.57+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.57-1 (2025-11-05) x86_64 GNU/Linux

I would stay far away from 9.x, or at least from the bundled 6.17.x kernel, if you're on Skylake-vintage Intel hardware -- something goofy is up. If I get my patience back I may try to upgrade again but pin the kernel to 6.8.12 if that's possible, or maybe 6.12.x.
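
If I do try again, the pinning itself should just be something like the following -- a sketch; I haven't verified that the 6.8 series is still installable from the PVE 9 repos:

  # install/keep the older kernel series if the package is still available
  apt install proxmox-kernel-6.8

  # see which kernels are registered with the bootloader
  proxmox-boot-tool kernel list

  # pin the known-good version so it stays the default across updates
  proxmox-boot-tool kernel pin 6.8.12-17-pve

  # (proxmox-boot-tool kernel unpin to go back to the newest)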
 
So I picked up a Minisforum MS-01 this month to set up a virtualized OPNsense instance to replace my old router (an HP t740 TC). It had been working well until today. I needed to reboot the MS-01 and it didn't boot; hooked up a screen and saw superblock errors. Tried repairs, nothing worked, so I rebooted into the previous kernel 6.17.2-1-pve and it booted. The kernel giving me grief was 6.17.2-2-pve.

Just wanted to share the info.
 
It seems like only ZFS is affected?

I saw weird superblock errors on replicated volumes.
I simply deleted the replication and recreated it (so that the volumes get cleanly created again), and that fixed my issues.

But for my part, I thought this was simply some weird bug that I'd probably had for a while and just never noticed...
However, for me it's fixed by recreating the replication, and it hasn't happened again since.

And it wasn't all replicated volumes, just some older Windows 2008/2012 ones.
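
For anyone who'd rather do that from the CLI than the GUI, the replication jobs can be managed with pvesr -- a sketch, with an example job ID (100-0), target node (pve2) and schedule:

  # list jobs and their current state
  pvesr list
  pvesr status

  # drop the problematic job (by default this also removes the replicated data on the target)
  pvesr delete 100-0

  # recreate it so the target volumes are built fresh
  pvesr create-local-job 100-0 pve2 --schedule "*/15"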