Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

t.lamprecht

Proxmox Staff Member
We recently uploaded the 6.17 kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.14, but 6.17 is now an option.

We plan to use the 6.17 kernel as the new default for the Proxmox VE 9.1 release later in Q4.
This follows our tradition of upgrading the Proxmox VE kernel to match the current Ubuntu version until we reach an Ubuntu LTS release; from that point on, newer kernels are only provided as an opt-in. The 6.17 kernel is based on the Ubuntu 25.10 Questing release.

We have run this kernel on some of our test setups over the last few days without encountering any significant issues. However, for production setups, we strongly recommend either using the 6.14-based kernel or testing on similar hardware/setups before upgrading any production nodes to 6.17.

How to install:
  1. Ensure that either the pve-no-subscription or pvetest repository is set up correctly.
    You can do so via a CLI text editor or in the web UI under Node -> Repositories.
  2. Open a shell as root, e.g. through SSH or using the integrated shell on the web UI.
  3. apt update
  4. apt install proxmox-kernel-6.17
  5. reboot
Future updates to the 6.17 kernel will then be installed automatically when upgrading a node.
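After the reboot, you can verify that the node is running the new kernel. A minimal check (the version string below is taken from reports later in this thread; yours may be newer):

Bash:
uname -r
# e.g. 6.17.1-1-pve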

Please note:
  • The current 6.14 kernel is still supported and will stay the default kernel until further notice.
  • There were many changes, including improved hardware support and performance improvements all over the place.
    For a good overview of prominent changes, we recommend checking out the kernel-newbies site for 6.15, 6.16, and 6.17.
  • The kernel is also available on the test and no-subscription repositories of Proxmox Backup Server, Proxmox Mail Gateway, and in the test repo of Proxmox Datacenter Manager.
  • The new 6.17-based opt-in kernel will not be made available for the previous Proxmox VE 8 release series.
  • If you're unsure, we recommend continuing to use the 6.14-based kernel for now; if you already installed 6.17, see the sketch below for booting 6.14 again.
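To boot the 6.14 kernel again after installing 6.17, one way is to pin it with proxmox-boot-tool. A minimal sketch; the 6.14 version string is just an example taken from a report later in this thread, check the list output for what is actually installed on your node:

Bash:
proxmox-boot-tool kernel list                # show installed kernels
proxmox-boot-tool kernel pin 6.14.11-4-pve   # pin an older kernel (example version)
reboot
proxmox-boot-tool kernel unpin               # later: boot the default kernel again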

Feedback about how the new kernel performs in any of your setups is welcome!
Please provide basic details like CPU model, storage types used, whether ZFS is used as the root file system, and the like, both for positive feedback and for issues where the opt-in 6.17 kernel seems to be the likely cause.
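A quick way to collect those details for a report could look like this (a minimal sketch; the zpool command only applies if you use ZFS):

Bash:
uname -r                     # running kernel
lscpu | grep 'Model name'    # CPU model
pveversion -v                # Proxmox VE package versions
zpool status                 # ZFS pool layout, if any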
 
Update tested on:
- Xeon E3-1225 V2 (PBS)
- Xeon Silver 4214R (PVE as a VM in ESXi and PBS as a VM in PVE in ESXi (don’t ask ^^), and baremetal PBS)
- EPYC 7402 (PVE)
- Xeon(R) CPU E5-1620 v2 (PVE)
and everything looks fine so far. Thanks a lot!
 
Installed on a Lenovo P3 Tiny, i5-13500, with an x520 NIC. All working well here!

Edit: Additional details: ~25 LXCs, mostly Debian with a few Ubuntu. iGPU passthrough on several, still working as expected.
 
I didn't spot any issues - neither in dmesg, nor during 5h usage - with 6.17.1-1-pve.
According to my central monitoring all values are within normal range for the cluster.
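For reference, a minimal sketch of such a dmesg check (shows only warnings and errors since boot):

Bash:
dmesg --level=err,warn    # kernel warnings and errors since boot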

Systems:
  • EPYC 7402p (Zen2)
  • EPYC 9474F (Zen4)

VMs:
  • mostly OpenBSD
  • Linux
  • Windows 10

Configuration:
  • HA
  • ZFS Pool
  • Backup via Proxmox Backup
 
Nice, EDAC memory error reporting now also works on Intel 12th-14th gen parts on W680 motherboards with ECC memory.
 
In case you are using additional DKMS modules like r8168, you need to install proxmox-headers-6.17 too.

so

Bash:
apt install proxmox-kernel-6.17 proxmox-headers-6.17

tested on my smol 3x Lenovo Tiny M920q cluster, with i5-8500T/32GB/512GB NVMe and a second r8168 NIC installed in the M.2 Wi-Fi slot (on all 3 machines)

Bash:
$ dkms status
r8168/8.055.00, 6.14.11-4-pve, x86_64: installed
r8168/8.055.00, 6.17.1-1-pve, x86_64: installed
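If a module ever shows up as missing for the new kernel, a rebuild can also be triggered manually. A sketch, assuming the matching proxmox-headers package is already installed:

Bash:
dkms autoinstall -k 6.17.1-1-pve    # rebuild all registered DKMS modules for this kernel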

Additionally, 6.17 works fine on PBS 4.0.16 in a KVM VM.

Edit: NO ZFS, standard setup with LVM and ext4
 
So far so good: the Intel Arc Pro B50 is now working great with SR-IOV, currently with 6 virtual functions.
CPU: AMD 5950X

VM(s): 12 total, mostly Linux but a few Windows VMs (thin clients)
ZFS filesystem with 3 different pools, including boot.
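For anyone wanting to verify VF counts on their own card: the kernel exposes them through the standard PCI SR-IOV sysfs files. A sketch; the PCI address is a placeholder, use whatever lspci reports for your GPU:

Bash:
lspci | grep -i -e vga -e display                     # find the GPU's PCI address
cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs  # maximum supported VFs
cat /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs    # currently enabled VFs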
 
Upgraded and running well on WRX90E + 9955WX + 2TB of RAM with SR-IOV. A Windows VM and a few Debian-based VMs. A few ZFS pools, including boot.