Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

t.lamprecht

Proxmox Staff Member
We recently uploaded the 6.17 kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.14, but 6.17 is now an option.

We plan to use the 6.17 kernel as the new default for the Proxmox VE 9.1 release later in Q4.
This follows our tradition of upgrading the Proxmox VE kernel to match the current Ubuntu version until we reach an Ubuntu LTS release, at which point we will only provide newer kernels as an opt-in option. The 6.17 kernel is based on the Ubuntu 25.10 Questing release.

We have run this kernel on some of our test setups over the last few days without encountering any significant issues. However, for production setups, we strongly recommend either using the 6.14-based kernel or testing on similar hardware/setups before upgrading any production nodes to 6.17.

How to install:
  1. Ensure that either the pve-no-subscription or pvetest repository is set up correctly.
    You can do so via a text editor on the CLI or in the web UI under Node -> Repositories.
  2. Open a shell as root, e.g. through SSH or using the integrated shell on the web UI.
  3. apt update
  4. apt install proxmox-kernel-6.17
  5. reboot
Future updates to the 6.17 kernel will now be installed automatically when upgrading a node.
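The steps above boil down to a short root shell session; the `kernel_series` helper at the end is just an illustrative way to confirm which kernel series is active after the reboot (it is not part of Proxmox, only a sketch):

```shell
# Run as root (steps 3-5 from above):
#   apt update
#   apt install proxmox-kernel-6.17
#   reboot
# After the reboot, confirm the running kernel series,
# e.g. with: kernel_series "$(uname -r)"
kernel_series() { printf '%s\n' "$1" | cut -d. -f1,2; }
kernel_series "6.17.1-1-pve"   # prints 6.17
```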

Please note:
  • The current 6.14 kernel is still supported, and will stay the default kernel until further notice.
  • There were many changes, including improved hardware support and performance improvements across the board.
    For a good overview of prominent changes, we recommend checking out the kernel-newbies site for 6.15, 6.16, and 6.17.
  • The kernel is also available on the test and no-subscription repositories of Proxmox Backup Server, Proxmox Mail Gateway, and in the test repo of Proxmox Datacenter Manager.
  • The new 6.17 based opt-in kernel will not be made available for the previous Proxmox VE 8 release series.
  • If you're unsure, we recommend continuing to use the 6.14-based kernel for now.

Feedback about how the new kernel performs in any of your setups is welcome!
Please provide basic details like CPU model, storage types used, and whether ZFS is the root file system, both for positive feedback and for issues where the opt-in 6.17 kernel seems to be the likely cause.
 
Installed on a Lenovo P3 Tiny, i5-13500 with an x520 NIC. All working well here!

Edit: Additional details, ~25 LXCs, mostly Debian with a few Ubuntu. iGPU passthrough on several, still working as expected.
 
I didn't spot any issues with 6.17.1-1-pve, neither in dmesg nor during 5 hours of usage.
According to my central monitoring all values are within normal range for the cluster.

Systems:
  • EPYC 7402p (Zen2)
  • EPYC 9474F (Zen4)

VMs:
  • mostly OpenBSD
  • Linux
  • Windows 10

Configuration:
  • HA
  • ZFS Pool
  • Backup via Proxmox Backup
 
Nice, EDAC memory error reporting now also works on Intel 12th-14th gen parts in W680 motherboards with ECC memory.
 
In case you are using additional DKMS modules like r8168, you need to install proxmox-headers-6.17 too, so:

Bash:
apt install proxmox-kernel-6.17 proxmox-headers-6.17

Tested on my smol 3x Lenovo Tiny M920q cluster with i5-8500T/32GB/512GB NVMe and a second r8168 NIC installed in the M.2 Wi-Fi slot (on all 3 machines).

Bash:
$ dkms status
r8168/8.055.00, 6.14.11-4-pve, x86_64: installed
r8168/8.055.00, 6.17.1-1-pve, x86_64: installed

Additionally, 6.17 works fine on PBS 4.0.16 in a KVM VM.

Edit: no ZFS, standard setup with LVM and ext4.
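As a side note, the `dkms status` line format above ("module/version, kernel, arch: status") is easy to filter per kernel. A small sketch using the output from this post as sample data (the `dkms_for_kernel` helper is hypothetical; on a real node you would pipe `dkms status` in instead of using the here-document):

```shell
# List DKMS modules built for a specific kernel, from `dkms status`-style output
dkms_for_kernel() {
  grep -F ", $1," <<'EOF'
r8168/8.055.00, 6.14.11-4-pve, x86_64: installed
r8168/8.055.00, 6.17.1-1-pve, x86_64: installed
EOF
}
dkms_for_kernel "6.17.1-1-pve"   # prints the r8168 entry built for 6.17.1-1-pve
```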
 
So far so good: the Intel Arc Pro B50 is now working great with SR-IOV, currently with 6 virtual functions.
AMD: 5950X

VMs: 12 total, mostly Linux, with a few Windows VMs (thin clients).
ZFS filesystem with 3 different pools, including boot.
 
Upgraded and running well on WRX90E + 9955WX + 2 TB of RAM with SR-IOV. A Windows VM and a few Debian-based VMs. A few ZFS pools, including boot.
 
OK, so it seems to work on my machine: running Podman in a container on ZFS, which would trigger a kernel panic when I tried the patched Ubuntu 6.17 kernel, works with this kernel. But there is still a regression in AppArmor 5.0 that is in this kernel.

This is running Ollama in a container installed with the Proxmox Helper Scripts:
Code:
[   20.388102] audit: type=1400 audit(1760423742.565:288): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.388657] audit: type=1400 audit(1760423742.565:289): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.405604] audit: type=1400 audit(1760423742.584:290): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.407643] audit: type=1400 audit(1760423742.586:291): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.426177] audit: type=1400 audit(1760423742.606:292): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.426227] audit: type=1400 audit(1760423742.606:293): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.426481] audit: type=1400 audit(1760423742.606:294): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   49.913837] audit: type=1400 audit(1760423772.634:295): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
The problem is that it is seeing a socket as a file.

Something like this will fix it.
Diff:
---
 security/apparmor/af_unix.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/security/apparmor/af_unix.c b/security/apparmor/af_unix.c
--- a/security/apparmor/af_unix.c
+++ b/security/apparmor/af_unix.c
@@ -406,12 +406,19 @@ static int profile_peer_perm(struct aa_profile *profile, const char *op,
     if (state) {
         struct aa_profile *peerp;
 
-        if (peer_path)
-            return unix_fs_perm(ad->op, request, ad->subj_cred,
-                        &profile->label, peer_path);
-        else if (path)
-            return unix_fs_perm(ad->op, request, ad->subj_cred,
-                        &profile->label, path);
+        /* Don't use file-based permissions for message passing.
+         * sendmsg/recvmsg should use socket permissions, not file r/w.
+         */
+        if ((peer_path || path) &&
+            strcmp(ad->op, OP_SENDMSG) != 0 && strcmp(ad->op, OP_RECVMSG) != 0) {
+            if (peer_path)
+                return unix_fs_perm(ad->op, request, ad->subj_cred,
+                            &profile->label, peer_path);
+            else if (path)
+                return unix_fs_perm(ad->op, request, ad->subj_cred,
+                            &profile->label, path);
+        }
+        /* For sendmsg/recvmsg, skip fs checks and use socket mediation */
         state = match_to_peer(rules->policy, state, request,
                       unix_sk(sk),
                       peer_addr, peer_addrlen, &p, &ad->info);

The issue isn't major though.
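For anyone checking whether their containers hit the same denial, the relevant fields can be pulled out of such an audit line with basic shell tools (a sketch; the sample line is shortened from the log above):

```shell
# Extract selected key="value" fields from an AppArmor audit line
line='apparmor="DENIED" operation="sendmsg" class="file" profile="rsyslogd" requested_mask="r" denied_mask="r"'
for field in operation class profile denied_mask; do
  value="$(printf '%s\n' "$line" | grep -o "${field}=\"[^\"]*\"" | cut -d'"' -f2)"
  printf '%s=%s\n' "$field" "$value"
done
# prints:
#   operation=sendmsg
#   class=file
#   profile=rsyslogd
#   denied_mask=r
```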
 
Update tested on:
- Xeon E3-1225 V2 (PBS)
- Xeon(R) Silver 4214R (PVE as a VM in ESXi, and PBS as a VM in PVE in ESXi; don’t ask ^^)
- EPYC 7402 (PVE)
- Xeon(R) CPU E5-1620 v2 (PVE)
and everything looks fine until now. Thanks a lot !

EDIT: also tested:
- A10-7880E (PVE)
- Xeon(R) Silver 4214R (PVE, baremetal this time)
- EPYC 7402 (PVE, 3-node cluster) running a ZFS pool
Regular backups to the PBS cited above.