Opt-in Linux 6.17 Kernel for Proxmox VE 9 available on test & no-subscription

t.lamprecht

Proxmox Staff Member
We recently uploaded the 6.17 kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.14, but 6.17 is now an option.

We plan to use the 6.17 kernel as the new default for the Proxmox VE 9.1 release later in Q4.
This follows our tradition of upgrading the Proxmox VE kernel to match the current Ubuntu version until we reach an Ubuntu LTS release, at which point we will only provide newer kernels as an opt-in option. The 6.17 kernel is based on the Ubuntu 25.10 Questing release.

We have run this kernel on some of our test setups over the last few days without encountering any significant issues. However, for production setups, we strongly recommend either using the 6.14-based kernel or testing on similar hardware/setups before upgrading any production nodes to 6.17.

How to install:
  1. Ensure that either the pve-no-subscription or pvetest repository is set up correctly.
    You can do so via a CLI text editor or in the web UI under Node -> Repositories.
  2. Open a shell as root, e.g. through SSH or using the integrated shell on the web UI.
  3. apt update
  4. apt install proxmox-kernel-6.17
  5. reboot
Future updates to the 6.17 kernel will now be installed automatically when upgrading a node.
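
For convenience, the same steps as a single root shell session, with two sanity checks after the reboot (both uname -r and pveversion report the currently running kernel):

Bash:
# refresh package lists and install the opt-in 6.17 kernel
apt update
apt install proxmox-kernel-6.17
# boot into the new kernel
reboot
# after the reboot, confirm the node is actually running 6.17
uname -r
pveversion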

Please note:
  • The current 6.14 kernel is still supported, and will stay the default kernel until further notice.
  • There are many changes, including improved hardware support and performance improvements all over the place.
    For a good overview of prominent changes, we recommend checking out the kernel-newbies site for 6.15, 6.16, and 6.17.
  • The kernel is also available on the test and no-subscription repositories of Proxmox Backup Server, Proxmox Mail Gateway, and in the test repo of Proxmox Datacenter Manager.
  • The new 6.17 based opt-in kernel will not be made available for the previous Proxmox VE 8 release series.
  • If you're unsure, we recommend continuing to use the 6.14-based kernel for now; see the pinning example below.
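
If a node should keep booting the 6.14 kernel even with 6.17 installed, proxmox-boot-tool can pin a specific version; a minimal sketch, assuming a standard Proxmox VE boot setup (the version string below is only an example, take the exact name from the list output):

Bash:
# show the kernels known to the boot tool
proxmox-boot-tool kernel list
# pin a specific 6.14 kernel as the boot default (example version string)
proxmox-boot-tool kernel pin 6.14.11-4-pve
# remove the pin again to boot the newest installed kernel
proxmox-boot-tool kernel unpin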

Feedback about how the new kernel performs in any of your setups is welcome!
Please provide basic details like CPU model, storage types used, whether ZFS is used as the root file system, and the like, both for positive feedback and if you ran into issues where the opt-in 6.17 kernel seems to be the likely cause.
 
Update tested on:
- Xeon E3-1225 V2 (PBS)
- Xeon Silver 4214R (PVE as a VM in ESXi and PBS as a VM in PVE in ESXi (don’t ask ^^), and baremetal PBS)
- EPYC 7402 (PVE)
- Xeon(R) CPU E5-1620 v2 (PVE)
and everything looks fine so far. Thanks a lot!
 
Installed on a Lenovo P3 Tiny, i5-13500, with an X520 NIC. All working well here!

Edit: Additional details, ~25 LXCs, mostly Debian with a few Ubuntu. iGPU passthrough on several, still working as expected.
 
I didn't spot any issues with 6.17.1-1-pve, neither in dmesg nor during 5 hours of usage.
According to my central monitoring, all values are within the normal range for the cluster.

Systems:
  • EPYC 7402p (Zen2)
  • EPYC 9474F (Zen4)

VMs:
  • mostly OpenBSD
  • Linux
  • Windows 10

Configuration:
  • HA
  • ZFS Pool
  • Backup via Proxmox Backup
 
Nice, EDAC memory error reporting now also works on Intel 12th-14th gen parts on W680 motherboards with ECC memory.
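
For anyone who wants to verify that EDAC actually registered the memory controller after booting 6.17, the standard sysfs counters are a quick check (a sketch; controller index and driver depend on the platform):

Bash:
# memory controllers registered with EDAC
ls /sys/devices/system/edac/mc/
# corrected / uncorrected error counts per controller
grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count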
 
In case you are using additional DKMS modules like r8168, you need to install proxmox-headers-6.17 too, so:

Bash:
apt install proxmox-kernel-6.17 proxmox-headers-6.17

Tested on my smol 3x Lenovo Tiny M920q cluster, i5-8500T/32GB/512 NVMe, with a second r8168 NIC installed in the M.2 Wi-Fi slot (on all 3 machines).

Bash:
$ dkms status
r8168/8.055.00, 6.14.11-4-pve, x86_64: installed
r8168/8.055.00, 6.17.1-1-pve, x86_64: installed

Additionally, 6.17 works fine on PBS 4.0.16 in a KVM VM.

Edit: NO ZFS, standard setup with LVM and ext4.
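
If a node was already rebooted into 6.17 before the headers were installed, rebuilding the registered modules for the running kernel should be enough; a sketch:

Bash:
apt install proxmox-headers-6.17
# rebuild all registered DKMS modules for the currently running kernel
dkms autoinstall -k "$(uname -r)"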
 
So far so good: the Intel Arc Pro B50 is now working great with SR-IOV, with 6 virtual functions (currently).
AMD: 5950X

VMs: 12 total, mostly Linux but a few Windows VMs (thin clients)
ZFS file system with 3 different pools, including boot.
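
As a general reference, the VF count is exposed through the standard PCI SR-IOV sysfs interface; a sketch (the PCI address below is a placeholder, take the real one from lspci):

Bash:
# find the GPU's PCI address (0000:03:00.0 below is only a placeholder)
lspci -nn | grep -i -e vga -e display
# supported and currently enabled virtual functions
cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs
cat /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
# enable 6 VFs (must not exceed sriov_totalvfs)
echo 6 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs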
 
Upgraded and running well on a WRX90E + 9955WX + 2 TB of RAM with SR-IOV. One Windows VM and a few Debian-based VMs. A few ZFS pools, including boot.
 
OK, so it seems to work on my machine: the kernel panic I got when I tried patching the Ubuntu 6.17 kernel (running Podman in a container on ZFS would panic) does not happen with this kernel. But there is still an AppArmor 5.0 regression present in this kernel.

This is running Ollama in a container installed with the Proxmox Helper Scripts:
Code:
[   20.388102] audit: type=1400 audit(1760423742.565:288): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.388657] audit: type=1400 audit(1760423742.565:289): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.405604] audit: type=1400 audit(1760423742.584:290): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.407643] audit: type=1400 audit(1760423742.586:291): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.426177] audit: type=1400 audit(1760423742.606:292): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.426227] audit: type=1400 audit(1760423742.606:293): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   20.426481] audit: type=1400 audit(1760423742.606:294): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
[   49.913837] audit: type=1400 audit(1760423772.634:295): apparmor="DENIED" operation="sendmsg" class="file" namespace="root//lxc-103_<-var-lib-lxc>" profile="rsyslogd" name="/run/systemd/journal/dev-log" pid=2193 comm="systemd-journal" requested_mask="r" denied_mask="r" fsuid=100000 ouid=100000
The problem is that it is seeing a socket as a file.

Something like this will fix it.
Diff:
---
 security/apparmor/af_unix.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/security/apparmor/af_unix.c b/security/apparmor/af_unix.c
--- a/security/apparmor/af_unix.c
+++ b/security/apparmor/af_unix.c
@@ -406,12 +406,19 @@ static int profile_peer_perm(struct aa_profile *profile, const char *op,
     if (state) {
         struct aa_profile *peerp;
 
-        if (peer_path)
-            return unix_fs_perm(ad->op, request, ad->subj_cred,
-                        &profile->label, peer_path);
-        else if (path)
-            return unix_fs_perm(ad->op, request, ad->subj_cred,
-                        &profile->label, path);
+        /* Don't use file-based permissions for message passing.
+         * sendmsg/recvmsg should use socket permissions, not file r/w.
+         */
+        if ((peer_path || path) &&
+            strcmp(ad->op, OP_SENDMSG) != 0 && strcmp(ad->op, OP_RECVMSG) != 0) {
+            if (peer_path)
+                return unix_fs_perm(ad->op, request, ad->subj_cred,
+                            &profile->label, peer_path);
+            else if (path)
+                return unix_fs_perm(ad->op, request, ad->subj_cred,
+                            &profile->label, path);
+        }
+        /* For sendmsg/recvmsg, skip fs checks and use socket mediation */
         state = match_to_peer(rules->policy, state, request,
                       unix_sk(sk),
                       peer_addr, peer_addrlen, &p, &ad->info);

The issue isn't major though.
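
For anyone who wants to check whether their own containers hit the same thing on 6.17, grepping the kernel log for these sendmsg denials is a quick test (a sketch):

Bash:
# look for the af_unix sendmsg denials shown above
dmesg | grep -E 'apparmor="DENIED".*operation="sendmsg"'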
 
Adding to my test report above, I also tested:
- A10-7880E (PVE)
- Xeon(R) Silver 4214R (PVE, baremetal this time)
- EPYC 7402 (PVE, 3-node cluster) running a ZFS pool
Regular backups go to the PBS cited above.
 
Regarding the AppArmor regression and patch posted above ("The problem is that it is seeing a socket as a file"):
Unix sockets can be files though!

Or use the correct ABI instead of LLM-generated patches, as Fiona pointed out at https://forum.proxmox.com/threads/minor-apparmor-problem-with-tor.173419/#post-808186
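
As far as I understand, the ABI an AppArmor profile declares controls which mediation classes apply to it, so checking what the installed profiles pin is a reasonable first step (a sketch):

Bash:
# list the ABI declarations in the installed profiles
grep -rn 'abi <abi' /etc/apparmor.d/ | head -n 20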
 
I will test the Genoa servers (9374F + RTX6000) later today, because someone above reported DKMS issues with 6.17 and the NVIDIA drivers.

So far these servers work:
- MS-A2 (AMD 9955HX) -> no issues
- ....

Just for those who are curious, the biggest changes from 6.14 to 6.17:

Summary of Improvements in Linux Kernels 6.15 - 6.17

Kernel 6.15:
  • Virtualization / AMD INVLPGB Instruction Support: Improves KVM efficiency by enabling more efficient broadcast TLB (Translation Lookaside Buffer) invalidation across cores.
  • Virtualization / ARM KVM Nested Virtualization: Adds nested virtualization support for the Virtual Generic Interrupt Controller v3 (VGICv3), enabling modern nested hypervisors on ARM64.
  • Filesystems / Ext4 Copy-on-Write (CoW): Introduces initial Copy-on-Write support for Ext4.
  • Filesystems / Btrfs Zstd Compression: Enhances the performance of Zstd data compression with new fast and real-time operational modes.
  • I/O & Networking / io_uring Zero-Copy Receive: Extends zero-copy performance benefits to network ingress operations within the io_uring asynchronous I/O framework.
  • Security / fwctl Subsystem: Provides a new, standardized interface for managing system firmware, helping to secure the firmware supply chain.
  • Drivers / Initial NOVA Driver: Adds early-stage support for the NOVA open-source driver for NVIDIA GPUs.

Kernel 6.16:
  • CPU Architecture / Intel Advanced Performance Extensions (APX): Merges foundational support for APX, which doubles the number of general-purpose registers on x86 from 16 to 32, creating groundwork for future performance gains.
  • Filesystems / Ext4 Large Folio Support: Delivers a significant performance boost (up to 37% in large sequential I/O) by allowing the filesystem to manage memory in larger, contiguous chunks.
  • Filesystems / XFS Large Atomic Writes: Adds support for large atomic writes, improving reliability and data consistency.
  • Virtualization / Intel Trust Domain Extensions (TDX): Integrates initial support for TDX, a key confidential computing technology that protects virtual machine memory from the hypervisor.
  • I/O & Networking / Zero-Copy TCP Send (DMABUF): Allows sending TCP payloads directly from DMABUF memory, which dramatically lowers latency and CPU overhead by avoiding redundant data copies.
  • Security / Safer Coredumps: Implements a more secure method for handling coredumps via an AF_UNIX socket, reducing the attack surface by removing the need for privileged user-mode helpers.

Kernel 6.17:
  • CPU Architecture / AMD Hardware Feedback Interface (HFI): Introduces a new driver that allows the kernel scheduler to make more effective workload placement decisions on AMD Ryzen and EPYC CPUs by using real-time performance data from the cores.
  • Filesystems (Btrfs) / Experimental Large Folio Support: Aligns Btrfs with other filesystems by adding experimental support for large folios to improve I/O throughput and reduce memory overhead.
  • Filesystems (Btrfs) / Metadata Optimizations: Achieves major performance gains through denser XArray keys (50-70% reduction in metadata leaf nodes) and caching for free space bitmaps (~20% runtime improvement in specific benchmarks).
  • Virtualization / Smarter AMD SEV Cache Flushing: Optimizes performance for AMD SEV confidential VMs by ensuring cache flushes only occur on CPUs that the guest actively used, reducing system-wide overhead.
  • Virtualization / GPU Virtualization (SR-IOV): Adds Single Root I/O Virtualization (SR-IOV) support for Intel Battlemage Arc Pro GPUs, enabling enterprise-grade GPU resource partitioning.
  • Core Kernel / Live Patching for ARM64: Introduces support for applying security patches and fixes to the kernel on 64-bit Arm platforms without requiring a reboot.
  • Core Kernel / New Filesystem Syscalls: Adds file_getattr() and file_setattr() system calls to provide modern interfaces for managing file metadata.
  • Networking / DualPI2 Congestion Control: Incorporates support for the DualPI2 congestion-control protocol for managing high-bandwidth and low-latency network traffic.