Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

Trying to install it on my NUC hangs forever on:

Code:
Preparing to unpack .../proxmox-headers-6.11.0-1-pve_6.11.0-1_amd64.deb ...
Unpacking proxmox-headers-6.11.0-1-pve (6.11.0-1)
 
Hi everyone,

I'm exploring SR-IOV for Intel 13th Generation GPUs and was wondering if the kernel 6.11 supports this feature.

I've seen mentions of SR-IOV being available on Intel GPUs with newer drivers and kernel versions, but I want to confirm if this setup is compatible with the current Proxmox kernel. Has anyone tested or successfully set this up?

Thanks in advance for any insights or tips!
 
You should read https://forum.proxmox.com/threads/o...le-on-test-no-subscription.156818/post-723382 and related posts.
 
Kernel 6.11 is confirmed as working with SR-IOV on 13th gen iGPUs, using this DKMS driver, though some people have reported it not working on specific systems, with no pattern I can see yet:
https://github.com/strongtz/i915-sriov-dkms

For an excellent tutorial on getting this up and going, see: https://www.derekseaman.com/2024/07...u-vt-d-passthrough-with-intel-alder-lake.html
These instructions will get the Proxmox node configured, and show you how to configure a Windows VM as well.

For Linux VM and LXC config, use the instructions on the GitHub page.
If you need any more help, open an issue on GitHub. The community there is great, and the driver is under active development, since keeping SR-IOV working is a moving target until Intel actually updates the Xe driver and we can stop using the DKMS-modified i915 driver.

N.B. (not included in the guides): This process will fail if either the host or the VM loads the Xe driver alongside the i915 driver, so you'll need to disable it on the kernel command line. Add modprobe.blacklist=xe to your kernel command line (update it with GRUB or systemd-boot as explained in the linked tutorial). I always put it just before any iommu-related arguments.
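For example, on a GRUB-booted host that could look like this (a minimal sketch; the other options shown are just examples, keep whatever you already have on your command line):

Code:
# /etc/default/grub -- add modprobe.blacklist=xe just before the iommu arguments:
GRUB_CMDLINE_LINUX_DEFAULT="quiet modprobe.blacklist=xe intel_iommu=on iommu=pt"
# apply and reboot:
update-grub
# on systemd-boot installs, edit /etc/kernel/cmdline instead and run:
proxmox-boot-tool refresh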
 
I'm running Proxmox 8.3 with the 6.11 kernel, and I'm hitting an issue where the VM guests don't have network connectivity on first startup until you toggle the network interface (e.g. ifdown then ifup) - and on some hosts, even that doesn't work =(.


Has anybody seen something like this? Or any tips on how to debug it?

(At first, I thought it was the IOMMU issue here - but it doesn't seem like it after all, as that issue is already fixed).
 
The "interface not found" message might indicate that your interfaces were renamed with the new kernel - check the output of `ip link` and update your /etc/network/interfaces accordingly - see also:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_naming_conventions
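A minimal check could look like this (the bridge-ports grep assumes the usual vmbr0 bridge setup; adjust to your config):

Code:
# names the new kernel assigned to the NICs
ip -br link
# names currently referenced by the network config
grep -E '^iface|bridge-ports' /etc/network/interfaces
# after fixing the names in /etc/network/interfaces, reload (or reboot)
ifreload -a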

I hope this helps!
@Stoiko - Really appreciate the suggestion to compare ip link - however, it seems the kernel thing might be a red herring... or maybe I'm reading it wrong.

This is really frustrating and I'm super confused.

This is the latest 6.11 kernel on Proxmox:
Code:
# uname -a
Linux grandstand-vm02 6.11.0-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.11.0-2 (2024-12-04T10:29Z) x86_64 GNU/Linux

And the output of ip link from the Proxmox host, booted under this 6.11 kernel:

https://gist.github.com/victorhooi/4385bbeff0eb6489c85637a61df9aad4

And this is the latest 6.8 kernel on Proxmox
Code:
# uname -a
Linux grandstand-vm02 6.8.12-6-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-6 (2024-12-19T19:05Z) x86_64 GNU/Linux

And the same output of ip link from the Proxmox host, booted under this 6.8 kernel:

https://gist.github.com/victorhooi/582ea69a1c8ee9215e45bb805ff590e5

From a quick diff - the device names are the same between the two - just the MAC addresses have all changed - does that seem right?

When I look at the VM layer - when the VMs boot up, they have no network connectivity. And this is the output of ip link from an example VM:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens18: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether bc:24:11:bd:a1:f8 brd ff:ff:ff:ff:ff:ff
    altname enp6s18
    altname enxbc2411bda1f8
3: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/none

If I try sudo ifup ens18 it replies with ifup: unknown interface ens18. However, sudo ifup enp6s18 does work - this successfully gets a DHCP lease.
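In case it helps anyone hitting the same thing, a quick way to compare what the kernel assigned vs. what ifupdown is configured for inside the guest (paths are the Debian defaults):

Code:
# interface names as the kernel sees them
ip -br link
# interface names ifupdown knows about
grep -rh '^iface' /etc/network/interfaces /etc/network/interfaces.d/ 2>/dev/null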

Another data-point - I have a TrueNAS Scale VM which doesn't seem to hit this issue.

But all the Debian-based OSes have this issue =(.

And there's also an ArubaOS VM that has this issue =(. (AFAIK, ArubaOS is HPE's own custom embedded Linux OS - not sure of the lineage).

The last part - where ArubaOS also hits this issue - is why I am thinking it's something that's changed in the Proxmox setup - since the ArubaOS VM hasn't been updated in several months.

Here's also the dmesg output from a VM:

https://gist.github.com/victorhooi/f66285d0ed3c94e6574f547549e9a2a1

I did see this line in there - but I think that's normal:

Code:
[    1.100303] virtio_net virtio2 ens18: renamed from eth0

Are there any other obvious things to check here? Have there been any other major changes in Proxmox, or would this still potentially be kernel update related?
 
I'm glad to see a 6.11 kernel for PVE. This opens a window of opportunity for me, as I can now test bcachefs.

Is there any documentation on how to manually compile newer kernels for PVE?
I've checked https://github.com/proxmox/pve-kernel in the hope of finding the upstream kernel sources + Proxmox patches, but browsing the submodules gives me a 404.
Until then, my best guess is to pull the 6.12 kernel tree from https://evilpiepirate.org/git/bcachefs.git/ + add the Proxmox patches + build using the Proxmox scripts from the repo.
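Roughly what I had in mind, completely untested (the clone URL is the canonical git.proxmox.com repo rather than the GitHub mirror, and the make target is a guess; check the repo's Makefile and debian/control for the real targets and build dependencies):

Code:
# canonical repo; the GitHub copy is a mirror whose submodule links 404 there
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel
git submodule update --init --recursive   # pulls the kernel source and ZFS submodules
# swap in / patch the 6.12 bcachefs tree here, then build the .deb packages:
make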
 
Hello,

I tried this 6.11 kernel on my HW, but it does not activate the network card (which works perfectly fine on a different OS/kernel).

The odd thing is that none of the 6.x PVE kernels is able to activate my NIC, while the same NIC worked fine with PVE 7.x using kernels of the 5.x family.

Even odder is that Arch Linux (latest available ISO, using kernel 6.8.2) works perfectly fine on the same HW - I made a test boot using the installation ISO without actually installing it - I could activate the NIC and perform some network tests without noticing anything odd.

The HW is
  • Dell Wyse 5070 ThinClient (on latest Dell BIOS)
  • NIC is an Intel X520-2 (but an I350 behaves the same)
Attached you will find
  • dmesg output for the PVE 6.11 kernel and for the Arch Linux 6.8 kernel
  • lspci output from the PVE 6.11 kernel
  • a screenshot showing the NIC working on Arch Linux (enp1s0f0/enp1s0f1 are the 2 ports of the X520, enp2s0 is the built-in Realtek)
While researching the issue I have seen other posts from people having activation issues with PCI boards on PVE 6.x kernels.

Possibly this is related to the way the kernel interacts with the BIOS and the real "culprit" is my system, but as I showed, other kernels are more tolerant of my HW combination.

I even tried disabling PCI advanced error reporting (see the kernel boot params in the dmesg) without any result.
I thought about checking the kernel compilation parameters, but things have gotten far more complicated since the last time I compiled a kernel 25+ years ago: the config file is almost 12,000 lines.
A quick diff between the Arch Linux and PVE config files revealed almost 1200 differences (comments excluded).
This is beyond my knowledge and resources.
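For reference, this is roughly how I produced that comment-free diff (the file names are just where the two configs ended up on my machine):

Code:
# strip comments and blank lines from both configs, then compare
grep -vE '^#|^$' /boot/config-6.11.0-2-pve | sort > pve.config
grep -vE '^#|^$' arch-6.8.2.config | sort > arch.config
diff pve.config arch.config | wc -l   # count the differences
diff pve.config arch.config | less    # inspect them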

I hope that some kernel/PCI guru will direct a benign eye to this issue and enlighten me, and possibly many others in my situation!

For the time being I thank you all for your time and attention and wish a merry festive season.

Cheers,

Max

P.S. I know I am pushing this poor little thin client beyond Dell's specifications by expecting it to drive 2x 10Gb ports and 32GB of RAM (which Linux also seems unable to use in full, given the lack of sufficient MTRRs), but I like to experiment without using too much electricity, making noise, or exceeding the space (and money; it's all used stuff) I have available, so that everything stays within my WAF, which is very low ;-)
 


Could you try booting with all additional kernel cmdline options (`intel_iommu=on iommu=pt pci=noaer`) removed - just remove them and try booting again:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline
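In case it's quicker than the docs: where to remove them depends on how the system boots (the linked chapter covers both cases):

Code:
# GRUB: drop the options from GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub
# systemd-boot (UEFI + ZFS installs): drop them from /etc/kernel/cmdline, then:
proxmox-boot-tool refresh
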
Hello Stoiko,

thanks for your assistance.

I tried your suggestion, but the outcome is exactly the same as with the kernel parameters (which I had added to be able to split the NIC functions between 2 VMs).
Dmesg output is the same as I reported above.

I hope you see other options that will allow me to welcome 2025 with a working PVE.

Thanks,

Max
 
Running 6.11 on two PVE servers. Both run ZFS and have 2-3 VMs each. No issues observed, either during installation or in normal operation.
 
Hi,

after updating the kernel to 6.11, I have a problem accessing iDRAC on Dell 12th and 13th generation servers (iDRAC 7 & 8). When the system starts, the virtual console stops responding and the entire iDRAC interface stops working. Rebooting back to kernel 6.8 solves the problem. I checked it on 10 different servers; same problem everywhere.

iDRAC logs:
RAC0182 The iDRAC firmware was rebooted with the following reason: watchdog.
RAC0708 Previous reboot was due to a firmware watchdog timeout.

Is there anything I can check or change to solve the problem and use the 6.11 kernel?

Regards,
Bartek
 
I'm not sure how kernel 6.11 triggers this (I've never used a Dell server, so I don't understand how iDRAC interacts with the host OS), but it looks like they've known about and patched some issues with the watchdog timer in some of their newer systems.

What version of iDRAC are you running?
See: https://www.dell.com/support/kbdoc/...drac9-rac0182-the-idrac-firmware-was-rebooted
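If you don't want to dig through the web UI for that, one way to read the firmware revision from the host is via IPMI (assuming ipmitool is installed and the BMC exposes the standard interface; the revision shown may not map 1:1 to the version string in Dell's UI):

Code:
apt install ipmitool
modprobe ipmi_devintf ipmi_si
ipmitool mc info | grep -i 'firmware revision'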
 
