Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

Trying to install it on my NUC hangs forever on:

Code:
Preparing to unpack .../proxmox-headers-6.11.0-1-pve_6.11.0-1_amd64.deb ...
Unpacking proxmox-headers-6.11.0-1-pve (6.11.0-1)
 
Hi everyone,

I'm exploring SR-IOV for Intel 13th Generation GPUs and was wondering if the kernel 6.11 supports this feature.

I've seen mentions of SR-IOV being available on Intel GPUs with newer drivers and kernel versions, but I want to confirm if this setup is compatible with the current Proxmox kernel. Has anyone tested or successfully set this up?

Thanks in advance for any insights or tips!
 
You should read https://forum.proxmox.com/threads/o...le-on-test-no-subscription.156818/post-723382 and related posts.
 
Kernel 6.11 is confirmed as working with SR-IOV on 13th gen iGPUs, using this DKMS driver, though some people have reported it not working on specific systems, with no pattern I can see yet:
https://github.com/strongtz/i915-sriov-dkms

For an excellent tutorial on getting this up and going, see: https://www.derekseaman.com/2024/07...u-vt-d-passthrough-with-intel-alder-lake.html
These instructions will get the Proxmox node configured, and show you how to configure a Windows VM as well.
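Once the driver is loaded, virtual functions are created through the standard PCI SR-IOV sysfs interface. A minimal sketch (the PCI address and VF count below are examples; the linked tutorial covers the exact values and the required kernel parameters for your system):

```shell
# Config fragment for the Proxmox host, not a portable script.
# Create 3 VFs on the iGPU (commonly at PCI address 00:02.0):
echo 3 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs

# The VFs then show up as additional 00:02.x display devices:
lspci | grep -i vga
```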

For Linux VM and LXC config, use the instructions on the GitHub.
If you need any more help, open an issue on the GitHub. The community there is great, and the driver is under active development, since keeping SR-IOV working is a moving target until Intel actually updates the Xe driver and we can stop using the DKMS-modified i915 driver.

N.B. -- Not included in the guides: this process will fail if either the host or the VM loads the Xe driver alongside the i915 driver, so you'll need to disable it on the kernel command line. Add this to your kernel command line (update it with GRUB or systemd-boot, as explained in the linked tutorial): modprobe.blacklist=xe . I always put it just before any IOMMU-related arguments.
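As a concrete sketch of that edit on a GRUB-booted host (the sample command line below is made up for illustration; on a real host you would edit /etc/default/grub itself and then run update-grub):

```shell
# Demonstrated on a sample copy of the GRUB defaults file:
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"\n' > /tmp/grub.sample

# Insert modprobe.blacklist=xe just before the iommu arguments:
sed -i 's/intel_iommu/modprobe.blacklist=xe intel_iommu/' /tmp/grub.sample

cat /tmp/grub.sample
```

The kernel does not care about argument order here; placing it before the IOMMU options just keeps related settings grouped.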
 
I'm running Proxmox 8.3, with the 6.11 kernel - and I'm hitting an issue where the VM guests don't have network connectivity on first startup - until you toggle the network interface down and back up (e.g. ifdown then ifup) - and on some hosts, even that doesn't work =(.


Has anybody seen something like this? Or any tips on how to debug it?

(At first, I thought it was the IOMMU issue here - but it doesn't seem like it after all, as that issue is already fixed).
 
The "interface not found" message might indicate that your interfaces were renamed by the new kernel - check the output of `ip link`, and update your /etc/network/interfaces accordingly - see also:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_naming_conventions

I hope this helps!
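For example (interface names here are hypothetical): if `ip link` shows the NIC is now `enp5s0` where the config still says `eno1`, the fix amounts to a rename in the interfaces file:

```shell
# Demonstrated on a sample copy of /etc/network/interfaces;
# on the real host edit the file directly, then run `ifreload -a` or reboot.
cat > /tmp/interfaces.sample <<'EOF'
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    bridge-ports eno1
EOF

# Rename every whole-word occurrence of the old name:
sed -i 's/\beno1\b/enp5s0/g' /tmp/interfaces.sample

grep -c enp5s0 /tmp/interfaces.sample   # all three references updated
```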
@Stoiko - Really appreciate the suggestion to compare ip link - however, it seems the kernel thing might be a red herring... or maybe I'm reading it wrong.

This is really frustrating and I'm super confused.

This is the latest 6.11 kernel on Proxmox:
Code:
# uname -a
Linux grandstand-vm02 6.11.0-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.11.0-2 (2024-12-04T10:29Z) x86_64 GNU/Linux

And the output of ip link from the Proxmox host, booted under this 6.11 kernel:

https://gist.github.com/victorhooi/4385bbeff0eb6489c85637a61df9aad4

And this is the latest 6.8 kernel on Proxmox:
Code:
# uname -a
Linux grandstand-vm02 6.8.12-6-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-6 (2024-12-19T19:05Z) x86_64 GNU/Linux

And the same output of ip link from the Proxmox host, booted under this 6.8 kernel:

https://gist.github.com/victorhooi/582ea69a1c8ee9215e45bb805ff590e5

From a quick diff - the device names are the same between the two - just the MAC addresses have all changed - does that seem right?

When I look at the VM layer - when the VMs boot up, they have no network connectivity. And this is the output of ip link from an example VM:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens18: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether bc:24:11:bd:a1:f8 brd ff:ff:ff:ff:ff:ff
    altname enp6s18
    altname enxbc2411bda1f8
3: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/none

If I try sudo ifup ens18 it replies with ifup: unknown interface ens18. However, sudo ifup enp6s18 does work - this successfully gets a DHCP lease.
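(That "unknown interface" error usually just means ifup found no stanza for that name - ifup only recognises interfaces declared in /etc/network/interfaces, regardless of what the kernel calls them. A quick way to check which name the guest's config actually declares, shown here against a sample file for illustration:)

```shell
# Sample guest config for illustration; on the VM, inspect the real
# /etc/network/interfaces (and anything under /etc/network/interfaces.d/).
cat > /tmp/vm-interfaces.sample <<'EOF'
auto enp6s18
iface enp6s18 inet dhcp
EOF

# ifup only knows names that appear in these stanzas:
grep -E '^(auto|allow-hotplug|iface)' /tmp/vm-interfaces.sample
```

If the stanza says enp6s18 while the kernel's primary name is ens18, `ifup ens18` fails exactly as above even though both names refer to the same NIC - enp6s18 is just an altname.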

Another data-point - I have a TrueNAS Scale VM which doesn't seem to hit this issue.

But all the Debian-based OSes have this issue =(.

And an ArubaOS VM also has this issue =(. (AFAIK, ArubaOS is HPE's own custom embedded Linux OS - not sure of the lineage).

The last part - where ArubaOS also hits this issue - is why I am thinking it's something that's changed in the Proxmox setup - since the ArubaOS VM hasn't been updated in several months.

Here's also the dmesg output from a VM:

https://gist.github.com/victorhooi/f66285d0ed3c94e6574f547549e9a2a1

I did see this line in there - but I think that's normal:

Code:
[    1.100303] virtio_net virtio2 ens18: renamed from eth0

Are there any other obvious things to check here? Have there been any other major changes in Proxmox, or would this still potentially be kernel update related?
 
I'm glad to see a 6.11 kernel for PVE. This opens a window of opportunity for me, as I can now test bcachefs.

Is there any documentation on how to manually compile newer kernels for PVE?
I've checked https://github.com/proxmox/pve-kernel hoping to see the upstream kernel sources plus the Proxmox patches, but browsing the submodules shows me a 404.
Until then, my guesstimate is to pull the 6.12 kernel tree from https://evilpiepirate.org/git/bcachefs.git/, add the Proxmox patches, and build using the Proxmox scripts from that repo.
 
