Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

Depending on the feedback, it's available on the pvetest repo as of now if you want to test it already.

https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_test_repo

5.15.19-2-pve did not resolve the issue with no console output for me.


I had followed the workaround above to add simplefb to /etc/initramfs-tools/modules, which successfully resolved the problem on both 5.15.17-1-pve and 5.15.19-1-pve.
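(For reference, the workaround boils down to roughly the following; the exact initramfs/boot commands are a sketch, and proxmox-boot-tool refresh only applies where it manages the boot entries:)

Code:
# add simplefb to the modules included in the initramfs
echo simplefb >> /etc/initramfs-tools/modules
# rebuild the initramfs for the installed kernels and sync the boot entries
update-initramfs -u -k all
proxmox-boot-tool refresh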

I started on a node running 5.15.19-1-pve with the workaround, and attempted to back it out to reproduce the issue before upgrading the kernel; however, I couldn't reproduce the issue (I can see simplefb loading immediately at boot time in dmesg, so clearly I haven't successfully backed out the workaround).

Code:
# remove the workaround entry again
sed -i '/^simplefb/d' /etc/initramfs-tools/modules
# rebuild the initramfs for that kernel and sync the boot entries
update-initramfs -c -k 5.15.19-1-pve
proxmox-boot-tool refresh

So, I moved to a node running 5.15.17-1-pve, removed simplefb from /etc/initramfs-tools/modules and then installed 5.15.19-1-pve. At this point I had no console output with 5.15.19-1-pve on this node - great, problem reproduced.

I then updated to 5.15.19-2-pve, and still have no console output.

It doesn't look like the simplefb module made it into the initramfs:

Code:
root@pve01:~# lsinitramfs /boot/initrd.img-5.15.19-2-pve  | grep simple
root@pve01:~#

Expected output (where I applied the workaround):

Code:
root@pve02:~# lsinitramfs /boot/initrd.img-5.15.19-1-pve  | grep simple
usr/lib/modules/5.15.19-1-pve/kernel/drivers/video/fbdev/simplefb.ko
root@pve02:~#
 
5.15.19-2-pve did not resolve the issue with no console output for me.
Works just fine here on a vanilla test server and two other workstations of colleagues that were involved in testing this, so maybe it's something you (de)configured locally.

Bash:
lsinitramfs /boot/initrd.img-5.15.19-2-pve | rg simple
usr/lib/modules/5.15.19-2-pve/kernel/drivers/video/fbdev/simplefb.ko

What may be the case is that you did not update the pve-kernel-helper package to at least version 7.1-12, as that ships the config file that tells initramfs-tools to include simplefb now (it cannot be shipped by the kernel package itself, as multiple kernels with different ABIs can be installed at the same time, so it would result in a file conflict for the package management).
IOW, you probably do not have the following file (ignore the slightly misleading name: simpledrm does not play well with nvidia yet, so we had to switch back to simplefb, but the file name stayed):
cat /usr/share/initramfs-tools/modules.d/proxmox-simpledrm
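A quick way to check both things (a sketch; the version check assumes the fix shipped with pve-kernel-helper 7.1-12 as described above):

Code:
# the installed pve-kernel-helper version should be >= 7.1-12
dpkg -s pve-kernel-helper | grep '^Version'
# the file shipped by that version, telling initramfs-tools to include simplefb
cat /usr/share/initramfs-tools/modules.d/proxmox-simpledrm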
 
What may be the case is that you did not update the pve-kernel-helper package to at least version 7.1-12, as that ships the config file that tells initramfs-tools to include simplefb now (it cannot be shipped by the kernel package itself, as multiple kernels with different ABIs can be installed at the same time, so it would result in a file conflict for the package management).

Thanks Thomas, that was indeed it. I upgraded pve-kernel-helper from 7.1-10 to 7.1-12 and the console output is working now. So, both the pve kernel and the pve-kernel-helper package need to be upgraded for this workaround. Thanks!
 
I've tried 5.15 a couple of times since 5.13 broke my GPU passthrough. While I am able to get my LSI card passed through successfully, my GPU is never passed through, as the VM hangs on the BAR kernel message. I attempted again tonight with the updated 5.15 kernel and associated updates, but am still having no luck with my GPU passthrough.

I am back on 5.13 where everything works perfectly, but would be willing to test anything out if it helps.

Nvidia 1650 Super and a Dell T7810 with dual E5-2690-V3.
 
I've tried 5.15 a couple of times since 5.13 broke my GPU passthrough. While I am able to get my LSI card passed through successfully, my GPU is never passed through, as the VM hangs on the BAR kernel message. I attempted again tonight with the updated 5.15 kernel and associated updates, but am still having no luck with my GPU passthrough.

I am back on 5.13 where everything works perfectly, but would be willing to test anything out if it helps.

Nvidia 1650 Super and a Dell T7810 with dual E5-2690-V3.
That message is often caused by the host console not releasing the framebuffer memory (when the passthrough GPU is used during system boot). You could try video=simplefb:off (similar to vesafb and efifb on earlier kernels), but you will not have a host console or boot messages any more.
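For reference, a minimal sketch of adding that parameter, assuming GRUB is the bootloader (on systemd-boot/ZFS setups the parameters live in /etc/kernel/cmdline instead):

Code:
# /etc/default/grub - append the parameter to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet video=simplefb:off"
# then regenerate the boot configuration
update-grub
# or, on systems managed by proxmox-boot-tool:
proxmox-boot-tool refresh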
 
I have an issue on 5.15.19-2 passing the UHD 630 iGPU to a VM.
It works on 5.13.19-3.

The following message floods the logs

vfio-pci 0000:00:02.0: BAR 2: can't reserve [mem 0x80000000-0x8fffffff 64bit pref]

I have added video=simplefb:off to my grub cmdline
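(A quick sanity check that the parameter is actually active on the running kernel after a reboot:)

Code:
cat /proc/cmdline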

Any ideas?
 
That message is often caused by the host console not releasing the framebuffer memory (when the passthrough GPU is used during system boot). You could try video=simplefb:off (similar to vesafb and efifb on earlier kernels), but you will not have a host console or boot messages any more.
I've tried adding that, both in conjunction with the vesafb and efifb stuff and without; neither seemed to make a difference to the passthrough problem.
 
I've tried adding that, both in conjunction with the vesafb and efifb stuff and without; neither seemed to make a difference to the passthrough problem.
I've also had the same experience with BAR reservation errors on the 5.15.19-2 kernel when using the boot GPU (NVIDIA); passthrough of the non-boot GPU seems to work fine (using the same card). For now I've just rolled back to the 5.13.19-3 kernel, and boot GPU passthrough works fine there.
 
Hello, is it possible to get an ISO image with the new kernel? I ask because I have a new Intel Alder Lake CPU and could not install Proxmox (7.1-2). Maybe the new kernel will support the new CPU.

Thank you
 
I have been trying kernel 5.15.19-2 and I still get the error about BAR 0 - can't reserve memory. When I cat /proc/iomem I can see that "BOOTFB" has taken up the memory it is trying to reserve. I am using kernel 5.11.22-7-pve, which works fine except that certain things fail in the Windows guest. The command line I am using is:
Code:
BOOT_IMAGE=/boot/vmlinuz-5.11.22-7-pve root=/dev/mapper/pve-root ro quiet quiet amd_iommu=on iommu=pt video=simplefb:off

I can gather more information if needed.
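For completeness, a quick way to see whether the BOOTFB framebuffer is still claiming that region (run as root; just a sketch):

Code:
grep -i bootfb /proc/iomem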
 
Hello, is it possible to get an ISO image with the new kernel? I ask because I have a new Intel Alder Lake CPU and could not install Proxmox (7.1-2). Maybe the new kernel will support the new CPU.
It's planned to release that with the official ISO for PVE 7.2, planned for Q2, but we can look into creating a test ISO for those with very recent HW. Out of interest, at what stage did the installer fail?
 
Thank you for providing this newer kernel. It solved my reboot issues on my Asus PN50 (AMD Ryzen 7 4700U): on startup the network service failed to start until manually restarted, and this is resolved with the newer kernel.

I have not noticed any new issues being introduced with this newer kernel version.
 
It's planned to release that with the official ISO for PVE 7.2, planned for Q2, but we can look into creating a test ISO for those with very recent HW. Out of interest, at what stage did the installer fail?
Tried a few SSDs and reset the BIOS.
 

Attachments: Proxmox Error 1.jpg, Proxmox Error 2.jpg
Tried a few SSDs and reset the BIOS.
The error doesn't seem related to the kernel, or at least not in combination with the CPU, as the 5.13 one seems to work just fine with your Alder Lake CPU - otherwise you wouldn't ever get that far into the installer.

What disk/device is on /dev/sda? Going into the debug shell on tty3 (CTRL + ALT + F3) and getting the output of dmsetup info -C (a screenshot/photo is fine) would be potentially interesting.
Also, out of interest, as that error comes from the LVM setup part, can you try using ZFS for that installation?
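For reference, a sketch of those debug-shell steps (lsblk is an additional suggestion of mine, not something the installer requires):

Code:
# on tty3 of the installer (CTRL + ALT + F3)
dmsetup info -C
# optionally, to identify what /dev/sda actually is:
lsblk -o NAME,SIZE,TYPE,MODEL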
 
The next Proxmox VE point release 7.2 (~ Q2/2022) will presumably use the 5.15 based Linux kernel.
You can test this kernel now; install the meta-package that pulls in the latest 5.15 kernel with:

Code:
apt update && apt install pve-kernel-5.15

It's not required to enable the pvetest repository; the opt-in kernel package is available on all repositories.

We invite you to test your hardware with this kernel, and we are thankful for your feedback.

Please note that while we are trying to provide a stable experience with the Opt-in kernel 5.15, updates for this kernel may appear less frequently until Proxmox projects actually switch to it as their new default.
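After installing and rebooting, a quick way to verify the opt-in kernel is actually in use (a sketch; the proxmox-boot-tool command only applies to systems where it manages the boot entries):

Code:
# kernel currently running
uname -r
# kernels known to the boot tool
proxmox-boot-tool kernel list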

Hello,

I have a question: if I install a kernel from the Debian 11 repository (not from the pve repository), will I have a problem, or will I have to do some extra procedure to make it work in the Proxmox environment?

I'm asking this question because I need a recent kernel (5.17) to properly enable SMB3 multichannel on Proxmox; I tested it with this kernel and only this one made the feature work satisfactorily.

Kernel 5.17 is in the Debian repository (experimental - yes, I know the risks), but I will install it in a test environment and keep it there in testing for a while to get the SMB3 multichannel feature. My question is whether a kernel that is not from the Proxmox repository will have some functionality problem (i.e. if there is some kind of additional configuration in the pve kernel that is not in the Debian repository kernel).
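For reference, installing a newer kernel from Debian backports looks roughly like this (a sketch; it assumes bullseye-backports is already enabled in the APT sources - the experimental 5.17 package would instead need its exact versioned package name):

Code:
apt update
# install the newest kernel available from bullseye-backports
apt install -t bullseye-backports linux-image-amd64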

Can anyone tell me about this?
 
If I install a kernel from the Debian 11 repository (not from the pve repository), will I have a problem, or will I have to do some extra procedure to make it work in the Proxmox environment?
Not really related to this thread, so a new thread would be better for follow-up questions. Anyhow, it can be done, but it naturally has some caveats, and it won't get any real support here or in other support channels for issues - you're rather on your own, so to say. The kernel is rather independent of any userspace tooling, besides ABI/feature support (which normally limits how far back one can go), and we often test bugs/features with mainline kernel builds ourselves - I'd personally not use them in production though.

I'm asking this question because I need a recent kernel (5.17) to properly enable SMB3 multichannel on Proxmox; I tested it with this kernel and only this one made the feature work satisfactorily.
5.17 isn't even released upstream and is still on RC5, so I'd only go for that if everything else fails.
Also, SMB3 multichannel support was initially added in the 5.5 kernel (ref), with some subsequent big improvements landing in the 5.8 kernel, so that feature is already included in the current default 5.13 kernel and the newer, still opt-in 5.15 kernel; 5.17 only saw some minor improvements regarding reconnects and code restructuring, so why would you need a not-yet-released, still-under-development and heavily experimental build for that?

https://wiki.samba.org/index.php/SMB3_kernel_status#Multi_Channel

My question is whether a kernel that is not from the Proxmox repository will have some functionality problem (i.e. if there is some kind of additional configuration in the pve kernel that is not in the Debian repository kernel).

Can anyone tell me about this?
Maybe, maybe not; it depends a lot on your environment and actual HW/setup - nobody can give you guarantees for a not-even-released kernel.
 
Not really related to this thread, so a new thread would be better for follow-up questions. Anyhow, it can be done, but it naturally has some caveats, and it won't get any real support here or in other support channels for issues - you're rather on your own, so to say. The kernel is rather independent of any userspace tooling, besides ABI/feature support (which normally limits how far back one can go), and we often test bugs/features with mainline kernel builds ourselves - I'd personally not use them in production though.


5.17 isn't even released upstream and is still on RC5, so I'd only go for that if everything else fails.
Also, SMB3 multichannel support was initially added in the 5.5 kernel (ref), with some subsequent big improvements landing in the 5.8 kernel, so that feature is already included in the current default 5.13 kernel and the newer, still opt-in 5.15 kernel; 5.17 only saw some minor improvements regarding reconnects and code restructuring, so why would you need a not-yet-released, still-under-development and heavily experimental build for that?

https://wiki.samba.org/index.php/SMB3_kernel_status#Multi_Channel


Maybe, maybe not; it depends a lot on your environment and actual HW/setup - nobody can give you guarantees for a not-even-released kernel.
First, thanks for responding to me.

About the question: "so why would you need a not-yet-released, still-under-development and heavily experimental build for that?"

Here's my answer: I've been testing kernels and configurations for a week to get my environment to work with SMB3 multichannel. I tested kernels 5.13 (from Proxmox), 5.15 (from Proxmox), 5.10 (Debian stable), 5.14 (Debian backports), 5.15 (Debian backports) and 5.17 (Debian experimental). Only kernels 5.10 and 5.17 worked in my specific SMB3 multichannel scenario. I came to suspect it was something compiled into the Proxmox 5.13 and 5.15 kernels, so I installed the 5.15 kernel from Debian backports, and multichannel didn't work there either, leading me to the conclusion that something was changed upstream in the code that broke the functionality for my scenario (interfaces without bond or LACP). My suspicion was confirmed when I searched the Samba community pages and found the note that more than 50 changes were made to the code (it was practically all redone) for the SMB3 multichannel mechanism in the 5.17 kernel. And it was exactly the 5.17 kernel where SMB3 multichannel worked.

I spent almost a week testing and trying to understand the reason, and believe me, only on 5.17 and 5.10 does multichannel work correctly (true distribution, comparable to iSCSI multipath). So this is why I'm looking at the 5.17 kernel. If I stick with the 5.10 kernel, it's not native to Proxmox 7.1-2, which is why I asked about installing a kernel from Debian's stable repository. The problem is in the CIFS module built into the kernel (module version 2.35 is the one with the many fixes that landed in kernel 5.17). On the 5.13/5.15 kernels the cifs.ko module is still version 2.33/2.34, with bugs that prevent multichannel in my scenario or keep the feature from working properly.
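(For anyone wanting to check this on their own node, the in-kernel CIFS client version can be read from the module metadata - a quick sketch:)

Code:
# shows the cifs.ko client version shipped with the running kernel (e.g. 2.33/2.34/2.35)
modinfo cifs | grep -iw version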

I think multichannel SMB3 and pNFS will be a great option for those who need efficient load-balanced connections using only layer 3+4 (no need to use LACP or bond on the interfaces). So I'm focusing on this solution.

If you participate in the review of the 5.17 kernel for a future official Proxmox release, I recommend keeping an eye on it so it can be made available as soon as possible, as this is the key to excellent performance: bandwidth is distributed without the need for LACP/bonding, just by using an SMB3 share with multichannel capability (mount -t cifs -o multichannel) on each Proxmox client. It will probably be a much better solution than LVM+iSCSI, because in addition to CIFS handling simultaneous accesses from competing clients better, it gives maximum speed equally distributed over each network cable without any switch configuration. Not to mention that it still allows multiple snapshots using the qcow2 format. A much better and much simpler scenario than LVM+iSCSI.
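As an illustration of the kind of mount I mean (a sketch - server, share, mount point and credentials file are placeholders):

Code:
mount -t cifs //fileserver/vmstore /mnt/vmstore \
    -o credentials=/root/.smbcred,vers=3.11,multichannel,max_channels=4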

On the issue of multichannel working on kernels from 5.5 onwards: there is a difference between releasing a feature and that feature remaining functional through subsequent additions to the code, versus just saying the kernel is compatible with it. The feature can work from a Windows client to a Samba server (that works just fine). But the scenario here is a specific kernel module on the client side, which a Windows client does not use. The cifs.ko module is a separate part of the stack used on the Linux client side, and it is precisely this module that is still maturing; it only worked for me on 5.10 (where that code happened to work well) and 5.17 (where they added fixes that made the code work correctly).

The Samba community article reporting the CIFS module status:

"[5.17 kernel (module version 2.35, 50 changesets so far)]: ...Reconnect improvements for DFS and multichannel use cases, and restructuring of multichannel code. Fixes to serialize mount attempts to avoid races, to fix memory leak during ntlmssp negotiate , and to DFS special character handling, and also important ACL fixes and fix for snapshots post conversion to the new mount API (in 5.11)..."

Source: https://wiki.samba.org/index.php/LinuxCIFSKernel

Anyway, thanks for the information posted. My alternative will be to keep the 5.17 kernel to guarantee the feature and keep an eye on the test environment until the community releases this kernel permanently on proxmox or debian stable.
 
Has anyone else had success with the 5.15 series and single-GPU passthrough?
Yes, by not blacklisting amdgpu (because it is an RX 570) and adding video=efifb:off video=vesafb:off video=simplefb:off to the kernel parameters.
EDIT: As of the new kernel today (or maybe a version earlier), I don't even need those kernel parameters anymore.
 
@t.lamprecht I have not done tests with your 5.15 kernel yet, but it seems this fix of the Mellanox mlx5 driver would be important to include (the fix is included in vanilla kernel 5.15.20):
The underlying issue was introduced with vanilla Linux kernel 5.13.0:
Further details can be found here:
As the Proxmox 5.13 kernel is affected by this Mellanox issue too, it would be great if the fix could be backported to the Proxmox 5.13 kernel as well.
We will do further tests tomorrow with both 5.15.19 (which should have the Mellanox issue) and 5.15.20 (which should work).
 
