Linux Kernel 5.4 for Proxmox VE

t.lamprecht

Proxmox Staff Member
The upcoming Proxmox VE 6.2 (Q2/2020) will use a 5.4-based Linux kernel. You can test this kernel now; install it with:

Code:
apt update && apt install pve-kernel-5.4
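After installing, a reboot is needed to actually run the new kernel. A quick way to check which kernel is active and which pve-kernel packages are installed (just a sketch, assuming a default setup):

Bash:
uname -r                          # should print a 5.4.x version after the reboot
pveversion -v | grep pve-kernel   # lists the installed pve-kernel packages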
This is a Long Term Support (LTS) kernel and will receive updates for the remaining Proxmox VE 6.x releases.
Enabling the pvetest repository is not required, but doing so will get you updates to this kernel faster.
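If you want those faster updates, the pvetest repository can be enabled with an extra APT source. A minimal sketch for Proxmox VE 6.x (which is based on Debian Buster); adapt it to your setup:

Bash:
# add the pvetest repository and refresh the package index
echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update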

We invite you to test your hardware with this kernel and are thankful for your feedback.
 

sg90

Active Member
Are there any big headline features that we should look out for / benefit from when this comes to PVE? I know I can check the upstream 5.4 changelog; I was more asking whether there is anything in particular from 5.3.x to 5.4.x, or whether this is mainly about moving to an LTS kernel.
 

t.lamprecht

Proxmox Staff Member
- More efficient polling in virtualized guests with haltpoll (a quick check sketch follows after this list)
- Support for new AMD/Intel HW (graphics and CPU)
- ... It's probably better you read the changelog yourself than me repeating half of those points here ;) https://kernelnewbies.org/Linux_5.4#Coolest_features
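Regarding the haltpoll point above: inside a Linux 5.4+ KVM guest you can check from sysfs whether the haltpoll cpuidle driver is in use. A rough sketch, assuming the guest kernel was built with CONFIG_HALTPOLL_CPUIDLE (the modprobe line only applies if it was built as a module):

Bash:
cat /sys/devices/system/cpu/cpuidle/current_driver        # expected to show "haltpoll" in a KVM guest
cat /sys/devices/system/cpu/cpuidle/current_governor_ro   # expected to show "haltpoll"
modprobe cpuidle_haltpoll                                  # only needed if built as a module and not auto-loaded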

The idea of this post is to get people to test this kernel now, while it's still easy to investigate and report issues or cherry-pick patches to the various upstreams. Once it is released (and Ubuntu has released it too), the stable update policies on both sides mean you may have to live with any issue this kernel has on your HW for longer (or we stay on an old, outdated kernel, which is also not ideal).
 

kriansa

New Member
Hello,

I'm using an HP MicroServer Gen8 and I've been facing this issue since I upgraded to 5.4.

These are the logs I get when I try to start a VM with a passed-through device:


Code:
kvm: -device vfio-pci,host=0000:05:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: VFIO_MAP_DMA: -22
kvm: -device vfio-pci,host=0000:05:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio_dma_map(0x7f48cd667c80, 0x0, 0x80000000, 0x7f4645400000) = -22 (Invalid argument)
kvm: -device vfio-pci,host=0000:05:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:05:00.0: failed to setup container for group 1: memory listener initialization failed for container: Invalid argument
TASK ERROR: start failed: QEMU exited with code 1
 

t.lamprecht

Proxmox Staff Member
Hi, it's probably better to open a new thread for this (please also add info about the passed-through HW).
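For that thread, the output of something like the following is useful to include for the passed-through device (just a sketch; 0000:05:00.0 is taken from the error above):

Bash:
lspci -nnk -s 05:00.0                                     # PCI IDs and the driver currently bound
find /sys/kernel/iommu_groups/ -type l | grep 05:00.0     # the IOMMU group the device sits in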

Two quick pointers to what could be at fault:

First guess: some vfio_* related modules were moved from being loadable kernel modules to being built directly into the kernel. This "breaks" setting parameters for those modules via /etc/modprobe.d, see: https://forum.proxmox.com/threads/i...nabled-still-error-message.67341/#post-302299
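If that is the cause, parameters for the now built-in vfio modules have to be passed on the kernel command line instead. A hedged example for a GRUB-booted system; vfio_iommu_type1.allow_unsafe_interrupts=1 is only an illustration, use whatever parameter you actually had set in /etc/modprobe.d:

Bash:
# in /etc/default/grub, append the parameter to GRUB_CMDLINE_LINUX_DEFAULT, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet vfio_iommu_type1.allow_unsafe_interrupts=1"
update-grub    # then reboot; on systemd-boot (ZFS root) setups edit /etc/kernel/cmdline and run pve-efiboot-tool refresh instead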

Second guess: memory limits (although the new kernel should not have changed those). You could try to increase the memlock ulimit for root; execute the following command as a whole and reboot:
Bash:
cat >/etc/security/limits.d/memlock-root-unlimited.conf <<EOF
root    hard    memlock    unlimited
root    soft    memlock    unlimited
EOF
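After the reboot, you can verify that the new limit is actually applied in a fresh root shell, for example:

Bash:
ulimit -l                                    # should now report "unlimited"
grep "Max locked memory" /proc/self/limits   # same information from the shell's own limits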
 
