Linux Kernel 5.4 for Proxmox VE

t.lamprecht

Proxmox Staff Member
The upcoming Proxmox VE 6.2 (Q2/2020) will use a 5.4-based Linux kernel. You can test this kernel now; install it with:

Code:
apt update && apt install pve-kernel-5.4

This version is a Long Term Support (LTS) kernel and will get updates for the remaining Proxmox VE 6.x based releases.
It's not required to enable the pvetest repository, but doing so will get you updates to this kernel faster.
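
For reference, and assuming a standard PVE 6.x install on Debian Buster, the pvetest repository would be an entry like the following (adjust if your setup differs), followed by an apt update:

Code:
# /etc/apt/sources.list.d/pvetest.list
deb http://download.proxmox.com/debian/pve buster pvetest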

We invite you to test your hardware with this kernel and would be thankful for your feedback.
 
Are there any big headliner features that we should look out for / benefit from when this comes to PVE? I know I can check the upstream 5.4 changelog; I was more wondering whether there is anything particular from 5.3.x to 5.4.x, or if it's just the move to an LTS kernel.
 
- More efficient polling in virtualized guests with haltpoll (a quick way to check this from inside a guest is sketched after this list)
- Support for new AMD/Intel HW (graphics and CPU)
- ... It's probably better you read the changelog yourself than me repeating half of those points here ;) https://kernelnewbies.org/Linux_5.4#Coolest_features
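
A rough way to check the haltpoll point from inside a KVM guest running this kernel (assuming the guest exposes the usual cpuidle sysfs files) would be:

Code:
# show the active cpuidle driver and governor inside the guest
cat /sys/devices/system/cpu/cpuidle/current_driver
cat /sys/devices/system/cpu/cpuidle/current_governor_ro
# look for haltpoll initialization messages
dmesg | grep -i haltpoll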

The idea of this post is to get people to test this kernel now, while it's still easy to investigate and report issues or to cherry-pick patches from the various upstreams. Once it's released here, and Ubuntu has released theirs, the stable update policies of both projects mean that you might have to live with a possible issue this kernel brings on your HW for longer (or with us shipping an old, outdated kernel, which is also not ideal).
 
Thanks, this version appears to have cleared my dual AMD EPYC host of segfaults & general protection faults, much to my relief.
 
Hello,

I'm using an HP Microserver Gen8 and I'm facing this issue since I upgraded to 5.4.

These are the logs I get when I try to start a VM with a passed-through device:


Code:
kvm: -device vfio-pci,host=0000:05:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: VFIO_MAP_DMA: -22
kvm: -device vfio-pci,host=0000:05:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio_dma_map(0x7f48cd667c80, 0x0, 0x80000000, 0x7f4645400000) = -22 (Invalid argument)
kvm: -device vfio-pci,host=0000:05:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:05:00.0: failed to setup container for group 1: memory listener initialization failed for container: Invalid argument
TASK ERROR: start failed: QEMU exited with code 1
 
Hi, it may be better to open a new thread for this (please also add info about the passed-through HW).

Two quick guesses at what could be at fault:

First guesstimation: some vfio_* related modules were moved from being loadable kernel modules to being built directly into the kernel, which "breaks" setting parameters for those modules via /etc/modprobe.d, see: https://forum.proxmox.com/threads/i...nabled-still-error-message.67341/#post-302299
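
A minimal sketch of how to check and work around that (the allow_unsafe_interrupts parameter below is only an example; substitute whatever option you had set in /etc/modprobe.d):

Code:
# check whether the vfio modules are now built into the running kernel
grep vfio /lib/modules/$(uname -r)/modules.builtin

# built-in modules ignore /etc/modprobe.d at boot; pass the option on the
# kernel command line instead, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet vfio_iommu_type1.allow_unsafe_interrupts=1"
# then apply it (or the equivalent for your bootloader) and reboot:
update-grub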

Second guesstimation: memory limits (although the new kernel should not have changed that). You could try to increase the ulimits for root; execute the following command as a whole and reboot:
Bash:
cat >/etc/security/limits.d/memlock-root-unlimited.conf <<EOF
root    hard    memlock    unlimited
root    soft    memlock    unlimited
EOF
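
After the reboot, a quick sanity check in a fresh root login shell would be:

Code:
# should print "unlimited" if the new memlock limits were picked up
ulimit -l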
 
I wonder if I can use WireGuard inside Proxmox CTs. Are there any limitations?

Please open a new thread for new topics, thanks.
 
I'm not well versed in kernel development; is there a list somewhere where I could see what new hardware is supported (like the RTL8125 mentioned above)?
 
The Linux Kernel Newbies site has a pretty digestible summary of what work went into drivers, e.g., the networking driver changes for the 5.4 kernel: https://kernelnewbies.org/Linux_5.4#Networking-1

Note, it can still be a bit much "raw" information, and it's not a 100% complete shortlog of what happened in a release, as there is simply so much going on!
One can also check out the history of the respective driver subsystem directory, e.g., with git log, to get the full truth :)
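
For example, with a clone of the mainline kernel repository, and using the Realtek ethernet driver directory just as an illustration:

Code:
git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
# all commits touching this driver between the 5.3 and 5.4 releases
git log --oneline v5.3..v5.4 -- drivers/net/ethernet/realtek/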
 
Hello Colleagues,

In the latest version, pve-manager/6.1-7/13e58d5e (running kernel: 5.3.18-2-pve), there is a problem in the ZFS kernel module, specifically with configuring the RAM limit via the ZFS module options: this feature doesn't work. I set up the limits in /etc/modprobe.d/zfs.conf:
Code:
options zfs zfs_arc_min=8589934592
options zfs zfs_arc_max=8589934592

It's a known issue on this forum that the RAM limit for ZFS doesn't work with kernel 5.3.18; I just want to know before installing: how does this feature behave in kernel 5.4?
 
It's a known issue on this forum that the RAM limit for ZFS doesn't work with kernel 5.3.18

Do you have a link to that forum thread and/or some evidence comparing it with earlier kernels which don't have this problem?
 
Do you have a link to that forum thread and/or some evidence comparing it with earlier kernels which don't have this problem?

Of course I have some evidence comparing it with earlier kernels; I was testing it yesterday.

For testing, a 1U blade server with 512 GB of DDR4 ECC RAM was used; Proxmox VE was installed on a 64 GB SATA DOM, and for VM storage I made a ZFS RAID0 stripe (total capacity 1.8 TB).

In both cases the same hardware was used and the installation was done on bare metal.

The ZFS/zpool configuration is also the same in both cases:

(screenshots: zpool.jpg, ssd-rai0.png)

By the way, I made this pool by disk ID; for NVMe SSDs you can find this information with: ls -l /dev/disk/by-id/
For HDDs you may use the command: hdparm -i /dev/sdb

Code:
zpool create -f nvme-raid0 nvme-Samsung_SSD_970_PRO_1TB_SERIALNUMBER nvme-Samsung_SSD_970_PRO_1TB_SERIALNUMBER
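
For completeness, the resulting layout can be verified with standard commands, e.g.:

Code:
# show pool layout and health
zpool status nvme-raid0
# show size, allocation and capacity
zpool list nvme-raid0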

Take a look at the screenshots below:

1. System installed from image Proxmox-ve_6.0-1.iso

(screenshots: 6.0-4.png, CLI-6.0-4.jpg)


2. System installed from image Proxmox-ve_6.1-2.iso

(screenshots: 6.1-7.png, CLI-6.1-7.jpg)


In both cases, on the test instances, I made the configuration according to the PVE wiki about ZFS on Linux (link):

(screenshot: limit_ZFS.png)

That is, I applied the config according to the advice from the docs:

Code:
sudo vi /etc/modprobe.d/zfs.conf

# add this line and save the new zfs.conf file
options zfs zfs_arc_max=8589934592

update-initramfs -u
reboot

After that, I added the ZFS storage via the web interface and began using this PVE installation and installing new virtual machines.

1. For Proxmox-ve_6.0-4: the limit for ZFS storage works fine!

2. For Proxmox-ve_6.1-7: the limit for ZFS storage doesn't work at all!

modprobe.d/zfs.conf has no effect on these values in PVE 6.1-7 :(

Arcstats by default:
c_min = 15.75 GB
c_max = 251.92 GB
Code:
cat /proc/spl/kstat/zfs/arcstats | grep c_min
c_min                            4    16906070656
cat /proc/spl/kstat/zfs/arcstats | grep c_max
c_max                           4    270497130496
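
As a cross-check on whether the modprobe options were applied at all, the live module parameters can also be read (and, for a non-persistent runtime test, written), e.g.:

Code:
# value currently in effect for the loaded zfs module (0 = use the built-in default)
cat /sys/module/zfs/parameters/zfs_arc_max
# runtime-only test, not persistent across reboots
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max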


This single tuning via /etc/modprobe.d/zfs.conf doesn't work in Proxmox-ve_6.1-7 with kernel 5.3.18-2-pve. To resolve this issue, I had to roll back/downgrade the kernel to 5.0.15-1-pve.
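
In case someone needs the same workaround: an older PVE kernel can normally be installed in parallel and selected at boot. This is only a sketch; the exact package name depends on the kernel ABI you want (5.0.15-1-pve is simply the version mentioned above):

Code:
# install the older kernel alongside the current one
apt install pve-kernel-5.0.15-1-pve
# reboot and pick it from the "Advanced options" entry in the boot menu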

A similar case has also been discussed here, in this thread:

(screenshot: message.jpg)
 

