Linux Kernel 5.4 for Proxmox VE

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
shop.proxmox.com
This single setting in /etc/modprobe.d/zfs.conf doesn't work on Proxmox VE 6.1-2 with kernel 5.3.18-2-pve. To resolve the issue, I had to roll back/downgrade to kernel 5.0.15-1-pve.


It works here:

Code:
root@prod2:~# cat /sys/module/zfs/parameters/zfs_arc_max
0
root@prod2:~# uname -a
Linux prod2 5.3.18-3-pve #1 SMP PVE 5.3.18-3 (Tue, 17 Mar 2020 16:33:19 +0100) x86_64 GNU/Linux
root@prod2:~# grep c_max /proc/spl/kstat/zfs/arcstats
c_max                           4    16780830720
root@prod2:~# echo 'options zfs zfs_arc_max=8589934592' > /etc/modprobe.d/zfs.conf
root@prod2:~# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.3.18-3-pve
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
root@prod2:~# reboot 

...

# ssh root@192.168.30.65
Linux prod2 5.3.18-3-pve #1 SMP PVE 5.3.18-3 (Tue, 17 Mar 2020 16:33:19 +0100) x86_64
    [ --8< -- snip --8<-- ]
Last login: Thu Apr  9 18:08:38 2020 from 192.168.16.38
root@prod2:~# grep c_max /proc/spl/kstat/zfs/arcstats
c_max                           4    8589934592
root@prod2:~# cat /sys/module/zfs/parameters/zfs_arc_max
8589934592
root@prod2:~#

And since screenshots seem to carry more weight as proof:

scrot-zfs-arc-max-set.png

Seems to work fine..

So please open a new thread and specify what you changed in your setup that makes it act like this; it definitely works on a default setup from our installer here, re-checked more than once. So unless you really did forget a step, it has to be something out of the ordinary for it not to work. You can use cat /sys/module/zfs/parameters/zfs_arc_max to check whether the module parameter is actually in effect.
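For reference, the modprobe.d line used above can be generated from a GiB figure; a minimal sketch (the 8 GiB value matches the example in the transcript):

```shell
# Sketch: compute zfs_arc_max in bytes from a GiB limit and print the
# matching modprobe.d option line. 8 GiB = 8589934592 bytes, as used above.
arc_gib=8
arc_bytes=$((arc_gib * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${arc_bytes}"
```

As in the transcript, write that line to /etc/modprobe.d/zfs.conf, run update-initramfs -u, and reboot for it to take effect at module load.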
 

t.lamprecht

I think native WireGuard support will be in kernel 5.6 ... not before :rolleyes:
Yes, as said, 5.4 won't have it natively, but using the wireguard-dkms package from buster-backports works great; and as that is more or less the same code from the same author(s), the experience is the same.
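For completeness, a sketch of that backports route (repository line and package names are assumptions based on Debian buster conventions; verify them for your setup). The system-changing steps are left as comments:

```shell
# Assumption: a PVE 6.x host on a Debian buster base. The wireguard package
# in buster-backports pulls in the DKMS module, which builds against the
# installed kernel headers.
repo_line="deb http://deb.debian.org/debian buster-backports main"
echo "$repo_line"
# Not executed here -- these modify the system:
#   echo "$repo_line" > /etc/apt/sources.list.d/buster-backports.list
#   apt update
#   apt install -t buster-backports wireguard
```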
 


moxfan

Active Member
Aug 28, 2013
This works just fine here on both 5.4 and 5.3 based kernels; I actually just retested it successfully. Maybe you just forgot to update the initramfs:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_limit_zfs_memory_usage
Quick feedback: your method to limit ZFS's memory usage works fine here with the 5.3 kernel.
Just two things I am not sure about.
Is the ARC memory used for a container included in that container's and the host's (h)top output?
Also, is there a command to reliably find out the ARC memory used by each container?
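As far as I know there is no per-container ARC breakdown: the ARC lives in the host kernel, so a container's own (h)top won't show it. The host-wide figures come from the arcstats file used earlier in this thread; a self-contained sketch (sample data inlined, since /proc/spl/kstat/zfs/arcstats only exists on a ZFS host):

```shell
# Sample lines as they appear in /proc/spl/kstat/zfs/arcstats; on a real
# host, read the file directly instead of this inline sample.
arcstats='size                            4    4294967296
c_max                           4    8589934592'
# Print each kstat as name=value (column 3 holds the value).
echo "$arcstats" | awk '{print $1 "=" $3}'
```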
 

morph027

Well-Known Member
Mar 22, 2013
Leipzig
morph027.gitlab.io
Also running a cluster of 3 mixed nodes (ZFS) without any problems since the first release.

- Intel(R) Xeon(R) CPU E3-1230 v3 w/ 32GB RAM
- Intel(R) Xeon(R) CPU E3-1240 v5 w/ 64GB RAM
- AMD EPYC 7351P 16-Core Processor w/ 128GB RAM
 

UniverseX

Member
Mar 10, 2019
Are there any plans to add support for VirtIO-FS in Proxmox VE 6.2? It has been added to Linux 5.4 and now also to QEMU 5.0.
 

t.lamprecht

Are there any plans to add support for VirtIO-FS in Proxmox VE 6.2? It has been added to Linux 5.4 and now also to QEMU 5.0.
No, we will look at that at a later point, but not for Proxmox VE 6.2. Note also that QEMU 5.0 has not seen its final release yet (maybe today, though ^^) and that the guest kernel needs to support this too, IIRC.
 
Feb 19, 2019
Has anyone used the 5.4 kernel's in-tree support for the *new* Intel X710 NICs released in 2019, which use the i40e module? I had to build the module from the Intel source package for pve-kernel-5.3, as it does not support these new cards (https://ark.intel.com/content/www/u.../intel-ethernet-network-adapter-x710-t2l.html for example).

There are indications that the drivers are included and working, but some confirmed reports would be nice.
 

Mr.Goodcat

Member
Feb 8, 2020
What's the general policy/roadmap on kernel updates? It appears as if new Ubuntu kernels are introduced until Proxmox x.2, and the next update after that becomes available with the upcoming major release (i.e. PVE 7.0)?
 

t.lamprecht

What's the general policy/roadmap on kernel updates? It appears as if new Ubuntu kernels are introduced until Proxmox x.2, and the next update after that becomes available with the upcoming major release (i.e. PVE 7.0)?

Yes, it doesn't have to be .2, but it normally falls together. We introduce a major version with a recent kernel and keep upgrading so we always stay on an (upstream-)supported kernel, until we get to the next LTS release, ideally one that is an LTS both for the real upstream (kernel.org) and for Ubuntu; then we know that version will get the best support for older issues as well as for newer hardware. (4.15 was an exception due to the Intel Meltdown/Spectre mess happening at the time.)
 

kkasberg

Member
Oct 7, 2015
I get

Code:
Unable to locate package pve-kernel-5.4

when trying to install kernel 5.4 using the command in the first post.
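That error usually means apt cannot see any Proxmox package repository. A sketch of the usual check (the no-subscription repository line below assumes a PVE 6.x / Debian buster base; verify it against the docs for your version). System-changing steps are left as comments:

```shell
# The pve-kernel-5.4 package comes from the Proxmox repositories; if apt
# cannot locate it, a repo entry is most likely missing or the index stale.
repo_line="deb http://download.proxmox.com/debian/pve buster pve-no-subscription"
echo "$repo_line"
# Not executed here -- these modify the system:
#   echo "$repo_line" > /etc/apt/sources.list.d/pve-no-subscription.list
#   apt update
#   apt install pve-kernel-5.4
```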
 

BobMccapherey

Member
Apr 25, 2020
I'm hoping this gets rid of the problems I've been having with the most recent 5.3 kernel: random page faults arising from the use of Intel GVT-g on a Coffee Lake ER Xeon processor.
 

onepamopa

Member
Dec 1, 2019
Well, looks like mine are "hard" lockups ... everything freezes, and not a single line in any of the logs.
I've enabled crash dumps; hopefully they will at least provide something.
 
