Linux Kernel 5.3 for Proxmox VE

martin

Proxmox Staff Member
Staff member
Apr 28, 2005
The upcoming Proxmox VE 6.1 (planned for the end of Q4/2019) will use a 5.3 Linux kernel. You can test it already: just enable the pvetest repository on your test system and install it with:

Code:
apt update && apt install pve-kernel-5.3
We invite you to test your hardware with this kernel; we are thankful for your feedback.
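After installing the package and rebooting into the new kernel, a quick sanity check (standard commands, nothing Proxmox-specific assumed) confirms which kernel the node is actually running:

```shell
# Show the kernel the node is currently running; after a successful
# reboot into the new kernel this should report a 5.3.x-pve release.
booted=$(uname -r)
echo "running kernel: $booted"
```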

__________________
Best regards,

Martin Maurer
 

Dark26

Member
Nov 27, 2017
The upcoming Proxmox VE 6.1 (planned for the end of Q4/2019) will use a 5.3 Linux kernel. You can test it already: just enable the pvetest repository on your test system and install it with:

Code:
apt update && apt install pve-kernel-5.3
We invite you to test your hardware with this kernel; we are thankful for your feedback.
Great news. I'm trying it right now on my cluster at home.
 

ChrisWorks

New Member
Jul 25, 2019
Do you guys have a planned release date for the update? Or at least something a bit more specific, maybe?

How safe is it to use the test version? Are we talking about an alpha or more of a beta status?
 

Dark26

Member
Nov 27, 2017
Do you guys have a planned release date for the update? Or at least something a bit more specific, maybe?

How safe is it to use the test version? Are we talking about an alpha or more of a beta status?
If you don't need the fixes included in it, don't install it. For my part, the fix for the Bay Trail CPU bug is in it, and I have had no more crashes since I installed it.
 

n1nj4888

Member
Jan 13, 2019
I assume it is safe to upgrade one node in a cluster to 5.3 and leave the other nodes on the latest PVE 6.0 kernel (5.0.x)?

Thanks!
 

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
Do you guys have a planned release date for the update? Or at least something a bit more specific, maybe?
See the first post.
How safe is it to use the test version? Are we talking about an alpha or more of a beta status?
The kernel seems to run great; so far there are no known issues. Mixing kernels inside a cluster is possible.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
I assume it is safe to upgrade one node in a cluster to 5.3 and leave the other nodes on the latest PVE 6.0 kernel (5.0.x)?

Thanks!
In this case, yes, but we cannot always guarantee that live-migrating from a node running a newer kernel to a node running an older kernel works. Forward compatibility (older to newer) is guaranteed unless explicitly stated otherwise. While we and our upstream projects try not to break backwards compatibility, it can happen or is sometimes unavoidable.

So running mixed kernels in a cluster can be done temporarily and to test things, and all should be well, but it should not be a permanent state unless really required. We normally do not move things with known grave issues even to the pvetest repository, so you can use it just fine; naturally, you may still hit issues specific to the hardware and environment of your setup. That is what this call for testing is for: to rule such things out and give the new kernel a bigger test surface.
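The forward-only rule can be checked mechanically before a live migration; here is a minimal sketch that compares two kernel release strings with `sort -V`. The node names and version values are examples, not taken from this thread:

```shell
# Decide whether a live migration goes "forward" (older -> newer kernel,
# the guaranteed direction) given two kernel release strings.
src="5.0.15-1-pve"   # kernel on the source node (example value)
dst="5.3.7-1-pve"    # kernel on the target node (example value)
if [ "$(printf '%s\n%s\n' "$src" "$dst" | sort -V | head -n1)" = "$src" ]; then
    echo "forward migration (older -> newer): generally safe"
else
    echo "backward migration (newer -> older): not guaranteed"
fi
```

In practice you would fill `src` and `dst` from `uname -r` on each node.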
 

ozdjh

New Member
Oct 8, 2019
Hi

Just FYI, we lost our Broadcom-based 10GbE NICs (bnx2x) after installing the new kernel. We had to bring up a link on an onboard 1GbE port and install the pve-firmware package to get the node back on the network.

Thanks

David
...
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
install the pve-firmware package
Huh, that one should always be installed. Our meta-packages for kernels, like pve-kernel-5.3 and pve-kernel-5.0, both depend on the pve-firmware package. So if you use those to install kernels (as recommended; otherwise you may not get newer ABI updates), you should have it.
 

ozdjh

New Member
Oct 8, 2019
I've gone back through my scroll buffers and found some of the upgrades. Details from the first node (the one that failed) are below. The firmware package wasn't updated. I'll include the details from one of the other nodes in another post, as it won't let me include both in one post (too large).

Code:
root@ed-hv1:~# apt update && apt install pve-kernel-5.3
Hit:2 http://ftp.au.debian.org/debian buster InRelease
Get:3 http://ftp.au.debian.org/debian buster-updates InRelease [49.3 kB]
Get:1 http://security-cdn.debian.org buster/updates InRelease [39.1 kB]      
Hit:4 http://download.proxmox.com/debian/ceph-nautilus buster InRelease
Get:5 http://download.proxmox.com/debian/pve buster InRelease [3,051 B]
Get:6 http://download.proxmox.com/debian/pve buster/pvetest amd64 Packages [122 kB]
Fetched 213 kB in 5s (47.4 kB/s)  
Reading package lists... Done
Building dependency tree      
Reading state information... Done
108 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following additional packages will be installed:
  pve-kernel-5.3.7-1-pve
The following NEW packages will be installed:
  pve-kernel-5.3 pve-kernel-5.3.7-1-pve
0 upgraded, 2 newly installed, 0 to remove and 108 not upgraded.
Need to get 59.5 MB of archives.
After this operation, 284 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://download.proxmox.com/debian/pve buster/pvetest amd64 pve-kernel-5.3.7-1-pve amd64 5.3.7-1 [59.5 MB]
Get:2 http://download.proxmox.com/debian/pve buster/pvetest amd64 pve-kernel-5.3 all 6.0-11 [3,004 B]
Fetched 59.5 MB in 11s (5,198 kB/s)                                          
Selecting previously unselected package pve-kernel-5.3.7-1-pve.
(Reading database ... 46923 files and directories currently installed.)
Preparing to unpack .../pve-kernel-5.3.7-1-pve_5.3.7-1_amd64.deb ...
Unpacking pve-kernel-5.3.7-1-pve (5.3.7-1) ...
Selecting previously unselected package pve-kernel-5.3.
Preparing to unpack .../pve-kernel-5.3_6.0-11_all.deb ...
Unpacking pve-kernel-5.3 (6.0-11) ...
Setting up pve-kernel-5.3.7-1-pve (5.3.7-1) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
update-initramfs: Generating /boot/initrd.img-5.3.7-1-pve
W: Possible missing firmware /lib/firmware/bnx2x/bnx2x-e2-7.13.11.0.fw for module bnx2x
W: Possible missing firmware /lib/firmware/bnx2x/bnx2x-e1h-7.13.11.0.fw for module bnx2x
W: Possible missing firmware /lib/firmware/bnx2x/bnx2x-e1-7.13.11.0.fw for module bnx2x
run-parts: executing /etc/kernel/postinst.d/pve-auto-removal 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
run-parts: executing /etc/kernel/postinst.d/zz-pve-efiboot 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.3.7-1-pve
Found initrd image: /boot/initrd.img-5.3.7-1-pve
Found linux image: /boot/vmlinuz-5.0.15-1-pve
Found initrd image: /boot/initrd.img-5.0.15-1-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
done
Setting up pve-kernel-5.3 (6.0-11) ...

root@ed-hv1:~#
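The `W: Possible missing firmware` warnings in the log above can be verified directly. A sketch, assuming the kmod tools and the usual Debian `/lib/firmware` layout (the bnx2x module may simply not exist on other systems, which the script reports rather than failing):

```shell
# List the firmware files a module requests and check whether each
# one is actually present under /lib/firmware.
mod="bnx2x"   # the NIC driver from the log above; substitute your module
fw_needed=$(modinfo -F firmware "$mod" 2>/dev/null || true)
if [ -z "$fw_needed" ]; then
    echo "module $mod not available on this system"
else
    for f in $fw_needed; do
        if [ -e "/lib/firmware/$f" ]; then
            echo "present: $f"
        else
            echo "MISSING: $f (install/upgrade pve-firmware)"
        fi
    done
fi
```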
 

ozdjh

New Member
Oct 8, 2019
On the other nodes in the cluster I ran:

Code:
root@ed-hv4:~# apt update && apt install pve-kernel-5.3 pve-firmware
Hit:2 http://ftp.au.debian.org/debian buster InRelease
Get:3 http://ftp.au.debian.org/debian buster-updates InRelease [49.3 kB]
Get:1 http://security-cdn.debian.org buster/updates InRelease [39.1 kB]         
Hit:4 http://download.proxmox.com/debian/ceph-nautilus buster InRelease
Get:5 http://download.proxmox.com/debian/pve buster InRelease [3,051 B]
Get:6 http://download.proxmox.com/debian/pve buster/pvetest amd64 Packages [122 kB]
Fetched 213 kB in 3s (85.2 kB/s)   
Reading package lists... Done
Building dependency tree       
Reading state information... Done
108 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  pve-kernel-5.3.7-1-pve
The following NEW packages will be installed:
  pve-kernel-5.3 pve-kernel-5.3.7-1-pve
The following packages will be upgraded:
  pve-firmware
1 upgraded, 2 newly installed, 0 to remove and 107 not upgraded.
Need to get 103 MB of archives.
After this operation, 269 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://download.proxmox.com/debian/pve buster/pvetest amd64 pve-firmware all 3.0-4 [43.8 MB]
Get:2 http://download.proxmox.com/debian/pve buster/pvetest amd64 pve-kernel-5.3.7-1-pve amd64 5.3.7-1 [59.5 MB]                                   
Get:3 http://download.proxmox.com/debian/pve buster/pvetest amd64 pve-kernel-5.3 all 6.0-11 [3,004 B]                                              
Fetched 103 MB in 19s (5,539 kB/s)                                                                                                                 
Reading changelogs... Done
(Reading database ... 45688 files and directories currently installed.)
Preparing to unpack .../pve-firmware_3.0-4_all.deb ...
Unpacking pve-firmware (3.0-4) over (3.0-2) ...
Selecting previously unselected package pve-kernel-5.3.7-1-pve.
Preparing to unpack .../pve-kernel-5.3.7-1-pve_5.3.7-1_amd64.deb ...
Unpacking pve-kernel-5.3.7-1-pve (5.3.7-1) ...
Selecting previously unselected package pve-kernel-5.3.
Preparing to unpack .../pve-kernel-5.3_6.0-11_all.deb ...
Unpacking pve-kernel-5.3 (6.0-11) ...
Setting up pve-kernel-5.3.7-1-pve (5.3.7-1) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
update-initramfs: Generating /boot/initrd.img-5.3.7-1-pve
run-parts: executing /etc/kernel/postinst.d/pve-auto-removal 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
run-parts: executing /etc/kernel/postinst.d/zz-pve-efiboot 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 5.3.7-1-pve /boot/vmlinuz-5.3.7-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.3.7-1-pve
Found initrd image: /boot/initrd.img-5.3.7-1-pve
Found linux image: /boot/vmlinuz-5.0.15-1-pve
Found initrd image: /boot/initrd.img-5.0.15-1-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
done
Setting up pve-firmware (3.0-4) ...
Setting up pve-kernel-5.3 (6.0-11) ...
root@ed-hv4:~#
This upgraded the firmware package and all was fine.
 

ozdjh

New Member
Oct 8, 2019
I followed the instructions that were provided in the first post of this thread. I don't see the recommendation you mention in that post.


David
...
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
I followed the instructions that were provided in the first post of this thread. I don't see the recommendation you mention in that post.
No, but you really have a lot of out-of-date packages:

108 packages can be upgraded. Run 'apt list --upgradable' to see them.
Probably a newer pve-firmware package, including the respective NIC firmware, was among them...
Regularly update your system; this can be done via the web interface (Node → Updates) or on the CLI with
Code:
apt update
apt full-upgrade
(or apt dist-upgrade, or pveupgrade)
 

ozdjh

New Member
Oct 8, 2019
Hi

That's interesting. These nodes were only reinstalled a few days ago. I expected they'd be up to date, so perhaps those new packages were available from the test repo I had just enabled.

The boxes were working fine before, so the new firmware must be a requirement of the new kernel. Surely apt would pull that in as a dependency? (We're coming to Debian from an RHEL environment, so we aren't too familiar with the capabilities of apt vs. yum, etc.) If apt doesn't handle kernel dependencies unless the entire system is updated, we'll have to take note of that and ensure we include the firmware package in any update so we don't take the systems offline again.


David
...
 

bogo22

Member
Nov 4, 2016
Great news for https://forum.proxmox.com/threads/linux-kernel-5-1.56390/
And also because I sometimes had network interface issues: "e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang".

So far so good on my Intel NUC NUC8i7BEH2
Not a datacenter though :)
So you don't get the hangs anymore with kernel 5.3?
Some other Proxmox users and I (still on kernel 5.0.21-4-pve) are affected by that bug too; see my post or other posts. I even tried Intel's driver v3.6.0, but I get the same network issues as with the in-tree driver 3.2.6-k.
 

bogo22

Member
Nov 4, 2016
FYI: I just upgraded to kernel 5.3 (pvetest) to check whether the ethernet unit hangs (see above) are solved with the new kernel, and got this call trace during boot:

Code:
[    6.640925] Adding 4194300k swap on /dev/zd0.  Priority:-2 extents:1 across:4194300k SSFS
[    6.720569] [drm] failed to retrieve link info, disabling eDP
[    6.740964] [drm] Initialized i915 1.6.0 20190619 for 0000:00:02.0 on minor 0
[    6.743164] ACPI: Video Device [GFX0] (multi-head: yes  rom: no  post: no)
[    6.743493] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input4
[    6.743630] snd_hda_intel 0000:00:1f.3: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
[    6.780485] [drm] Cannot find any crtc or sizes
[    6.801132] ------------[ cut here ]------------
[    6.801133] General protection fault in user access. Non-canonical address?
[    6.801139] WARNING: CPU: 2 PID: 982 at arch/x86/mm/extable.c:126 ex_handler_uaccess+0x52/0x60
[    6.801140] Modules linked in: snd_hda_codec_generic ledtrig_audio intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel sof_pci_dev snd_sof_intel_hda_common snd_sof_intel_hda snd_sof_intel_byt snd_sof_intel_ipc mei_hdcp snd_sof aesni_intel snd_sof_xtensa_dsp snd_soc_skl snd_soc_hdac_hda snd_hda_ext_core snd_soc_skl_ipc i915 snd_soc_sst_ipc snd_soc_sst_dsp aes_x86_64 crypto_simd snd_soc_acpi_intel_match snd_soc_acpi cryptd wmi_bmof pcspkr mei_me snd_soc_core glue_helper snd_compress ac97_bus snd_pcm_dmaengine intel_cstate intel_rapl_perf drm_kms_helper snd_hda_intel drm snd_hda_codec snd_hda_core snd_hwdep intel_wmi_thunderbolt snd_pcm i2c_algo_bit snd_timer fb_sys_fops syscopyarea snd sysfillrect sysimgblt mei soundcore intel_pch_thermal mac_hid acpi_pad acpi_tad vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi sunrpc ip_tables
[    6.801153]  x_tables autofs4 zfs(PO) zunicode(PO) zlua(PO) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) btrfs xor zstd_compress raid6_pq libcrc32c uas ahci usb_storage libahci e1000e i2c_i801 wmi pinctrl_cannonlake video pinctrl_intel
[    6.801159] CPU: 2 PID: 982 Comm: kworker/u8:3 Tainted: P           O      5.3.7-1-pve #1
[    6.801160] Hardware name: Intel(R) Client Systems NUC8i3BEH/NUC8BEB, BIOS BECFL357.86A.0071.2019.0510.1505 05/10/2019
[    6.801161] RIP: 0010:ex_handler_uaccess+0x52/0x60
[    6.801162] Code: c4 08 b8 01 00 00 00 5b 5d c3 80 3d 04 d6 78 01 00 75 db 48 c7 c7 60 0a b4 b5 48 89 75 f0 c6 05 f0 d5 78 01 01 e8 af a1 01 00 <0f> 0b 48 8b 75 f0 eb bc 66 0f 1f 44 00 00 0f 1f 44 00 00 55 80 3d
[    6.801163] RSP: 0018:ffffbe37c89b7cc0 EFLAGS: 00010282
[    6.801163] RAX: 0000000000000000 RBX: ffffffffb56023f4 RCX: 0000000000000000
[    6.801164] RDX: 000000000000003f RSI: ffffffffb6383f7f RDI: 0000000000000246
[    6.801164] RBP: ffffbe37c89b7cd0 R08: ffffffffb6383f40 R09: 0000000000029fc0
[    6.801165] R10: 0000001479b402bb R11: ffffffffb6383f40 R12: 000000000000000d
[    6.801165] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[    6.801166] FS:  0000000000000000(0000) GS:ffff9b922db00000(0000) knlGS:0000000000000000
[    6.801167] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    6.801167] CR2: 0000564640134058 CR3: 0000000466a84002 CR4: 00000000003606e0
[    6.801168] Call Trace:
[    6.801171]  fixup_exception+0x4a/0x61
[    6.801173]  do_general_protection+0x4e/0x150
[    6.801175]  general_protection+0x28/0x30
[    6.801177] RIP: 0010:strnlen_user+0x4c/0x110
[    6.801177] Code: f8 0f 86 e1 00 00 00 48 29 f8 45 31 c9 0f 01 cb 0f ae e8 48 39 c6 49 89 fa 48 0f 46 c6 41 83 e2 07 48 83 e7 f8 31 c9 4c 01 d0 <4c> 8b 1f 85 c9 0f 85 96 00 00 00 42 8d 0c d5 00 00 00 00 41 b8 01
[    6.801178] RSP: 0018:ffffbe37c89b7de8 EFLAGS: 00050206
[    6.801179] RAX: 0000000000020000 RBX: da1abcdb65667500 RCX: 0000000000000000
[    6.801179] RDX: da1abcdb65667500 RSI: 0000000000020000 RDI: da1abcdb65667500
[    6.801179] RBP: ffffbe37c89b7df8 R08: 8080808080808080 R09: 0000000000000000
[    6.801180] R10: 0000000000000000 R11: 0000000000000000 R12: 00007fffffffefe7
[    6.801180] R13: ffff9b91710b3fe7 R14: 0000000000000000 R15: fffffa9d8ec42cc0
[    6.801182]  ? _copy_from_user+0x3e/0x60
[    6.801184]  copy_strings.isra.35+0x92/0x380
[    6.801185]  __do_execve_file.isra.42+0x5b5/0x9d0
[    6.801187]  ? kmem_cache_alloc+0x120/0x220
[    6.801188]  do_execve+0x25/0x30
[    6.801190]  call_usermodehelper_exec_async+0x188/0x1b0
[    6.801190]  ? call_usermodehelper+0xb0/0xb0
[    6.801192]  ret_from_fork+0x35/0x40
[    6.801193] ---[ end trace 51daeb09aa66cf2d ]---
[    6.809355] snd_hda_codec_realtek hdaudioC0D0: autoconfig for ALC233: line_outs=1 (0x21/0x0/0x0/0x0/0x0) type:hp
[    6.809356] snd_hda_codec_realtek hdaudioC0D0:    speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
[    6.809357] snd_hda_codec_realtek hdaudioC0D0:    hp_outs=0 (0x0/0x0/0x0/0x0/0x0)
[    6.809358] snd_hda_codec_realtek hdaudioC0D0:    mono: mono_out=0x0
[    6.809358] snd_hda_codec_realtek hdaudioC0D0:    inputs:
[    6.809359] snd_hda_codec_realtek hdaudioC0D0:      Mic=0x19
[    6.809360] snd_hda_codec_realtek hdaudioC0D0:      Internal Mic=0x12
[    6.826495] [drm] Cannot find any crtc or sizes
[    6.864901] [drm] Cannot find any crtc or sizes
[    6.873374] input: HDA Intel PCH Mic as /devices/pci0000:00/0000:00:1f.3/sound/card0/input5
[    6.873428] input: HDA Intel PCH Front Headphone as /devices/pci0000:00/0000:00:1f.3/sound/card0/input6
[    6.873473] input: HDA Intel PCH HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1f.3/sound/card0/input7
[    6.873515] input: HDA Intel PCH HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:1f.3/sound/card0/input8
[    6.874521] input: HDA Intel PCH HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:1f.3/sound/card0/input9
[    6.874569] input: HDA Intel PCH HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:1f.3/sound/card0/input10
[    6.874625] input: HDA Intel PCH HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:1f.3/sound/card0/input11
[    6.973739]  zd32: p1 p2 p3 p4
[    6.975605]  zd48: p1
[    6.982756]  zd80: p1 p2 p3
The first Google hit for "General protection fault in user access. Non-canonical address?" is ZFS issue #9417. I had to set zfs_vdev_scheduler="none" to work around the issue.
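For reference, that workaround can be made persistent with a modprobe option file. This is a sketch assuming the standard modprobe.d mechanism; the file name is arbitrary, and the initramfs must be rebuilt so the option applies at boot:

```
# /etc/modprobe.d/zfs.conf
options zfs zfs_vdev_scheduler=none
```

followed by `update-initramfs -u` and a reboot.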

Regarding the ethernet unit hangs: with kernel 5.3 it seems more stable (12h+ without issue), but after a second reboot and heavy throughput on that NIC I still got the unit hangs.
 

rkk2025

New Member
Jul 11, 2018
Hi @martin,
Does the 5.3 test kernel also come with kernel sources? Or is there some alternative way to run DKMS to install WireGuard on that kernel?
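For context on the question: DKMS generally needs the kernel headers rather than the full sources. Proxmox header packages appear to follow a pve-headers-&lt;kernel release&gt; naming scheme (an assumption here; verify with `apt search pve-headers` on your node), so the matching package name can be derived from the running kernel:

```shell
# Derive a candidate header package name from the running kernel.
# The pve-headers-<release> naming is an assumption; confirm with
# `apt search pve-headers` before installing.
hdr_pkg="pve-headers-$(uname -r)"
echo "suggested: apt install $hdr_pkg"
```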
 
