Linux Kernel 5.3 for Proxmox VE

I mean, the "pve-kernel-5.3" package is just a meta package intended to pull in the newest real kernel, and its "Depends" really should pull that in.
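
For reference, something along these lines should show which real kernel package the meta package currently pulls in (exact output depends on the repository state):
Code:
# list the dependencies of the meta package
apt-cache depends pve-kernel-5.3 | grep Depends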

Is the "real" kernel installed? Check with:
Code:
dpkg -s pve-kernel-5.3.10-1-pve
(check the second line "Status")

How did you roll back to 5.0?

Code:
# dpkg -s pve-kernel-5.3.10-1-pve
Package: pve-kernel-5.3.10-1-pve
Status: install ok installed
Priority: optional
Section: admin
Installed-Size: 277054
Maintainer: Proxmox Support Team <support@proxmox.com>
Architecture: amd64
Source: pve-kernel
Version: 5.3.10-1
Provides: linux-image, linux-image-2.6
Depends: busybox, initramfs-tools
Recommends: grub-pc | grub-efi-amd64 | grub-efi-ia32 | grub-efi-arm64
Suggests: pve-firmware
Description: The Proxmox PVE Kernel Image
This package contains the linux kernel and initial ramdisk used for booting


Just updated the grub config, the other server I tested it on is fine, but this one the kernel files don't exist even after it says it installed it..
 
It isn't, by any chance, a system with ZFS as root and booting via UEFI, is it?

And dpkg says the files are there:
Code:
# dpkg -L pve-kernel-5.3.10-1-pve|grep boot
/boot
/boot/System.map-5.3.10-1-pve
/boot/config-5.3.10-1-pve
/boot/vmlinuz-5.3.10-1-pve
...
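
(For context: on systems with root on ZFS booting via UEFI, Proxmox VE 6 keeps the kernels on the ESPs and syncs them with pve-efiboot-tool rather than relying on /boot alone, which is why that setup would matter here. On such a system, something like this should re-copy the kernels:)
Code:
# only relevant on ZFS-on-root UEFI installs
pve-efiboot-tool refresh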
 

Nope, no ZFS or UEFI.

Code:
dpkg -L pve-kernel-5.3.10-1-pve|grep boot
/boot
/boot/System.map-5.3.10-1-pve
/boot/config-5.3.10-1-pve
/boot/vmlinuz-5.3.10-1-pve
/lib/modules/5.3.10-1-pve/kernel/drivers/mtd/parsers/redboot.ko
/lib/modules/5.3.10-1-pve/kernel/drivers/scsi/iscsi_boot_sysfs.ko
/lib/modules/5.3.10-1-pve/kernel/drivers/staging/greybus/gb-bootrom.ko
/lib/modules/5.3.10-1-pve/kernel/drivers/staging/gs_fpgaboot
/lib/modules/5.3.10-1-pve/kernel/drivers/staging/gs_fpgaboot/gs_fpga.ko

However:
Code:
ls -lsh /boot
total 99M
224K -rw-r--r-- 1 root root 219K Aug  8 09:05 config-5.0.18-1-pve
224K -rw-r--r-- 1 root root 219K Nov 13 08:27 config-5.0.21-5-pve
4.0K drwxr-xr-x 5 root root 4.0K Dec  5 06:47 grub
43M -rw-r--r-- 1 root root  42M Aug 24 17:49 initrd.img-5.0.18-1-pve
40M -rw-r--r-- 1 root root  40M Nov 24 04:50 initrd.img-5.0.21-5-pve
16K drwx------ 2 root root  16K Feb 26  2019 lost+found
4.0K drwxr-xr-x 2 root root 4.0K Dec  5 06:48 pve
4.3M -rw-r--r-- 1 root root 4.3M Aug  8 09:05 System.map-5.0.18-1-pve
4.4M -rw-r--r-- 1 root root 4.4M Nov 13 08:27 System.map-5.0.21-5-pve
8.5M -rw-r--r-- 1 root root 8.5M Aug  8 09:05 vmlinuz-5.0.18-1-pve
Am I safe to just copy these files from the other node and update-grub?

Normally I would just purge the package and reinstall, but purging the kernel package wants to remove all the PVE packages along with it.
 
Yeah, I mean it's the same kernel... but the modules would need to be copied over too, not only the /boot stuff.
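
If you do go the copy route, a rough sketch could look like this (assuming the healthy node is reachable as "othernode" — a hypothetical host name — and both nodes run exactly the same kernel package version):
Code:
# copy kernel, System.map and config from the healthy node
rsync -a othernode:/boot/vmlinuz-5.3.10-1-pve /boot/
rsync -a othernode:/boot/System.map-5.3.10-1-pve /boot/
rsync -a othernode:/boot/config-5.3.10-1-pve /boot/
# copy the matching modules
rsync -a othernode:/lib/modules/5.3.10-1-pve/ /lib/modules/5.3.10-1-pve/
# rebuild the initrd locally and refresh the boot menu
update-initramfs -c -k 5.3.10-1-pve
update-grub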

Did you also try to re-install just the kernel:
Code:
apt install --reinstall pve-kernel-5.3.10-1-pve
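
Afterwards you could verify that the files actually landed in /boot, e.g.:
Code:
ls -l /boot/vmlinuz-5.3.10-1-pve /boot/initrd.img-5.3.10-1-pve
# make sure the new kernel shows up in the boot menu
update-grub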
 

Perfect, that did the job!

Thanks
 
Hello, is Rome EDAC different from Matisse? I have the same error using an X470D4U + R5 3600, on Proxmox 6.1, kernel 5.3.10-1-pve.

Hi there
I'm encountering the same EDAC errors with an R7 3700X; using a 5.4 kernel seems to fix them. Are there any plans for Proxmox to introduce a 5.4 kernel in the near future?

Thanks

mischa
 
Proxmox usually follows the Ubuntu kernel, so I guess we will get the next kernel in March?
Maybe there is a procedure to manually compile a kernel with the Proxmox flavor?
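
There is, roughly: the kernel packaging lives in Proxmox's git, so a build along these lines should work (an untested sketch; branch names, build dependencies and make targets can change over time):
Code:
# fetch the Proxmox kernel packaging
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel
# install the build dependencies (assumes deb-src entries are configured)
apt build-dep .
# build the pve-kernel .deb packages
make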
 
e1000 hang is not fixed.

Don't know about the Proxmox kernel, but for the vanilla Linux kernel the e1000e driver actually was broken in v5.3 in a way it was not before;
see bug https://bugzilla.kernel.org/show_bug.cgi?id=205047

EDIT: My exact problem is "Detected Hardware Unit Hang / Reset adapter unexpectedly" in dmesg just like in the dmesg log from bogo22 in the post below.
EDIT2: "ethtool -K [my-ethernet-interface] tso off" solved it for me.
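
To make that setting survive reboots, one option is a post-up hook in /etc/network/interfaces (a sketch; "eno1" is just an example interface name):
Code:
# /etc/network/interfaces (fragment)
iface eno1 inet manual
        post-up ethtool -K eno1 tso off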
 

I can confirm that too - I also get hangs (see below).
I am running proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve).
My original issue/thread-ticket.

dmesg:
Code:
[1170118.808120] e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang:
                   TDH                  <95>
                   TDT                  <c>
                   next_to_use          <c>
                   next_to_clean        <94>
                 buffer_info[next_to_clean]:
                   time_stamp           <1116e9435>
                   next_to_watch        <95>
                   jiffies              <1116e95c8>
                   next_to_watch.status <0>
                 MAC Status             <40080083>
                 PHY Status             <796d>
                 PHY 1000BASE-T Status  <3800>
                 PHY Extended Status    <3000>
                 PCI Status             <10>
[1170120.824095] e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang:
                   TDH                  <95>
                   TDT                  <c>
                   next_to_use          <c>
                   next_to_clean        <94>
                 buffer_info[next_to_clean]:
                   time_stamp           <1116e9435>
                   next_to_watch        <95>
                   jiffies              <1116e97c0>
                   next_to_watch.status <0>
                 MAC Status             <40080083>
                 PHY Status             <796d>
                 PHY 1000BASE-T Status  <3800>
                 PHY Extended Status    <3000>
                 PCI Status             <10>
[1170122.840015] e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang:
                   TDH                  <95>
                   TDT                  <c>
                   next_to_use          <c>
                   next_to_clean        <94>
                 buffer_info[next_to_clean]:
                   time_stamp           <1116e9435>
                   next_to_watch        <95>
                   jiffies              <1116e99b8>
                   next_to_watch.status <0>
                 MAC Status             <40080083>
                 PHY Status             <796d>
                 PHY 1000BASE-T Status  <3800>
                 PHY Extended Status    <3000>
                 PCI Status             <10>
[1170124.856021] e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang:
                   TDH                  <95>
                   TDT                  <c>
                   next_to_use          <c>
                   next_to_clean        <94>
                 buffer_info[next_to_clean]:
                   time_stamp           <1116e9435>
                   next_to_watch        <95>
                   jiffies              <1116e9bb0>
                   next_to_watch.status <0>
                 MAC Status             <40080083>
                 PHY Status             <796d>
                 PHY 1000BASE-T Status  <3800>
                 PHY Extended Status    <3000>
                 PCI Status             <10>
[1170126.711751] e1000e 0000:00:1f.6 eno1: Reset adapter unexpectedly
[1170126.711936] vmbr0: port 1(eno1) entered disabled state
[1170132.904123] e1000e: eno1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[1170132.904225] vmbr0: port 1(eno1) entered blocking state
[1170132.904230] vmbr0: port 1(eno1) entered forwarding state
 
For those running a NUC: have you witnessed any CPU-frequency-related issues?
To avoid having the thing run at full pace, I'm using the powersave governor, and I recently updated to 5.3.13, 5.3.18 and 5.4.24, but none of these kernels seems to respect the setting (a p-state issue?).
The only kernel on which it seems to work is 5.3.7-1-pve.

Basically the behavior is that the CPU runs at full speed and everything gets hot (100°C).

Reverted to 5.3.7-1-pve for the moment.
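
(For reference, the active scaling driver/governor can be checked, and powersave re-applied, with something like this, assuming the standard cpufreq sysfs interface:)
Code:
# which scaling driver and governor are active
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# set powersave on all cores (as root)
echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor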
 
I’ve got 5.4 running on one NUC8i5BEH and 5.3 on another and haven’t noticed any such issue with 5.4. But perhaps I’m not looking in the right place...

Is there a command you run to identify the issue and perhaps I can run that here also?
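
(Something like this should show it — watching the live clocks, plus temperatures; "sensors" assumes the lm-sensors package is installed:)
Code:
# watch live core clocks
watch -n1 'grep "^cpu MHz" /proc/cpuinfo'
# check temperatures (needs lm-sensors)
sensors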
 
OK, the problem seems to be elsewhere: without my actually seeing any CPU or network load, the syslog server running on the NUC was being hit by a VM sending thousands and thousands of log messages... it seems stable now. Odd that I didn't see any side effect other than the CPU frequency increase.
 
