Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

Neobin

Active Member
I tried to install it on my N5105 server, and it crashes after booting with this 5.19 version.

To take note of:
There are general problems going on with the Intel N5105 (and probably others in that family):
"Main" thread: https://forum.proxmox.com/threads/vm-freezes-irregularly.111494
Bugzilla: https://bugzilla.proxmox.com/show_bug.cgi?id=4188

What I have read so far is that the majority (all?) of users with those CPUs/systems saw a stable PVE host but unstable VMs. So an unstable PVE host would be something new with this kernel version, I guess. But do not nail me down on this.

Anyway, I only wanted to note that there is more going on with those CPUs.
 

Dark26

Active Member
What crashes (host, VM, ...?), and do you have any specific error logs that show up before/during the crash? More details about the hardware would also be nice to have.

This is the log.

The boot starts at Sep 19 18:34:07 and it crashes at Sep 19 18:36:24.

In the log we can see the host try to start container lxc-224 and crash (18:34). With kernel 5.15 there is no problem; it starts after lxc-175 (18:43). The container config:

Code:
arch: amd64
cores: 2
features: nesting=1
hostname: Proxy
memory: 256
mp0: NVME-Data:224/vm-224-disk-1.raw,mp=/var/spool/squid/,size=5G
nameserver: 172.16.4.254
net0: name=eth0,bridge=vmbr4,gw=172.16.4.254,hwaddr=A2:03:79:B5:3F:B2,ip=172.16.4.222/24,type=veth
onboot: 1
ostype: debian
protection: 1
rootfs: NVME-Data:224/vm-224-disk-0.raw,size=2G
searchdomain: xxxx.fr
startup: order=3
swap: 128
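As a side note, a quick way to cut a syslog-style log like the attached one down to just that boot-to-crash window is a timestamp filter; the three sample lines below are only stand-ins for the real file, and the date/times are the ones mentioned above:

```shell
# Keep only the lines between boot (Sep 19 18:34:07) and crash (18:36:24).
# The printf just creates a tiny stand-in for the attached messages.txt;
# run the awk filter against the real file instead.
printf '%s\n' \
  'Sep 19 18:30:00 pve earlier entry' \
  'Sep 19 18:35:12 pve entry inside the boot-to-crash window' \
  'Sep 19 18:40:00 pve later entry' > messages.txt

awk '($1 " " $2 == "Sep 19") && ($3 >= "18:34:07") && ($3 <= "18:36:24")' messages.txt
```

String comparison works here because fixed-width HH:MM:SS timestamps sort lexicographically.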
 

Attachments

  • messages.txt
    339.5 KB

Dark26

Active Member
I know about the general problems with the Intel N5105; that is why I want to try this kernel...
 

llamprec

New Member
Reading through this thread, I am a little apprehensive about upgrading from kernel 5.15.53-1-pve to the 5.19 kernel.
I am running the following CPUs and have experienced issues with VMs hanging when migrating backwards.

64 x Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz (2 Sockets)
32 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (2 Sockets)
12 x Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz (2 Sockets)
12 x Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz (2 Sockets)

It was suggested to me that upgrading the kernel could possibly solve my migration issues. Below is the thread that I created: https://forum.proxmox.com/threads/proxmox-ha-cluster-failover-issue.115453/

Thanks in advance for any advice possible.
Regards
Lawrence
 

Neobin

Active Member
Reading through this thread, I am a little apprehensive about upgrading from kernel 5.15.53-1-pve to the 5.19 kernel.

What advice exactly are you looking/hoping for?

I mean, the suggestion came from the Proxmox staff, and in this (short) thread there are already two people reporting that this kernel fixed the migration issues for them:
nice with version 5.19 I can again migrate my VMs between my servers without the VMs crashing
It looks like it has resolved my migration issues to/from an i7-12700K and i7-8700K machine.

From what I see, the only two negative reports in this thread so far are one user with GPU passthrough and lagging games inside a gaming VM, and another user with problems on the Intel N5105, which has general problems, as I already mentioned here in another post.

What I can add in regard to GPU passthrough and (game) lagging: I also have such a setup and do not have any problems with it on the 5.19 kernel.

You can always go back to an older kernel.
 

llamprec

New Member
@Neobin

Thank you for your input. I agree with what you are saying, but I am not very knowledgeable when it comes to kernel configs or issues.

I also understand that I can install the newer kernel and if I see issues, I can always roll back. I will take this on board and discuss with my manager before moving forward.
Your info is very much appreciated.

Lawrence
 

Daniel Keller

Member
You should have no problem; my servers were similarly equipped: Intel(R) Xeon(R) Silver 4114 and Intel(R) Xeon(R) CPU E5-2640 v3.

And if you encounter problems booting, just select the old kernel.
 

psyyo

New Member
Passthrough of two AMD GPUs (with vendor-reset), audio, SATA and USB controllers works well on X570S. No more need for the initcall_blacklist=sysfb_init work-around for passthrough of the boot GPU, which was needed after 5.11.22 (because amdgpu crashes when unloading) until 5.15.35, and again after a recent (Debian 11.5?) update.
lm-sensors still does not detect the it8628 on the X570S AERO G, but the work-around still works and is not related to Proxmox.
I do not notice any regressions, but I also don't see the wlan device of the mt7921e (driver in use); that probably requires kernel 5.19.8 (or 5.15.67).
Very nice! Seamless upgrade also.
Similar X570M Pro4 / Ryzen rig setup here, passing the AMD boot GPU through to a desktop VM.

Are you saying you can shut down the VM and the GPU returns to the Proxmox host now with 5.19?
I may have to fiddle with commenting out some switches in my GRUB config if so!
 

leesteken

Famous Member
Yes, that worked again. When amdgpu unloads gracefully, it can also rebind, in my experience. Doing this every time and restarting the VM many times does appear to become unstable eventually (or maybe I need longer pauses in between). In some kernel versions this works perfectly; in others it just doesn't. I can't tell why.
 

t.lamprecht

Proxmox Staff Member
Does this have support for the Intel 12th generation Performance and Efficient cores?
FWIW, they already work and are supported just fine with our 5.15 kernel; most relevant stuff got backported. I have actually been using an i7-12700K with P and E cores since February with the 5.15 kernel as my main workstation without any issues, with quite a lot of VM usage and compiling of resource-hungry stuff like Ceph, the kernel, QEMU, ...

So a newer kernel should only improve performance and/or efficiency of scheduling, like the mentioned HFI. While I did not run specific benchmarks to compare the 5.15 and 5.19 kernels, I did not notice any performance change on a more subjective level between the two - but that may be related to the type of my most common workloads (compiling uses all cores to their limit anyway).

Is the 5.18 kernel needed in the VM that runs Linux as well, or does PVE handle the managing of P and E cores?
No, I ran VMs with way older kernels (e.g. 4.15) on my Alder Lake based workstation just fine.
 

McKajVah

Member
This works really well on my N5105. No problems encountered, and hardware transcoding is now working as it should using QuickSync in low-power mode (GuC/HuC) with Jellyfin in an LXC container.
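For anyone wanting to replicate that low-power (GuC/HuC) setup: on these chips it typically means enabling GuC/HuC firmware loading for i915 via a modprobe option. A sketch (the file name is just a convention; check that your kernel and firmware actually support it before relying on this):

```
# /etc/modprobe.d/i915.conf
# enable_guc is a bit mask: 1 = GuC submission, 2 = HuC firmware load.
# HuC (value 2) is what low-power encoding needs.
options i915 enable_guc=2
```

Afterwards rebuild the initramfs (update-initramfs -u -k all) and reboot; dmesg | grep -i guc should then show the firmware being loaded.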
 

miklos_akos

New Member
NVIDIA boot GPU passthrough still fails due to BAR issues.
Specs:
ASRock B550 Pro4
Nvidia GeForce GTX 1050 Ti [ASUS Cerberus GTX 1050 Ti OC 4GB]

Code:
[  117.539136] vfio-pci 0000:06:00.0: BAR 3: can't reserve [mem 0xd0000000-0xd1ffffff 64bit pref]
[  117.539147] vfio-pci 0000:06:00.0: BAR 3: can't reserve [mem 0xd0000000-0xd1ffffff 64bit pref]
 

leesteken

Famous Member
Does the initcall_blacklist=sysfb_init kernel parameter work-around still work?
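For context, that work-around is a kernel command-line parameter. On a GRUB-booted PVE host it would typically be added like this (the quiet option shown is just the stock default, for illustration):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet initcall_blacklist=sysfb_init"
```

Followed by update-grub and a reboot. Hosts that boot via proxmox-boot-tool/systemd-boot instead take the parameter in /etc/kernel/cmdline, followed by proxmox-boot-tool refresh.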
 
