Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

Any updates on the freezing issues? I opted in to 6.1.15-1-pve from 5.15 with the freezing issue while migrating AMD -> AMD, and the issue still persists (I did not try 5.13 or 5.19 beforehand). I just noticed that if I reset after the freeze and migrate back from the AMD Ryzen Threadripper 2950X to the AMD Ryzen Threadripper 3960X it is OK, so I only have a one-way issue.

On the host with the freezing migration I see the following log entry twice each time I migrate, before the guest becomes unresponsive:
kernel: [15438.851034] Disabled LAPIC found during irq injection
Three seconds before that log entry:
qmp command 'guest-ping' failed - got timeout
 
It is already compiled in, as you can see in the Kconfig used for compilation, i.e., in /boot/config-6.1.0-1-pve.
It's just not enabled by default, as it's very new and can also make things worse in some situations.

You can just enable it yourself if you want to use it; from your linked docs:
echo y >/sys/kernel/mm/lru_gen/enabled

Any plans to enable MGLRU?
Does it offer benefits in the Proxmox (hypervisor) use case?

Can it be enabled in LXC?
 
Any plans to enable MGLRU?
It's already built-in, so anybody can enable it already themselves if they want.

We won't enable it as default for the Proxmox VE 7.x series. For the future PVE 8.x series nothing concrete is planned yet; we'd like to do a few specific benchmarks first and also wait until the development speed of this feature has calmed down a bit.

That said, if MGLRU is deemed mature and stable enough to be enabled as default by upstream, we won't disable it again in our downstream kernel.
Does it offer benefits in the Proxmox (hypervisor) use case?
Well, it seems some applications indeed have improved performance with the MGLRU, and as Proxmox VE allows one to run almost any application, containerized or virtualized, I think that there are certainly use cases where a setup running Proxmox VE profits from enabling this.
Can it be enabled in LXC?
Containers share the host kernel, so if it's enabled there, it's enabled for all processes, including those running in containers.
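Since containers see the host's kernel state, you can confirm this from the host with pct exec; a quick sketch (the container ID 101 here is just a placeholder for one of your LXC containers):

```shell
# Read the MGLRU flag on the host:
cat /sys/kernel/mm/lru_gen/enabled

# Read the same flag from inside a running LXC container
# (101 is a hypothetical container ID) - it shows the host's setting:
pct exec 101 -- cat /sys/kernel/mm/lru_gen/enabled
```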
 
Is there a tutorial for this MGLRU?
For testing it you can just follow what I already wrote in this thread:
You can just enable it yourself if you want to use it; per your linked docs, execute the following in a shell as root:
echo y >/sys/kernel/mm/lru_gen/enabled
To make it permanent (across reboots) you could install "sysfsutils", e.g., via apt install sysfsutils and then create a configuration file in "/etc/sysfs.d/", for example by executing the following as root:

Code:
echo "kernel/mm/lru_gen/enabled = y"  >/etc/sysfs.d/mglru.conf

The MGLRU should then be enabled with every boot.
To temporarily disable MGLRU again, write "n" to the /sys/ path; to disable it permanently, remove the .conf file in addition (or remove it and reboot).
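The steps above can be checked after a reboot; per the upstream kernel's multi-gen LRU admin documentation, the enabled file reports a bitmask (0x0000 when fully disabled, non-zero when features are active), so a sketch for verification looks like:

```shell
# Show the current MGLRU state; 0x0000 means disabled,
# a non-zero bitmask (e.g. 0x0007) means enabled.
cat /sys/kernel/mm/lru_gen/enabled

# Confirm the sysfsutils config that re-applies it on boot is in place:
cat /etc/sysfs.d/mglru.conf
```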
 
t.lamprecht Thanks for the info. I will try the 6.2 kernel again. Just wondering if Ubuntu or Proxmox will be backporting MGLRU to the 5.15 kernels? It seems it is not in 5.15.107-2.
Code:
echo y >/sys/kernel/mm/lru_gen/enabled
-bash: /sys/kernel/mm/lru_gen/enabled: No such file or directory
 
Just wondering if Ubuntu or Proxmox will be backporting MGLRU to the 5.15 kernels? It seems it is not in 5.15.107-2.
No, definitely not. MGLRU is not a trivial change and requires lots of supporting patches and changes all over the kernel. Backporting it would be a major undertaking and would almost certainly introduce quite a few regressions, as the MGLRU code makes assumptions that simply aren't true in the 5.15 kernel.
 
Considering using this over stock 5.15, I checked the compile config: 5.15 is set to voluntary preemption, which I think is probably the best preemption mode, but 6.1 has both full and voluntary configured from what I can tell, so I'm unsure what its default run mode would be.

I assume adding this to /etc/kernel/cmdline would suffice?

preempt=voluntary
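Adding the line to /etc/kernel/cmdline alone should work on hosts whose boot is managed by proxmox-boot-tool, but the change also has to be written out to the boot entries; a sketch of the whole procedure (GRUB-based installs use /etc/default/grub instead, and the preempt= parameter takes effect only on kernels built with dynamic preemption, as the 6.1 config suggests):

```shell
# Append the parameter to the single-line kernel command line
# (proxmox-boot-tool managed setups, e.g. ZFS-on-root with UEFI):
sed -i 's/$/ preempt=voluntary/' /etc/kernel/cmdline

# Write the updated command line to all configured boot entries:
proxmox-boot-tool refresh

# On GRUB-based installs, instead append the parameter to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and run: update-grub

# After a reboot, verify the parameter took effect:
cat /proc/cmdline
```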
 
Tried enabling MGLRU on one server.
Since then we have one VM (with a Linux guest) locking up approximately once every 2 weeks with 100% CPU usage.

I have now restarted the server to disable MGLRU, so in a few weeks I will know whether it was definitely an MGLRU problem or not.

Anybody else experienced something like this?
 
Am I able to install this with Proxmox 8 since I happen to have NVMe drives that don't play well with the default kernel?
 
Am I able to install this with Proxmox 8 since I happen to have NVMe drives that don't play well with the default kernel?
Which NVMe drive models do you have, and what are the problems?

Just to be sure, with the "default kernel" you mean the 6.2 kernel, currently the default for Proxmox VE 8?
 