Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

Why install a package for mpt3sas when that module is already in the kernel? Are there any issues with it in kernel 6.11 without that package?
I'm trying to understand whether the in-tree module has any shortcomings.
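One way to compare against Broadcom's out-of-tree release is to inspect the in-tree module directly; a quick sketch (assumes the module is available for the running kernel):

```shell
# Show where the in-tree mpt3sas module lives and which version it reports
modinfo mpt3sas | grep -E '^(filename|version|srcversion)'

# Check whether it is currently loaded
lsmod | grep mpt3sas

# See which kernel driver the HBA is actually bound to
lspci -k | grep -A 3 -i sas
```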

Thanks!
That's a good call-out; I think I've just gotten into the habit of installing the driver and matching firmware whenever Broadcom releases them: https://www.broadcom.com/products/storage/host-bus-adapters/sas-nvme-9500-16i

Having said that, I actually just blacklist the driver, as the HBA card is PCI passed through to a TrueNAS VM on Proxmox, which works fine, albeit with an older version of the driver.
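For reference, blacklisting the host driver so the HBA stays free for passthrough typically looks like this; a minimal sketch, where the vfio-pci device ID is an example and must be taken from your own `lspci -nn` output:

```shell
# Stop the Proxmox host from loading the HBA driver at boot
echo 'blacklist mpt3sas' > /etc/modprobe.d/blacklist-mpt3sas.conf

# Optionally pre-bind the card to vfio-pci for passthrough.
# The vendor:device ID below is an EXAMPLE - replace it with the
# ID shown for your HBA in `lspci -nn`.
echo 'options vfio-pci ids=1000:00e6' > /etc/modprobe.d/vfio.conf

# Rebuild the initramfs so the blacklist takes effect early in boot
update-initramfs -u -k all
```

After a reboot, `lspci -k` should show the card bound to vfio-pci (or to nothing) instead of mpt3sas.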
 
No, it seems to be defaulting to a very high resolution. If we boot a system without a monitor attached, it is much higher than 1920x1080.
My R420 server does not have a monitor plugged in; I do everything from the virtual console in my home lab.

Try changing the resolution on that line from 1024x768 to a higher one; try 1440x900 for the R740.


Also check whether you can change the resolution under iDRAC Settings → Virtual Console.
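Forcing a console resolution is usually done with a `video=` option on the kernel command line; a sketch for a GRUB-booted system, where 1440x900 is just the example value suggested above:

```shell
# Pin the framebuffer console to a fixed resolution (GRUB boot).
# Edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub so it reads e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet video=1440x900"
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="quiet video=1440x900"/' /etc/default/grub
update-grub

# On UEFI systems managed by proxmox-boot-tool, add the same option
# to the single line in /etc/kernel/cmdline and run:
#   proxmox-boot-tool refresh
```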
 
Got my hands on a Fujitsu Primergy RX2540 M5 with an Intel Xeon Silver 4215R CPU. It has been working fine in our test cluster
with proxmox-kernel-6.11.11-2 for nearly two days now.
 
Here is my contribution :) :
# uptime
21:13:04 up 66 days, 8:05, 1 user, load average: 0.31, 0.34, 0.27
# uname -a
Linux hiv-spare 6.11.11-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.11.11-1 (2025-01-17T15:44Z) x86_64 GNU/Linux

No problems to report at the moment.

On the other 3 servers I use GlusterFS. Has anyone tested it with proxmox-kernel-6.11.11-2-pve? I think someone mentioned problems with proxmox-kernel-6.11.11-1-pve.
 
Ran a Dell R740XD on kernel 6.11.11-1 / -2 without any issues since release (both uptimes were around 60 days); so far kernel 6.14.0-1 also works without issue on my machine.
Running a pretty standard setup: ZFS rpool / data pool on NVMe disks.
 
Update: kernel 6.14.0-1 does seem to control CPU frequency scaling a bit less aggressively. My cores keep their boost frequency higher than they did under kernel 6.11.11-2.
I am running the latest intel-microcode package.

Under the older kernel (6.11.11-2) the cores regularly bounce between 2.75 GHz and 3.70 GHz, whereas under kernel 6.14.0-1 they barely fall below 3.50 GHz. Nothing that directly impacts operations or performance (it might even improve it), but my CPU does run a bit hotter (~10-15 °C higher than before).
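For anyone wanting to reproduce this observation, the per-core frequencies and the active scaling driver/governor can be read straight from procfs/sysfs; a minimal sketch:

```shell
# Snapshot of the current per-core clock speeds
grep MHz /proc/cpuinfo

# Which frequency-scaling driver and governor the kernel is using
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Watch boost behaviour live, refreshing once per second
watch -n 1 'grep MHz /proc/cpuinfo'
```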

I verified that I'm running the latest pve-qemu-kvm version (9.2.0-3), which patched the HPET problem.
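Checking the installed version is a one-liner, for anyone who wants to confirm they have the fix:

```shell
# Print the installed pve-qemu-kvm version (the HPET fix landed in 9.2.0-3)
dpkg-query -W -f='${Version}\n' pve-qemu-kvm
```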

Attached are some screenshots.
 

Attachments

  • cpu-temp.png (141.7 KB)
  • cpu-freq.png (302.8 KB)

I didn't find any settings in iDRAC that would allow me to lock the video output to a particular resolution or mode.

Spent some time today poking around and found some bits and pieces of info that led me to a solution: I needed to add "nomodeset" to the /etc/kernel/cmdline file, then run "pve-efiboot-tool refresh" as root.

So far this has fixed my iDRAC video resolution for 6.11 kernels on Dell PowerEdge R730 and R750xs systems. I'll have an update on R740 later today.

Edit: This fixed the problems on my R740 systems as well.

All of my systems are configured for UEFI boot.
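The steps above can be sketched as follows for a UEFI system using the Proxmox boot tool (note that /etc/kernel/cmdline must remain a single line, and newer installs name the tool `proxmox-boot-tool`):

```shell
# Append nomodeset to the single-line kernel command line file
sed -i '1 s/$/ nomodeset/' /etc/kernel/cmdline

# Sync the change to all EFI boot entries (run as root)
pve-efiboot-tool refresh   # on newer installs: proxmox-boot-tool refresh

# After rebooting, confirm the option is active
cat /proc/cmdline
```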
 