Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

Kernel 6.14.0-2-pve always boots Proxmox into emergency mode, but kernel 6.8.12-10-pve works fine if I manually boot into the older one after opting in to the new kernel.
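In case it helps others stuck in the same spot, here is a sketch of how to keep booting the known-good kernel by default until this is sorted out, using the stock proxmox-boot-tool (the version string is the one from my setup, adjust to yours):

Code:
# List the kernels proxmox-boot-tool knows about
proxmox-boot-tool kernel list
# Pin the known-good kernel so it is picked by default on every boot
proxmox-boot-tool kernel pin 6.8.12-10-pve
# Later, to return to the default (newest installed) kernel:
proxmox-boot-tool kernel unpin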

Hardware:


Motherboard: Fujitsu D3417-B2
CPU: Intel Xeon E3-1275 v6
RAM: 64GB ECC
SSD: 2× 512 GB NVMe
NIC 1 Gbit: Intel I219-LM
 
FYI, according to the 6.14.8-rc1 patch list, the "removal of deprecated PCI IDs" patches won't be included (likely still not sent to stable):
https://lore.kernel.org/stable/20250520125810.535475500@linuxfoundation.org/T/#t
Yes, likely.

I was referring more to cherry-picks for Proxmox's 6.14 test kernels, as has already been done for the other two diffs submitted by AMD:

https://git.proxmox.com/?p=pve-kernel.git;a=commit;h=4a6063d2f9565631ad4968517d2b11d3821c1bfe

We'll see. Right now I'm back on 6.11.
 
What tool is generating these graphs, please?
I'm not OP but I have a similar energy graph for my server. I use Home Assistant OS and have a Kauf brand smart plug in a wall outlet, and HAOS automatically monitors it.

(Next step is to upgrade to a UPS and do it the right way.)

[Attachment: energy graph screenshot]
 
UPS and do it the right way

Here is my UPS (monitored by HA), which has a PVE node (mini PC) + router + watchdog router device (ESP32) attached; note the total power draw:
 

Attachments

  • Screenshot 2025-05-23 112316.png
Kernel 6.11.11-2-pve boots without any issues.
However, VMs do not start on any version of 6.14.
Attached is the system log.
Thank you.
 

1. AMD EPYC 9374F -> 6.14.5-1-bpo12-pve -> No Issues
2. AMD EPYC 9374F -> 6.14.5-1-bpo12-pve -> No Issues
3. AMD Ryzen 7 5800X -> 6.14.4-1-pve -> No Issues (but IOMMU group numbers changed, had to adjust; see the sysfs listing sketch after this list)
4. Intel(R) Core(TM) i3-1315U -> 6.14.4-1-pve -> No Issues
5. Intel(R) Core(TM) i3-1115G4 -> 6.14.5-1-bpo12-pve -> No Issues
6. Intel(R) Xeon(R) CPU E3-1275 v5 -> 6.8.12-10-pve -> No Issues
7. Intel(R) Xeon(R) CPU E5-2680 v4 -> 6.8.12-9-pve -> No Issues
8. Intel(R) Xeon(R) CPU E5-2637 v3 -> 6.8.12-10-pve -> No Issues
9. Intel(R) Xeon(R) Silver 4210R -> 6.11.11-2-pve -> No Issues
10. Intel(R) Xeon(R) Silver 4210R -> 6.11.11-2-pve -> No Issues
11. AMD Ryzen 7 PRO 8700GE -> 6.11.11-2-pve -> No Issues
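
In case it's useful for comparing before/after a kernel change (as in entry 3 above), here is a small sketch that dumps the device-to-IOMMU-group assignment straight from sysfs, nothing PVE-specific assumed:

Code:
# Print "group <N>: <PCI address>" for every device, sorted by group number
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d%/devices/*}; g=${g##*/}
    echo "group $g: ${d##*/}"
done | sort -V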

That's all the servers I'm running right now, with their kernel versions. I still have to update some of them, but I've had no issues at all for as long as I can remember.
Hope that helps some.

There is a trend: the servers with newer kernels see more use, which means I update the productive and important servers earlier and more often.
And they usually run a lot of VMs.
The servers with older kernels I don't update, either because of a stone-old CPU or because they aren't important and not heavily used.

I update the productive servers earlier and more often for performance reasons.
Cheers

EDIT:
8. Intel(R) Xeon(R) CPU E5-2637 v3 -> Upgraded to 6.14.5-1-bpo12-pve -> No Issues
(On all servers that support ECC, ECC/EDAC works; there were no interface name changes, etc. No issues.)
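A quick way to verify EDAC is actually active after a kernel change (a sketch; edac-util comes from the optional edac-utils package):

Code:
# Check that an EDAC driver bound to the memory controller
dmesg | grep -i edac
# Per-controller corrected/uncorrected error counts
edac-util -v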
 
Kernel 6.11.11-2-pve boots without any issues.
However, VMs do not start on any version of 6.14.
Attached is the system log.
Thank you.
Is the log file you attached complete? I did not see any error, nor any mention of important system services.

Can you regenerate the full log using something like:

journalctl --no-hostname -o short-precise --since=2025-05-24 | zstd >journal.log.zst

I added the since parameter to limit the size of the log file; make sure to adapt it if you booted the kernel that's causing you issues earlier than today. Alternatively, you could swap that parameter for -b -1 to get the full log of the last boot, or -b -2 to get the log from two boots ago, and so on, e.g.:
journalctl --no-hostname -o short-precise -b -1 | zstd >journal.log.zst
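
To read the compressed log afterwards (zstdcat ships with the zstd package):

Code:
zstdcat journal.log.zst | less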
 
Is the log file you attached complete? I did not see any error, nor any mention of important system services.

Can you regenerate the full log using something like:

journalctl --no-hostname -o short-precise --since=2025-05-24 | zstd >journal.log.zst

I added the since parameter to limit the size of the log file; make sure to adapt it if you booted the kernel that's causing you issues earlier than today. Alternatively, you could swap that parameter for -b -1 to get the full log of the last boot, or -b -2 to get the log from two boots ago, and so on, e.g.:
journalctl --no-hostname -o short-precise -b -1 | zstd >journal.log.zst
Sure, here it is.
Thank you,
 

Sure, here it is.
Thank you,

Thanks, but it seems like the host just reset, and potentially crashed before that, so sadly there is no real error message in that log. Did you notice anything on the display, if that host is connected to one? Did you have to intervene manually, e.g. did the host seem unresponsive and you pressed the reset button or the like?

FWIW, your BIOS is the rather dated version "4402" from "02/03/2023"; there have been a few releases since then, the latest being version "5002" from February 2025:
https://rog.asus.com/motherboards/rog-crosshair/rog-crosshair-viii-dark-hero-model/helpdesk_bios/

It cannot be said for certain, but an old BIOS seldom helps with compatibility under newer kernels, so updating it might be a good idea to rule that out as a potential cause. If you booted via EFI, you could just copy the firmware file (IIRC it's the .CAP one) to the ESP FAT partition and then select it in the BIOS menu's update wizard; at least it worked well that way for a slightly older ASUS motherboard I have.
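A sketch of that ESP-copy approach; the partition device and the .CAP file name below are just examples, adjust them to your board and firmware download:

Code:
# Identify the EFI system partition (look for "EFI System" / vfat)
lsblk -o NAME,PARTTYPENAME,FSTYPE
# Mount it and copy the downloaded firmware capsule onto it
mount /dev/nvme0n1p2 /mnt        # example ESP device
cp CROSSHAIR-VIII-5002.CAP /mnt/ # example file name
umount /mnt
# Then reboot into the BIOS setup and point its update wizard at the file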
 
Hello,

on kernel 6.14, the VMs that I configured to autostart seem to start (according to the Proxmox UI) after booting.
Unfortunately, the console / VNC does not work via the UI.
Sending commands to a VM like shutdown, reboot, etc. also does not work.
The VMs cannot be reached via the network / they seem to hang.
When rebooting the Proxmox server, the system hangs at the systemd step where it waits for the VMs to shut down.
It hangs there until the VM eventually gets killed.

I can see a lot of log entries like the following in "journalctl -xe":
Code:
May 26 11:59:43 prox pvestatd[1097]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries

On kernels 6.11 and 6.8 I do not have this issue.
The server is based on an Asus PRIME N100I-D D4-CSM mainboard with an Intel N100 CPU. The latest BIOS is installed.

A fix for this issue would be greatly appreciated.
Thanks and best regards
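
For anyone hitting the same symptom, a quick sanity check is whether the QMP socket exists at all and what PVE reports for the VM (default PVE paths; VM 101 taken from the log above):

Code:
# The QMP socket PVE uses to talk to the VM's QEMU process
ls -l /var/run/qemu-server/101.qmp
# Query the VM state; hangs or timeouts here confirm the QMP issue
qm status 101 --verbose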
 
Hello,

on kernel 6.14, the VMs that I configured to autostart seem to start (according to the Proxmox UI) after booting.
Unfortunately, the console / VNC does not work via the UI.
Sending commands to a VM like shutdown, reboot, etc. also does not work.
The VMs cannot be reached via the network / they seem to hang.
When rebooting the Proxmox server, the system hangs at the systemd step where it waits for the VMs to shut down.
It hangs there until the VM eventually gets killed.

I can see a lot of log entries like the following in "journalctl -xe":
Code:
May 26 11:59:43 prox pvestatd[1097]: VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries

On kernels 6.11 and 6.8 I do not have this issue.
The server is based on an Asus PRIME N100I-D D4-CSM mainboard with an Intel N100 CPU. The latest BIOS is installed.

A fix for this issue would be greatly appreciated.
Thanks and best regards
And the host network is working just fine? It's not that some interfaces got renamed? That can occasionally happen on (major) kernel updates if the interface names are not pinned.

Check whether the output of ip addr and the configured network interfaces in Node -> Network on the web UI (or in the /etc/network/interfaces config file) still match. Also check the system log for any messages that look like a (network/guest-related) error.
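A minimal sketch of those checks (the grep patterns are examples; vmbr0 is the PVE default bridge naming):

Code:
# Current kernel-side interface names and addresses
ip -br addr
# Interface names referenced by the network config
grep -E '^(auto|iface)' /etc/network/interfaces
# Warnings and errors from the current boot, filtered for network hints
journalctl -b -p warning --no-pager | grep -iE 'link|vmbr|eno|enp'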
 
I saw there was an update to the kernel, so I did a test upgrade on another node still on 6.8 and that went well. So I did the same on the node using 6.14 and saw many warnings. I haven't rebooted yet, but wanted to report it here first in case there is something I should do beforehand:

Code:
Preparing to unpack .../0-proxmox-archive-keyring_3.2_all.deb ...
Unpacking proxmox-archive-keyring (3.2) over (3.1) ...
Preparing to unpack .../1-pve-firmware_3.15-4_all.deb ...
Unpacking pve-firmware (3.15-4) over (3.15-3) ...
Selecting previously unselected package proxmox-kernel-6.14.5-1-bpo12-pve-signed.
Preparing to unpack .../2-proxmox-kernel-6.14.5-1-bpo12-pve-signed_6.14.5-1~bpo12+1_amd64.deb ...
Unpacking proxmox-kernel-6.14.5-1-bpo12-pve-signed (6.14.5-1~bpo12+1) ...
Preparing to unpack .../3-proxmox-kernel-6.14_6.14.5-1~bpo12+1_all.deb ...
Unpacking proxmox-kernel-6.14 (6.14.5-1~bpo12+1) over (6.14.0-2) ...
Selecting previously unselected package proxmox-kernel-6.8.12-11-pve-signed.
Preparing to unpack .../4-proxmox-kernel-6.8.12-11-pve-signed_6.8.12-11_amd64.deb ...
Unpacking proxmox-kernel-6.8.12-11-pve-signed (6.8.12-11) ...
Preparing to unpack .../5-proxmox-kernel-6.8_6.8.12-11_all.deb ...
Unpacking proxmox-kernel-6.8 (6.8.12-11) over (6.8.12-10) ...
Preparing to unpack .../6-proxmox-widget-toolkit_4.3.11_all.deb ...
Unpacking proxmox-widget-toolkit (4.3.11) over (4.3.10) ...
Preparing to unpack .../7-pve-i18n_3.4.4_all.deb ...
Unpacking pve-i18n (3.4.4) over (3.4.2) ...
Setting up proxmox-widget-toolkit (4.3.11) ...
Setting up pve-firmware (3.15-4) ...
Setting up proxmox-archive-keyring (3.2) ...
Setting up proxmox-kernel-6.8.12-11-pve-signed (6.8.12-11) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/dkms 6.8.12-11-pve /boot/vmlinuz-6.8.12-11-pve
dkms: WARNING: Linux headers are missing, which may explain the above failures.
      please install the linux-headers-6.8.12-11-pve package to fix this.
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 6.8.12-11-pve /boot/vmlinuz-6.8.12-11-pve
update-initramfs: Generating /boot/initrd.img-6.8.12-11-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/B7F3-3337
        Copying kernel 6.14.0-2-pve
No initrd-image /boot/initrd.img-6.14.5-1-bpo12-pve found - skipping
        Copying kernel 6.8.12-11-pve
        Removing old version 6.14.0-1-pve
        Removing old version 6.8.12-10-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.14.0-2-pve
Found initrd image: /boot/initrd.img-6.14.0-2-pve
Found linux image: /boot/vmlinuz-6.8.12-11-pve
Found initrd image: /boot/initrd.img-6.8.12-11-pve
Adding boot menu entry for UEFI Firmware Settings ...
done
run-parts: executing /etc/kernel/postinst.d/proxmox-auto-removal 6.8.12-11-pve /boot/vmlinuz-6.8.12-11-pve
run-parts: executing /etc/kernel/postinst.d/zz-proxmox-boot 6.8.12-11-pve /boot/vmlinuz-6.8.12-11-pve
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/B7F3-3337
        Copying kernel 6.14.0-2-pve
No initrd-image /boot/initrd.img-6.14.5-1-bpo12-pve found - skipping
        Copying kernel 6.8.12-11-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.14.0-2-pve
Found initrd image: /boot/initrd.img-6.14.0-2-pve
Found linux image: /boot/vmlinuz-6.8.12-11-pve
Found initrd image: /boot/initrd.img-6.8.12-11-pve
Adding boot menu entry for UEFI Firmware Settings ...
done
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 6.8.12-11-pve /boot/vmlinuz-6.8.12-11-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.14.5-1-bpo12-pve
/usr/sbin/grub-probe: error: unknown filesystem.
Found linux image: /boot/vmlinuz-6.14.0-2-pve
Found initrd image: /boot/initrd.img-6.14.0-2-pve
Found linux image: /boot/vmlinuz-6.14.0-1-pve
Found initrd image: /boot/initrd.img-6.14.0-1-pve
Found linux image: /boot/vmlinuz-6.8.12-11-pve
Found initrd image: /boot/initrd.img-6.8.12-11-pve
Found linux image: /boot/vmlinuz-6.8.12-10-pve
Found initrd image: /boot/initrd.img-6.8.12-10-pve
/usr/sbin/grub-probe: error: unknown filesystem.
Adding boot menu entry for UEFI Firmware Settings ...
done
Setting up pve-i18n (3.4.4) ...
Setting up proxmox-kernel-6.14.5-1-bpo12-pve-signed (6.14.5-1~bpo12+1) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/dkms 6.14.5-1-bpo12-pve /boot/vmlinuz-6.14.5-1-bpo12-pve
dkms: WARNING: Linux headers are missing, which may explain the above failures.
      please install the linux-headers-6.14.5-1-bpo12-pve package to fix this.
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 6.14.5-1-bpo12-pve /boot/vmlinuz-6.14.5-1-bpo12-pve
update-initramfs: Generating /boot/initrd.img-6.14.5-1-bpo12-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/B7F3-3337
        Copying kernel 6.14.0-2-pve
        Copying kernel 6.14.5-1-bpo12-pve
        Copying kernel 6.8.12-11-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.14.5-1-bpo12-pve
Found initrd image: /boot/initrd.img-6.14.5-1-bpo12-pve
Found linux image: /boot/vmlinuz-6.14.0-2-pve
Found initrd image: /boot/initrd.img-6.14.0-2-pve
Found linux image: /boot/vmlinuz-6.8.12-11-pve
Found initrd image: /boot/initrd.img-6.8.12-11-pve
Adding boot menu entry for UEFI Firmware Settings ...
done
run-parts: executing /etc/kernel/postinst.d/proxmox-auto-removal 6.14.5-1-bpo12-pve /boot/vmlinuz-6.14.5-1-bpo12-pve
run-parts: executing /etc/kernel/postinst.d/zz-proxmox-boot 6.14.5-1-bpo12-pve /boot/vmlinuz-6.14.5-1-bpo12-pve
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/B7F3-3337
        Copying kernel 6.14.0-2-pve
        Copying kernel 6.14.5-1-bpo12-pve
        Copying kernel 6.8.12-11-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.14.5-1-bpo12-pve
Found initrd image: /boot/initrd.img-6.14.5-1-bpo12-pve
Found linux image: /boot/vmlinuz-6.14.0-2-pve
Found initrd image: /boot/initrd.img-6.14.0-2-pve
Found linux image: /boot/vmlinuz-6.8.12-11-pve
Found initrd image: /boot/initrd.img-6.8.12-11-pve
Adding boot menu entry for UEFI Firmware Settings ...
done
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 6.14.5-1-bpo12-pve /boot/vmlinuz-6.14.5-1-bpo12-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.14.5-1-bpo12-pve
Found initrd image: /boot/initrd.img-6.14.5-1-bpo12-pve
/usr/sbin/grub-probe: error: unknown filesystem.
Found linux image: /boot/vmlinuz-6.14.0-2-pve
Found initrd image: /boot/initrd.img-6.14.0-2-pve
Found linux image: /boot/vmlinuz-6.14.0-1-pve
Found initrd image: /boot/initrd.img-6.14.0-1-pve
Found linux image: /boot/vmlinuz-6.8.12-11-pve
Found initrd image: /boot/initrd.img-6.8.12-11-pve
Found linux image: /boot/vmlinuz-6.8.12-10-pve
Found initrd image: /boot/initrd.img-6.8.12-10-pve
/usr/sbin/grub-probe: error: unknown filesystem.
Adding boot menu entry for UEFI Firmware Settings ...
done
Setting up proxmox-kernel-6.14 (6.14.5-1~bpo12+1) ...
Setting up proxmox-kernel-6.8 (6.8.12-11) ...

Your System is up-to-date


Seems you installed a kernel update - Please consider rebooting
this node to activate the new kernel.

starting shell
 
I saw there was an update to the kernel, so I did a test upgrade on another node still on 6.8 and that went well. So I did the same on the node using 6.14 and saw many warnings. I haven't rebooted yet, but wanted to report it here first in case there is something I should do beforehand:

Code:
run-parts: executing /etc/kernel/postinst.d/dkms 6.8.12-11-pve /boot/vmlinuz-6.8.12-11-pve
dkms: WARNING: Linux headers are missing, which may explain the above failures.
      please install the linux-headers-6.8.12-11-pve package to fix this.

...

run-parts: executing /etc/kernel/postinst.d/dkms 6.14.5-1-bpo12-pve /boot/vmlinuz-6.14.5-1-bpo12-pve
dkms: WARNING: Linux headers are missing, which may explain the above failures.
      please install the linux-headers-6.14.5-1-bpo12-pve package to fix this.
Do you have DKMS set up for an out-of-tree kernel module (e.g., a network driver or the like) that you rely on?

In that case it might indeed be good to install the header packages, ideally the meta-packages for 6.8 and 6.14, to ensure you get all updates. The following command in a root shell on the PVE host will install the relevant headers for you, i.e. the ones for our current default kernel (which will then pull in new ones even if the default kernel changes) and the ones for the 6.14 opt-in series:

apt install proxmox-default-headers proxmox-headers-6.14

Then rerun DKMS:
dkms autoinstall

An alternative to the dkms command would be to reinstall the kernel you want to boot into, e.g. apt install --reinstall proxmox-kernel-6.14.5-1-bpo12-pve-signed

If you're certain that you do not rely on DKMS, you can also just ignore this. And for what it's worth, if you do rely on it and are mistaken, you can simply reboot into the previously working kernel using the bootloader menu on boot.
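
To see whether DKMS manages anything at all on a node, the status sub-command is enough; no output means nothing relies on it:

Code:
dkms status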
 
Do you have DKMS set up for an out-of-tree kernel module (e.g., a network driver or the like) that you rely on?

In that case it might indeed be good to install the header packages, ideally the meta-packages for 6.8 and 6.14, to ensure you get all updates. The following command in a root shell on the PVE host will install the relevant headers for you, i.e. the ones for our current default kernel (which will then pull in new ones even if the default kernel changes) and the ones for the 6.14 opt-in series:

apt install proxmox-default-headers proxmox-headers-6.14

Then rerun DKMS:
dkms autoinstall

An alternative to the dkms command would be to reinstall the kernel you want to boot into, e.g. apt install --reinstall proxmox-kernel-6.14.5-1-bpo12-pve-signed

If you're certain that you do not rely on DKMS, you can also just ignore this. And for what it's worth, if you do rely on it and are mistaken, you can simply reboot into the previously working kernel using the bootloader menu on boot.

I recall having issues a long time back with a dual-port LAN adapter I was using: it would randomly not initialize correctly on boot, and I would lose connectivity until I rebooted a few times. For a while I was recompiling the Intel driver to exclude a check so that the driver would always load, but eventually I replaced the card and the problem went away. I think I reverted to the stock drivers and no longer recompile the driver; at least I haven't done that in over a year, as I always needed to redo it for each kernel update (which I got fed up with doing). It was so long ago that I can't even remember how I did it all, so I'm not sure where to check whether I've accidentally left any of it set up.

I just checked, and dkms is not even installed (command not found), so that's not going to help. Do you think I can remove whatever is causing the system to think I need the headers, and if so, what do I have to do?
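
For reference, a quick way to look for leftovers of such a setup (a sketch; these are the default DKMS and module-source locations):

Code:
# Any DKMS-related packages still installed?
dpkg -l | grep -i dkms
# Module sources DKMS would build from, if any remain
ls /usr/src /var/lib/dkms 2>/dev/null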

Sorry for the noob questions. I originally tried the 6.14 kernel to see if it helped with some storage issues, but those turned out to be the drives themselves, as the problem went away after replacing them. I've only stayed on 6.14 because I'm happy to test, and I'm also having issues with my Fedora 41/42 VMs not really liking 6.14; maybe between the two I can get that fixed as well (I've mentioned this in earlier posts).
 
Anyone have any details on what this bpo12 release is all about? The changelog doesn't help me.

Code:
# apt-get changelog proxmox-kernel-6.14.5-1-bpo12-pve-signed
Get:1 https://metadata.cdn.proxmox.com proxmox-kernel-signed-6.14 6.14.5+1~bpo12+1 Changelog [36.7 kB]
Fetched 36.7 kB in 0s (119 kB/s)     
proxmox-kernel-signed-6.14 (6.14.5+1~bpo12+1) bookworm; urgency=medium

  * update sources to Ubuntu-6.14.0-22.22 based on upstream stable 6.14.5.

  * ship modules in canonical /usr/lib path, avoiding the aliased /lib one to
    better follow usrmerge.

 -- Proxmox Support Team <support@proxmox.com>  Wed, 21 May 2025 17:55:32 +0200
 
Anyone have any details on what this bpo12 release is all about? The changelog doesn't help me.

Code:
# apt-get changelog proxmox-kernel-6.14.5-1-bpo12-pve-signed
Get:1 https://metadata.cdn.proxmox.com proxmox-kernel-signed-6.14 6.14.5+1~bpo12+1 Changelog [36.7 kB]
Fetched 36.7 kB in 0s (119 kB/s)   
proxmox-kernel-signed-6.14 (6.14.5+1~bpo12+1) bookworm; urgency=medium

  * update sources to Ubuntu-6.14.0-22.22 based on upstream stable 6.14.5.

  * ship modules in canonical /usr/lib path, avoiding the aliased /lib one to
    better follow usrmerge.

 -- Proxmox Support Team <support@proxmox.com>  Wed, 21 May 2025 17:55:32 +0200
Whatever this kernel release is about, it is busted beyond belief.
CPU usage is up, with I/O delay and server load through the roof.
[Attachment: bpo12.png]
I cannot start Linux VMs (TASK ERROR: timeout waiting on systemd) and cannot shut down VMs or CTs through the PVE interface.
After pinning the previous kernel, I couldn't soft-reboot the server; I had to do a cold reset.


HP DL380p Gen9
 
Anyone have any details on what this bpo12 release is all about? The changelog doesn't help me.

Code:
# apt-get changelog proxmox-kernel-6.14.5-1-bpo12-pve-signed
Get:1 https://metadata.cdn.proxmox.com proxmox-kernel-signed-6.14 6.14.5+1~bpo12+1 Changelog [36.7 kB]
Fetched 36.7 kB in 0s (119 kB/s)   
proxmox-kernel-signed-6.14 (6.14.5+1~bpo12+1) bookworm; urgency=medium

  * update sources to Ubuntu-6.14.0-22.22 based on upstream stable 6.14.5.

  * ship modules in canonical /usr/lib path, avoiding the aliased /lib one to
    better follow usrmerge.

 -- Proxmox Support Team <support@proxmox.com>  Wed, 21 May 2025 17:55:32 +0200
It's a standard kernel update; there are thousands of changes between the previous stable release and 6.14.5, far too many to list them all and still be useful to anybody.
What are you interested in specifically? The bpo12 part just refers to a backport for the twelfth Debian release (bookworm), which PVE 8 is based on. The reason it's appearing now is that we have been preparing for PVE 9 for a while and now want to start ensuring there is a correct upgrade path, hence building the 6.14 kernel with a versioning scheme that guarantees that.
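
For example, the tilde makes the backport version sort lower than the plain revision, which is exactly what gives the clean upgrade path; dpkg can verify that ordering directly:

Code:
# 6.14.5-1~bpo12+1 sorts before a plain (non-backport) 6.14.5-1 build
dpkg --compare-versions '6.14.5-1~bpo12+1' lt '6.14.5-1' && echo 'backport sorts lower'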
 