Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

I do not have any similar setup, but things I'd look into:

1. Have you checked that, with the new kernel, the ASM1166 is in fact still at PCI address 04:00.0?
2. Have you checked that the IOMMU groups are correctly segregated for that device? You can do this with:
Code:
pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
as shown in the docs under "General Requirements". A quick command-line version of both checks is also sketched below.

These things can change.
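Not specific to your setup, but here is a minimal command-line sketch of both checks (the 04:00.0 address is the one from this thread; adjust it for your host):
Code:
# confirm the controller is still at 04:00.0 and see which driver is bound to it
lspci -nnk -s 04:00.0
# show which IOMMU group the device belongs to
readlink /sys/bus/pci/devices/0000:04:00.0/iommu_group
# list everything sharing that group (ideally only the controller itself)
ls /sys/bus/pci/devices/0000:04:00.0/iommu_group/devices/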
Hey,
thanks for your reply.

1. Yes, I just checked, and in fact the ASM1166 is always at 04:00.0 on kernels 6.8, 6.11 and 6.14
2. Yes, the ASM1166 is always in its own IOMMU group for all aforementioned kernel versions

Changing the kernel version seems to have no effect on points 1 and 2 in my setup.

Despite this, I still have the problem that, with kernel 6.14, my TrueNAS VM with PCIe passthrough configured for the ASM1166 SATA controller cannot be started.

I can see this log message getting "spammed" to journalctl repeatedly:
Code:
VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries


As mentioned before, on both kernel 6.8 and kernel 6.11 the setup works just fine.

It appears that kernel 6.14 has issues with PCIe passthrough.
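If it helps narrow things down, the underlying passthrough error is often visible in the host journal right after the failed start. A generic sketch (the VM ID 101 is taken from the message above):
Code:
# kernel messages from this boot related to VFIO / IOMMU
journalctl -b -k | grep -iE 'vfio|iommu|dmar'
# messages from the PVE daemon mentioning the VM
journalctl -b -u pvedaemon | grep 'VM 101'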
 
  • Like
Reactions: mubs
The fix I was asking about in my reply that RealPjotr was quoting is this one: https://github.com/proxmox/pve-kernel/commit/5b22d6c07de352cbffd35a02570094dc66b9dc6d

From what I can tell, this was cherry-picked after 6.14.5-1 was tagged (looking at this branch: https://github.com/proxmox/pve-kernel/commits/bookworm-6.14/) and is also not mentioned in the changelog for 6.14.8-x

In any case, Uturn has confirmed the issue is resolved, which is great to hear, and I look forward to this being released to the enterprise repo when it's ready.

With PVE 9.0 released today, will we see 6.14.8 or newer become available as an opt-in kernel for 8.4 soon? My 8.4 sources are still pointing to 6.14.5-1~bpo12+1
 
With PVE 9.0 released today, will we see 6.14.8 or newer become available as an opt-in kernel for 8.4 soon? My 8.4 sources are still pointing to 6.14.5-1~bpo12+1

We actually moved proxmox-kernel-6.14.8-2-bpo12-pve-signed version 6.14.8-2~bpo12+1 to the enterprise repository today at around 19:15 CEST (CDN sync might have made it show up only a bit after that, though).
 
  • Like
Reactions: smalltrex
As mentioned before, on both kernel 6.8 and kernel 6.11 the setup works just fine.

It appears that kernel 6.14 has issues with PCIe passthrough.
Exactly, this is my problem too. And I provided a link that says it's fixed in kernel 6.16.

So is this fix now backported to 6.14.8-2 as part of the 9.0 release? I can't test this myself for at least 3-4 weeks.

Perhaps a stupid question: if I upgrade to Proxmox 9.0 and still have the PCIe passthrough problem, can I downgrade the kernel to 6.11?
 
...if I upgrade to Proxmox 9.0 and still have the PCIe passthrough problem, can I downgrade the kernel to 6.11?
If you start from scratch, only kernel 6.14 will be installed. But an upgrade to PVE 9 doesn't remove your installed kernels. You can still pin and use kernel 6.11. I don't know whether that setup is still supported, and you won't get updates for kernel 6.11 any longer, but it works.
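For reference, pinning works roughly like this (a sketch; the 6.11 version string is just an example and has to match what is actually installed on your host):
Code:
# show which kernels are still installed and bootable
proxmox-boot-tool kernel list
# pin a specific one, e.g. an older 6.11 build (example version string)
proxmox-boot-tool kernel pin 6.11.11-2-pve
# undo the pin later to boot the newest installed kernel again
proxmox-boot-tool kernel unpin
A reboot is needed for the pin to take effect.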
 
  • Like
Reactions: RealPjotr
The fix I was asking about in my reply that RealPjotr was quoting is this one: https://github.com/proxmox/pve-kernel/commit/5b22d6c07de352cbffd35a02570094dc66b9dc6d

From what I can tell, this was cherry-picked after 6.14.5-1 was tagged (looking at this branch: https://github.com/proxmox/pve-kernel/commits/bookworm-6.14/) and is also not mentioned in the changelog for 6.14.8-x

Are you referring to this patch?

https://github.com/proxmox/pve-kern...engine-ae4dma-Remove-deprecated-PCI-IDs.patch

You can check whether the patch has been applied on the Proxmox VE side at the following URL.
It's a good idea to check the update date as well.

https://github.com/proxmox/pve-kernel/tree/master/patches/kernel

If you cannot find what you are looking for, please check below.

https://launchpad.net/ubuntu/+source/linux

To find out which Ubuntu kernel it is based on, check the upstream sources of the submodules.

https://github.com/proxmox/pve-kernel/tree/master/submodules
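If you'd rather check locally than browse GitHub, a rough sketch (assuming git is installed; the commit hash is the one linked earlier in the thread):
Code:
git clone https://github.com/proxmox/pve-kernel.git
cd pve-kernel
# recent patch changes on the 6.14 branch
git log --oneline origin/bookworm-6.14 -- patches/kernel | head -n 20
# details of the specific cherry-pick
git show --stat 5b22d6c07de352cbffd35a02570094dc66b9dc6d
# which Ubuntu kernel commit the submodules are pinned to
git submodule status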
 
Last edited:
Does this optional kernel + package need to be removed prior to upgrading to PVE 9.0?

And would you just need to run something like
Code:
apt purge proxmox-kernel-6.14

Update: Found this on the upgrade guide, but it's still not clear if the package needs removing, or if it doesn't matter, since the PVE 9.0 bundled kernel will supersede it.

https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
Please note that you should reboot even if you already used the 6.14 kernel previously, through the opt-in package on Proxmox VE 8. This is required to guarantee the best compatibility with the rest of the system, as the updated kernel was (re-)built with the newer Proxmox VE 9 compiler and ABI versions.
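One way to verify after the upgrade and reboot which build you are actually running (the two release/package names are the ones mentioned in this thread):
Code:
# the opt-in PVE 8 build and the PVE 9 rebuild have different release strings
uname -r                          # e.g. 6.14.8-2-bpo12-pve vs. 6.14.8-2-pve
# list all installed 6.14 kernel packages
dpkg -l 'proxmox-kernel-6.14*'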
 
Last edited:
Exactly, this is my problem too. And I provided a link that says it's fixed in kernel 6.16.

So is this fix now backported to 6.14.8-2 as part of the 9.0 release? I can't test this myself for at least 3-4 weeks.

Perhaps a stupid question: if I upgrade to Proxmox 9.0 and still have the PCIe passthrough problem, can I downgrade the kernel to 6.11?
I would also love to see a fix for the PCIe passthrough issue in kernel 6.14.

If the issue is fixed in kernel 6.16, a backport of the fix would be a good idea, I think.

For now I switched back to kernel 6.8, as it is working stably.
Kernel 6.11 seems unmaintained / outdated.
 
I would also love to see a fix for the PCIe passthrough issue in kernel 6.14.

If the issue is fixed in kernel 6.16, a backport of the fix would be a good idea, I think.

For now I switched back to kernel 6.8, as it is working stably.
Kernel 6.11 seems unmaintained / outdated.
Hi there.
I have the same question: how to remove the opt-in 6.14 kernel from Proxmox 8 once upgraded to Proxmox 9? Right now I can see that 6.14.8-2-bpo12-pve and 6.14.8-2-pve are both installed, the former being initially installed from Proxmox 8.
 
Run "dpkg -l |grep proxmox-kernel".
Look up the exact name of the kernel you want to remove.
Afterwards "apt --purge remove proxmox-kernel-[version]", where "[version]" matches the versioning name, that you previously looked up. Real easy.
 
Last edited:
  • Like
Reactions: sylar12 and waltar
How/where do we report bugs in the 6.8 Proxmox kernel? There is a nasty mdraid bug that causes a kernel panic on shutdown or reboot. It is corrected in the 6.14 kernel, but they really should fix it in the 6.8 kernel as well.
 

Hi @GreenDamTan

I managed to patch the 16.9 driver OK, but the 17.5 driver failed to patch:

Code:
./NVIDIA-Linux-x86_64-550.144.02-vgpu-kvm.run --apply-patch ~/vgpu-proxmox/550.144.02.patch
Verifying archive integrity... OK
Uncompressing NVIDIA Accelerated Graphics Driver for Linux-x86_64 550.144.02........................
/usr/bin/patch: **** patch line 9 contains NUL byte
Failed to apply patch file "/root/vgpu-proxmox/550.144.02.patch".

Any idea why this may be?
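Not sure if this is the cause here, but a NUL byte in a patch usually means the file is no longer plain text (for example an interrupted or mangled download). A quick check, assuming the path from your command:
Code:
# should report something like "unified diff output, ASCII text", not "data"
file ~/vgpu-proxmox/550.144.02.patch
# count the NUL bytes in the file; a clean text patch should have 0
tr -dc '\0' < ~/vgpu-proxmox/550.144.02.patch | wc -c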

Also, I'm hoping that it will be possible to patch newer drivers so that they will be able to fully drive a Tesla P4 (Pascal) under kernel 6.14. I think one of the issues with the newer drivers is that mdev is deprecated and the P4 is not SR-IOV capable.
 
After upgrading to PVE 9.0.6 (kernel 6.14.11-1-pve), I encountered amdgpu related issues. My GPU is an AMD RX 6600 XT.

First:
Code:
Sep 07 09:36:46 pve kernel: Linux version 6.14.11-1-pve (tom@alp1) (gcc (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC PMX 6.14.11-1 (2025-08-26T16:06Z) ()
Sep 07 09:36:46 pve kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.14.11-1-pve root=/dev/mapper/pve-root ro quiet iommu=pt
...
Sep 07 09:37:01 pve kernel: amdgpu 0000:03:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
Sep 07 09:37:12 pve kernel: amdgpu 0000:03:00.0: [drm] *ERROR* [CRTC:85:crtc-0] flip_done timed out
According to the help here, I solved it by adding amdgpu.dcdebugmask=0x10.
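For anyone hitting the same thing: a sketch of making that parameter permanent on a GRUB-booted host (the command line above suggests GRUB; on systemd-boot / proxmox-boot-tool setups you would edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):
Code:
# in /etc/default/grub, extend the default command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt amdgpu.dcdebugmask=0x10"
update-grub     # regenerate the GRUB configuration
reboot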

Second:
Code:
Sep 07 12:21:51 pve kernel: Linux version 6.14.11-1-pve (tom@alp1) (gcc (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC PMX 6.14.11-1 (2025-08-26T16:06Z) ()
Sep 07 12:21:51 pve kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.14.11-1-pve root=/dev/mapper/pve-root ro quiet iommu=pt
...
Sep 07 12:22:07 pve kernel: [drm:amdgpu_discovery_set_ip_blocks [amdgpu]] *ERROR* amdgpu_discovery_init failed
Sep 07 12:22:07 pve kernel: amdgpu 0000:03:00.0: amdgpu: Fatal error during GPU init
Sep 07 12:22:07 pve kernel: amdgpu 0000:03:00.0: amdgpu: amdgpu: finishing device.
Sep 07 12:22:07 pve kernel: amdgpu 0000:03:00.0: probe with driver amdgpu failed with error -22
This seems to be a kernel bug; see the related issues.

Unfortunately, the above issues will cause a green screen freeze, whereas 6.8.12-14-pve does not exhibit these problems.