Hey, I don't have a similar setup, but here are some things I'd look into:
1. Have you checked that currently (on the new kernel) the ASM1166 is in fact still on the 04:00.0 bus?
2. Have you checked that the IOMMU groups are correctly segregated for that device, as shown in the docs under "General Requirements"? You can do this with:
Code:
pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
These things can change between kernel versions.
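For example, something like this should show where the controller ended up and which IOMMU group it is in (just a rough sketch; the "asm1166" match and the 0000:04:00.0 address are taken from your description and may well differ on the new kernel):
Code:
lspci -nnk | grep -i -A3 asm1166
find /sys/kernel/iommu_groups/ -type l | grep 0000:04:00.0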
VM 101 qmp command failed - VM 101 qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries
The fix I was asking about in my reply that RealPjtor was quoting is this one: https://github.com/proxmox/pve-kernel/commit/5b22d6c07de352cbffd35a02570094dc66b9dc6d
From what I can tell, this was cherry-picked after 6.14.5-1 was tagged (looking at this branch: https://github.com/proxmox/pve-kernel/commits/bookworm-6.14/) and is also not mentioned in the changelog for 6.14.8-x
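If anyone wants to double-check where that commit ended up, git can answer it directly against a clone of the repo (a sketch; tag and branch names are whatever the pve-kernel repository actually uses):
Code:
git clone https://github.com/proxmox/pve-kernel.git
cd pve-kernel
git tag --contains 5b22d6c07de352cbffd35a02570094dc66b9dc6d
git branch -r --contains 5b22d6c07de352cbffd35a02570094dc66b9dc6d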
In any case, Uturn has confirmed the issue is resolved, which is great to hear, and I look forward to this being released to the enterprise repo when it's ready.
With PVE 9.0 released today, will we see 6.14.8 or newer become available as opt-in for 8.4 soon? My 8.4 sources are still pointing to 6.14.5-1~bpo12+1.
proxmox-kernel-6.14.8-2-bpo12-pve-signed (version 6.14.8-2~bpo12+1) was uploaded to the enterprise repository today at around 19:15 CEST (the CDN sync might have made it show up only a bit after that, though).

Like mentioned before, on both kernel 6.8 and 6.11 the setup works just fine.
It appears that kernel 6.14 has issues with PCIe passthrough.
...if I upgrade to Proxmox 9.0 and still have the PCIe passthrough problem, can I downgrade the kernel to 6.11?

If you start from scratch, only kernel version 6.14 will be installed. But an upgrade to PVE 9 doesn't remove your installed kernels, so you can still pin and use kernel 6.11. I don't know whether that setup is still supported, and you won't get updates for kernel 6.11 any longer, but it works.
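For what it's worth, pinning a still-installed older kernel is straightforward with proxmox-boot-tool (a sketch; the exact 6.11 version string depends on what is installed, so take it from the list):
Code:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.11.11-2-pve   # example version, use one shown by 'kernel list'
proxmox-boot-tool kernel unpin               # later, to return to the default kernel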
apt purge proxmox-kernel-6.14
Please note that you should reboot even if you already used the 6.14 kernel previously, through the opt-in package on Proxmox VE 8. This is required to guarantee the best compatibility with the rest of the system, as the updated kernel was (re-)built with the newer Proxmox VE 9 compiler and ABI versions.
I would also love to see a fix for the PCIe passthrough issue in kernel 6.14.
Exactly, this is my problem too. And I provided a link that says it's fixed in kernel 6.16.
So is this fix now backported to 6.14.8-2 as part of the 9.0 release? I can't test this myself for at least 3-4 weeks.
Perhaps a stupid question: if I upgrade to Proxmox 9.0 and still have the PCIe passthrough problem, can I downgrade the kernel to 6.11?
/etc/modprobe.d/nvidia.conf
# framebuffer and open-source (nouveau) drivers
blacklist nvidiafb
blacklist nouveau
# proprietary NVIDIA driver modules
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
# nova (newer upstream NVIDIA drivers)
blacklist nova_core
blacklist nova_drm
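In case someone copies this: after changing anything under /etc/modprobe.d/ the initramfs needs to be regenerated and the host rebooted, otherwise the blacklist may not take effect early enough (a sketch, assuming standard Proxmox/Debian tooling):
Code:
update-initramfs -u -k all
proxmox-boot-tool refresh   # only relevant on proxmox-boot-tool managed systems
reboot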
Hi there. I would also love to see a fix for the PCIe passthrough issue in kernel 6.14.
If the issue is fixed in kernel 6.16 a backport of the fix would be a good idea I think.
For now I switched back to kernel 6.8 as it is working stable.
Kernel 6.11 seems unmaintained / outdated.
How/where do we report bugs in the 6.8 Proxmox kernel? There is a nasty mdraid bug that causes a kernel panic on shutdown or reboot. It is corrected in the 6.14 kernel, but they really should fix it in the 6.8 kernel as well.

https://bugzilla.proxmox.com/
./NVIDIA-Linux-x86_64-550.144.02-vgpu-kvm.run --apply-patch ~/vgpu-proxmox/550.144.02.patch
Verifying archive integrity... OK
Uncompressing NVIDIA Accelerated Graphics Driver for Linux-x86_64 550.144.02........................................
/usr/bin/patch: **** patch line 9 contains NUL byte
Failed to apply patch file "/root/vgpu-proxmox/550.144.02.patch".
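A NUL byte in a patch line usually means the .patch file itself is not a plain-text diff (for example it was saved as an HTML page or with a mangled encoding), not that the installer is broken. A quick sanity check, using the path from the error message, could look like this (just a sketch):
Code:
file /root/vgpu-proxmox/550.144.02.patch
head -c 256 /root/vgpu-proxmox/550.144.02.patch | xxd | head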
With kernel 6.14.11-1-pve, I encountered amdgpu-related issues. My GPU is an AMD RX 6600 XT.

Sep 07 09:36:46 pve kernel: Linux version 6.14.11-1-pve (tom@alp1) (gcc (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC PMX 6.14.11-1 (2025-08-26T16:06Z) ()
Sep 07 09:36:46 pve kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.14.11-1-pve root=/dev/mapper/pve-root ro quiet iommu=pt
...
Sep 07 09:37:01 pve kernel: amdgpu 0000:03:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
Sep 07 09:37:12 pve kernel: amdgpu 0000:03:00.0: [drm] *ERROR* [CRTC:85:crtc-0] flip_done timed out
amdgpu.dcdebugmask=0x10
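For reference, this is roughly how such a parameter gets onto the kernel command line on a Proxmox host (a sketch; which file to edit depends on whether the system boots via GRUB or via systemd-boot/proxmox-boot-tool):
Code:
# GRUB: append the option to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub
# systemd-boot / proxmox-boot-tool: append it to the single line in /etc/kernel/cmdline, then:
proxmox-boot-tool refresh
# reboot afterwards in either case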
Sep 07 12:21:51 pve kernel: Linux version 6.14.11-1-pve (tom@alp1) (gcc (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC PMX 6.14.11-1 (2025-08-26T16:06Z) ()
Sep 07 12:21:51 pve kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.14.11-1-pve root=/dev/mapper/pve-root ro quiet iommu=pt
...
Sep 07 12:22:07 pve kernel: [drm:amdgpu_discovery_set_ip_blocks [amdgpu]] *ERROR* amdgpu_discovery_init failed
Sep 07 12:22:07 pve kernel: amdgpu 0000:03:00.0: amdgpu: Fatal error during GPU init
Sep 07 12:22:07 pve kernel: amdgpu 0000:03:00.0: amdgpu: amdgpu: finishing device.
Sep 07 12:22:07 pve kernel: amdgpu 0000:03:00.0: probe with driver amdgpu failed with error -22
Kernel 6.8.12-14-pve does not exhibit these problems.