PCI passthrough ERROR since Kernel 5.13.19-1 (upgrade from 7.0 to 7.1)

Yes, I did all this. I was already running PCI passthrough on the previous version for a long time.
This error appeared after the update.
 
I'm having the same issue on Proxmox 7.1 with a J3455 (Apollo Lake) platform. My IOMMU groups are separated after patching the kernel and enabling the ACS override function.

Code:
[ 2416.585989] DMAR: DRHD: handling fault status reg 2
[ 2416.586008] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x0 [fault reason 0x02] Present bit in context entry is clear
[ 2419.607363] vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
[ 2419.607465] vfio-pci 0000:00:02.0: IGD assignment does not support opregion v2.0 with an extended VBT region
[ 2419.607482] vfio-pci 0000:00:02.0: Failed to setup Intel IGD regions
[ 2419.917339] DMAR: DRHD: handling fault status reg 2
[ 2419.917356] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x0 [fault reason 0x02] Present bit in context entry is clear
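For reference, a quick way to double-check how the IOMMU groups ended up after enabling the ACS override is to walk sysfs. This is just a generic sketch, nothing Proxmox-specific:

Bash:
# list every PCI device together with the IOMMU group it belongs to
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")
    echo "IOMMU group $group: $(lspci -nns "$(basename "$dev")")"
done | sort -V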
 
I just downgraded the kernel to 5.11.22-7-pve and can confirm that it is now possible to pass through the Intel HD 500 iGPU to an Ubuntu 21.10 guest VM.

Bash:
$ apt install pve-kernel-5.11.22-7-pve
$ pve-efiboot-tool kernel list
$ pve-efiboot-tool kernel add 5.11.22-7-pve
$ update-initramfs -u -k all && pve-efiboot-tool refresh
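For anyone doing the same downgrade, it may be worth verifying after the reboot that the older kernel is actually running and that the iGPU is bound to vfio-pci before starting the VM (generic checks; adjust the PCI address to your setup):

Bash:
uname -r                 # should now report 5.11.22-7-pve
lspci -nnk -s 00:02.0    # "Kernel driver in use" should show vfio-pci, not i915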


I'm getting a display signal and can use the desktop GUI, but the guest VM freezes as soon as I start demanding applications like Firefox.

[ 2538.985145] perf: interrupt took too long (3149 > 3142), lowering kernel.perf_event_max_sample_rate to 63500

So it is pretty clear to me that there is something wrong with kernel 5.15.12-1-pve with regard to GVT-d.

There are still errors when starting the Ubuntu 21.10 guest, but I'm not sure if they are caused by my config.

Bash:
$ qm start 105
kvm: -device vfio-pci,host=0000:00:02.0,id=hostpci0,bus=pci.0,addr=0x2,romfile=/usr/share/kvm/intel_hd500_j3455_fixed.rom: vfio 0000:00:02.0: failed getting region info for VGA region index 8: Invalid argument
kvm: -device vfio-pci,host=0000:00:02.0,id=hostpci0,bus=pci.0,addr=0x2,romfile=/usr/share/kvm/intel_hd500_j3455_fixed.rom: IGD device 0000:00:02.0 failed to enable VGA access, legacy mode disabled

$ dmesg -w
[ 219.006518] DMAR: DRHD: handling fault status reg 2
[ 219.006540] DMAR: [DMA Write] Request device [00:02.0] PASID ffffffff fault addr 0 [fault reason 02] Present bit in context entry is clear
[ 220.758012] vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
[ 220.761879] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xc298

$ qm config 105
boot: order=virtio0
cores: 4
hostpci0: 0000:00:02.0,legacy-igd=1,romfile=intel_hd500_j3455_fixed.rom
ide2: none,media=cdrom
memory: 4096
meta: creation-qemu=6.1.0,ctime=1643820398
name: ubudesktop-legacy
numa: 0
ostype: l26
parent: unmodified
scsihw: virtio-scsi-pci
sockets: 1
vga: none
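Side note on the ROM error above: a valid PCI option ROM starts with the bytes 55 aa, so the romfile can be sanity-checked on the host before suspecting the kernel. A quick check, assuming the file is the one referenced in the config above:

Bash:
# print the first two bytes of the ROM file; a valid option ROM shows "000000 55 aa"
od -A x -t x1 -N 2 /usr/share/kvm/intel_hd500_j3455_fixed.rom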
 
I just updated to kernel 5.13.19 and now have the same problem.
I have had an H310 passed through to TrueNAS for a long time now, and it worked without problems until the update.
When I try to start the VM it says:

Code:
TASK ERROR: Cannot bind 0000:08:00.0 to vfio
Error: unable to read tail (got 0 bytes)

I hope this gets fixed soon.
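In the meantime, it might be worth checking whether the bind fails the same way when done by hand via sysfs instead of through qm. This is only a rough sketch, assuming 0000:08:00.0 is the H310 and the vfio-pci module is available:

Bash:
modprobe vfio-pci
# request vfio-pci for this device, detach the current driver (if any), then re-probe
echo vfio-pci > /sys/bus/pci/devices/0000:08:00.0/driver_override
if [ -e /sys/bus/pci/devices/0000:08:00.0/driver ]; then
    echo 0000:08:00.0 > /sys/bus/pci/devices/0000:08:00.0/driver/unbind
fi
echo 0000:08:00.0 > /sys/bus/pci/drivers_probe
dmesg | tail -n 20    # look for vfio-pci / DMAR messages explaining the failure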
 

Same problem here. For the time being I have used the GRUB bootloader to boot into kernel 5.13.19-3-pve and all is well.
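If anyone wants to make that GRUB choice stick across reboots, one option is to point GRUB_DEFAULT at the older kernel's menu entry. The entry names below are only an assumption; verify them against the menuentry/submenu titles in /boot/grub/grub.cfg on your own host:

Bash:
# 1) in /etc/default/grub, select the entry by title (verify the exact names first):
#    GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.13.19-3-pve"
# 2) then regenerate the GRUB config:
update-grub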
 
I also tested kernel version 5.13.14-1-pve.

But I still get the error after qm start 110:

kvm: -device vfio-pci,host=0000:00:02.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:00:02.0: error getting device from group 1: Invalid argument
Verify all devices in group 1 are bound to vfio-<bus> or pci-stub and not already in use
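The "Verify all devices in group 1..." part can be checked directly in sysfs. A small sketch that prints each device in IOMMU group 1 together with the driver it is currently bound to:

Bash:
for dev in /sys/kernel/iommu_groups/1/devices/*; do
    addr=$(basename "$dev")
    if [ -e "$dev/driver" ]; then
        drv=$(basename "$(readlink -f "$dev/driver")")
    else
        drv="none"
    fi
    echo "$addr -> $drv"    # every entry should show vfio-pci (or pci-stub) before the VM starts
done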
 
I found the solution to this problem here:

https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF

First, check which PCIe bridge your graphics card is connected to:

Code:
root@Explore:~# lspci -t
-[0000:00]-+-00.0
           +-00.2
           +-01.0
           +-01.1-[01]----00.0
           +-01.2-[02-27]--+-00.0
           |               +-00.1
           |               \-00.2-[03-27]--+-00.0-[04-05]--+-00.0
           |                               |               \-00.1
           |                               +-04.0-[06-26]----00.0
           |                               \-08.0-[27]----00.0
           +-02.0
           +-03.0
           +-03.1-[28]--+-00.0
           |            \-00.1
           +-04.0
On my machine, the graphics card is on bus 28, so the PCIe bridge is 03.1:
28:00.0 VGA compatible controller: NVIDIA Corporation Device 2414 (rev a1)
Then execute the commands below, replacing [path to PCIe Bridge] with your bridge's address:

Bash:
echo 1 > /sys/bus/pci/devices/[path to PCIe Bridge]/remove
echo 1 > /sys/bus/pci/rescan

This should fix it.
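For the example above (graphics card on bus 28 behind bridge 03.1), that would presumably be the following; double-check the bridge address on your own machine with lspci -t first:

Bash:
echo 1 > /sys/bus/pci/devices/0000:00:03.1/remove
echo 1 > /sys/bus/pci/rescan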
 
I tested your fix, but it doesn't work.

Bash:
root@nas:~# echo 1 > /sys/bus/pci/devices/0000\:00\:02.0/remove
root@nas:~# echo 1 > /sys/bus/pci/rescan
root@nas:~# qm start 110
kvm: -device vfio-pci,host=0000:00:02.0,id=hostpci0,bus=pci.0,addr=0x10: vfio 0000:00:02.0: failed to setup container for group 1: Failed to set iommu for container: Device or resource busy
start failed: QEMU exited with code 1
 
Can you send the lspci -t result?
 
Bash:
root@nas:~# lspci
00:00.0 Host bridge: Intel Corporation Gemini Lake Host Bridge (rev 03)
00:00.1 Signal processing controller: Intel Corporation Celeron/Pentium Silver Processor Dynamic Platform and Thermal Framework Processor Participant (rev 03)
00:02.0 VGA compatible controller: Intel Corporation GeminiLake [UHD Graphics 600] (rev 03)
00:0e.0 Audio device: Intel Corporation Celeron/Pentium Silver Processor High Definition Audio (rev 03)
00:0f.0 Communication controller: Intel Corporation Celeron/Pentium Silver Processor Trusted Execution Engine Interface (rev 03)
00:12.0 SATA controller: Intel Corporation Celeron/Pentium Silver Processor SATA Controller (rev 03)
00:13.0 PCI bridge: Intel Corporation Gemini Lake PCI Express Root Port (rev f3)
00:13.1 PCI bridge: Intel Corporation Gemini Lake PCI Express Root Port (rev f3)
00:13.2 PCI bridge: Intel Corporation Gemini Lake PCI Express Root Port (rev f3)
00:13.3 PCI bridge: Intel Corporation Gemini Lake PCI Express Root Port (rev f3)
00:15.0 USB controller: Intel Corporation Celeron/Pentium Silver Processor USB 3.0 xHCI Controller (rev 03)
00:1f.0 ISA bridge: Intel Corporation Celeron/Pentium Silver Processor LPC Controller (rev 03)
00:1f.1 SMBus: Intel Corporation Celeron/Pentium Silver Processor Gaussian Mixture Model (rev 03)
01:00.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
02:01.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
02:03.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
02:05.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
02:07.0 PCI bridge: ASMedia Technology Inc. ASM1184e PCIe Switch Port
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 07)
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 07)
05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 07)
06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 07)
08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
09:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)

Bash:
root@nas:~# lspci -t
-[0000:00]-+-00.0
           +-00.1
           +-02.0
           +-0e.0
           +-0f.0
           +-12.0
           +-13.0-[01-06]----00.0-[02-06]--+-01.0-[03]----00.0
           |                               +-03.0-[04]----00.0
           |                               +-05.0-[05]----00.0
           |                               \-07.0-[06]----00.0
           +-13.1-[07]--
           +-13.2-[08]----00.0
           +-13.3-[09]----00.0
           +-15.0
           +-1f.0
           \-1f.1
 
Everything is on [0000:00], so removing that won't be feasible :(
Have you tried enabling UEFI CSM in the BIOS?
 
It should not really be a problem with the BIOS. All kernel versions up to 5.11.22 work fine. There must be a change in the kernel after 5.11.22 that caused this passthrough problem. I think it is specifically in the Intel iGPU handling, because it seems to me that only some Intel iGPUs (the Gemini Lake and Apollo Lake generations) are affected.
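If it really is a regression affecting these iGPUs, it might help to compare what the host reports on a working and a broken kernel. A generic way to see which driver currently owns the iGPU and the related kernel messages (nothing Proxmox-specific, just ordinary lspci/dmesg):

Bash:
lspci -nnk -s 00:02.0                             # the "Kernel driver in use" line should show vfio-pci
dmesg | grep -iE 'i915|vfio|dmar' | tail -n 40    # recent iGPU / VFIO / IOMMU messages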
 
6 and 9 are also affected, and no, the rescan trick doesn't work. With a TB unit it passes through directly. Again: install the 7.1 ISO, configure it, and don't run any updates if you want something that works. It's strange that they haven't tested the ISO, since one of the main points of virtualization is passing through PCI devices.
 
update:

Code:
root@nas:~# uname -r
5.15.35-2-pve
root@nas:~# qm start 110
kvm: -device vfio-pci,host=0000:00:02.0,id=hostpci0,bus=pci.0,addr=0x10: vfio 0000:00:02.0: error getting device from group 1: Invalid argument
Verify all devices in group 1 are bound to vfio-<bus> or pci-stub and not already in use
start failed: QEMU exited with code 1
root@nas:~#
 
