iGPU Passthrough

crazywolf13 · Oct 15, 2023
Hi
I'd like to pass through my iGPU (UHD 530) to a Windows VM, but the GPU does not show up in Windows and I get this error:
```
[86811.580310] e1000e 0000:00:1f.6 eno1: left promiscuous mode
[86825.084932] tap117i0: entered promiscuous mode
[86825.135934] vmbr0: port 2(fwpr117p0) entered blocking state
[86825.135939] vmbr0: port 2(fwpr117p0) entered disabled state
[86825.135953] fwpr117p0: entered allmulticast mode
[86825.135990] fwpr117p0: entered promiscuous mode
[86825.136193] e1000e 0000:00:1f.6 eno1: entered promiscuous mode
[86825.148529] vmbr0: port 2(fwpr117p0) entered blocking state
[86825.148534] vmbr0: port 2(fwpr117p0) entered forwarding state
[86825.166759] fwbr117i0: port 1(fwln117i0) entered blocking state
[86825.166764] fwbr117i0: port 1(fwln117i0) entered disabled state
[86825.166774] fwln117i0: entered allmulticast mode
[86825.166809] fwln117i0: entered promiscuous mode
[86825.166837] fwbr117i0: port 1(fwln117i0) entered blocking state
[86825.166839] fwbr117i0: port 1(fwln117i0) entered forwarding state
[86825.176158] fwbr117i0: port 2(tap117i0) entered blocking state
[86825.176163] fwbr117i0: port 2(tap117i0) entered disabled state
[86825.176173] tap117i0: entered allmulticast mode
[86825.176224] fwbr117i0: port 2(tap117i0) entered blocking state
[86825.176226] fwbr117i0: port 2(tap117i0) entered forwarding state
[86825.234861] DMAR: DRHD: handling fault status reg 2
[86825.234865] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x0 [fault reason 0x02] Present bit in context entry is clear
[86827.356785] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff
```

I followed this guide:
https://3os.org/infrastructure/prox...oxmox-configuration-for-igpu-full-passthrough
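For context, the host-side changes that kind of guide walks you through are roughly these (just a sketch; the device ID below is an example, check yours with `lspci -nn -s 00:02.0`, and older kernels also want vfio_virqfd in /etc/modules):

```
# /etc/default/grub -- enable the IOMMU on the kernel command line, then run update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# /etc/modprobe.d/pve-blacklist.conf -- keep the host i915 driver off the iGPU
blacklist i915

# /etc/modprobe.d/vfio.conf -- bind the iGPU to vfio-pci (ID is an example, take it from lspci -nn)
options vfio-pci ids=8086:1912

# apply and reboot
update-initramfs -u -k all
```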

dmesg IOMMU output:

```
root@lenovo3:~# dmesg | grep -e DMAR -e IOMMU
[ 0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[ 0.008411] ACPI: DMAR 0x00000000BC468750 0000A8 (v01 LENOVO TC-FW 00001B10 INTL 00000001)
[ 0.008438] ACPI: Reserving DMAR table memory at [mem 0xbc468750-0xbc4687f7]
[ 0.051646] DMAR: IOMMU enabled
[ 0.146734] DMAR: Host address width 39
[ 0.146735] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.146745] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 7e3ff0505e
[ 0.146748] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.146751] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[ 0.146752] DMAR: RMRR base: 0x000000bc184000 end: 0x000000bc1a3fff
[ 0.146754] DMAR: RMRR base: 0x000000bd800000 end: 0x000000bfffffff
[ 0.146756] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.146757] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.146758] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.148370] DMAR-IR: Enabled IRQ remapping in x2apic mode
[ 0.397165] DMAR: No ATSR found
[ 0.397166] DMAR: No SATC found
[ 0.397167] DMAR: IOMMU feature fl1gp_support inconsistent
[ 0.397168] DMAR: IOMMU feature pgsel_inv inconsistent
[ 0.397169] DMAR: IOMMU feature nwfs inconsistent
[ 0.397170] DMAR: IOMMU feature eafs inconsistent
[ 0.397171] DMAR: IOMMU feature prs inconsistent
[ 0.397171] DMAR: IOMMU feature nest inconsistent
[ 0.397172] DMAR: IOMMU feature mts inconsistent
[ 0.397173] DMAR: IOMMU feature sc_support inconsistent
[ 0.397174] DMAR: IOMMU feature dev_iotlb_support inconsistent
[ 0.397175] DMAR: dmar0: Using Queued invalidation
[ 0.397178] DMAR: dmar1: Using Queued invalidation
[ 0.397535] DMAR: Intel(R) Virtualization Technology for Directed I/O
[ 40.883767] DMAR: DRHD: handling fault status reg 2
[ 40.883772] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x0 [fault reason 0x02] Present bit in context entry is clear
[ 119.642965] DMAR: DRHD: handling fault status reg 2
[ 119.642970] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x0 [fault reason 0x02] Present bit in context entry is clear
[ 394.387258] DMAR: DRHD: handling fault status reg 2
[ 394.387262] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x0 [fault reason 0x02] Present bit in context entry is clear
```
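For reference, this is the usual loop to check which IOMMU group 00:02.0 ends up in (I'm not sure how relevant the grouping is here):

```
# list every IOMMU group and the devices in it
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -n "    "
        lspci -nns "${d##*/}"
    done
done
```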
 
The internet is sadly full of similar issues. I also just pissed away my weekend trying to get iGPU passthrough working, without any luck; it works for neither a Windows guest nor a Linux guest.

There is this line in your log:
"Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff"

I have a similar one, and if I understand it correctly this alone prevents passthrough from working: for some reason the host cannot present a valid option ROM (VROM) to the guest, so the PCI ROM header signature check fails. (In my logs I found messages about shadowed ROMs, and I think I managed to dump and check that content, which seemed to confirm that the ROM is somehow bogus.)
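For what it's worth, the dump attempt I mean was the usual sysfs route, roughly like this (sketched from memory, and I can't promise it behaves the same for an iGPU):

```
# read the (shadowed) expansion ROM of the iGPU via sysfs, on the host
cd /sys/bus/pci/devices/0000:00:02.0
echo 1 > rom            # enable reads of the ROM BAR
cat rom > /tmp/igd.rom
echo 0 > rom            # disable it again
# a valid option ROM starts with the bytes 0x55 0xaa
xxd /tmp/igd.rom | head -n 1
```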

If you had a correct ROM file, you could set a romfile= parameter in the VM configuration, but I couldn't dump the ROM, and from what I read that isn't possible with an iGPU for some reason (possibly connected to why the shadowed ROM is corrupt?). There is a GitHub repo with ROMs, but it looks like a fairly convoluted solution that requires a lot of changes and relies on legacy methods:
https://github.com/gangqizai/igd
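Just to spell out what the romfile= route would look like if you did get a usable image (a sketch; the file name is made up, and the legacy-igd flag is how I read that repo, not something I've gotten to work myself):

```
# copy the ROM to where Proxmox/QEMU looks for romfile
cp igd.rom /usr/share/kvm/igd.rom

# /etc/pve/qemu-server/<vmid>.conf -- reference it on the hostpci line
hostpci0: 0000:00:02.0,legacy-igd=1,romfile=igd.rom
```

If I remember the Proxmox docs right, legacy-igd additionally requires the i440fx machine type and the iGPU as the primary and only display, which is part of why the whole approach feels convoluted.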

There are a lot of guides and suggestions, but my personal impression is that, sadly, there are very few (if any) authoritative voices with a thorough enough understanding to dependably fix these problems.

This blog post suggests that there are issues with some kernel versions, and I tried to use the one suggested, but it didn't help:
https://www.derekseaman.com/2023/11...u-vt-d-passthrough-with-intel-alder-lake.html
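In case the kernel angle is still worth a try for you: pinning a specific kernel on Proxmox is not much work (the version string below is just a placeholder, use whatever the post recommends):

```
# see which kernels are installed / bootable
proxmox-boot-tool kernel list
# pin the one you want (use an exact version string from the list above)
proxmox-boot-tool kernel pin 6.2.16-19-pve
reboot
# undo the pin later
proxmox-boot-tool kernel unpin
```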
 
Thanks for your addition.

That linked ROM repo seems to be a bit too much Chinese for me :/

Yeah, I've come to the same conclusion; sadly there are way too many different perspectives, and nearly every wiki gave me completely different setup steps.


If setting up iGPU passthrough involves so many things that could break something (kernel downgrades, driver blacklisting, and so on), then for me it's barely worth the hassle and I'll just live with the poor performance. A bit sad, but oh well.
 
