[solved] iGPU passthrough trouble - cannot access '/dev/dri/': No such file or directory

arsaboo

Member
Mar 3, 2022
I have an Intel NUC with built-in HD Graphics 530. I have tried hard but cannot get GPU passthrough to work. I have included as much detail as possible below and will be happy to supply anything else that may help:

Code:
root@jupiter:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 94
model name      : Intel(R) Core(TM) i5-6600T CPU @ 2.70GHz
stepping        : 3
microcode       : 0xec
cpu MHz         : 2700.000
cache size      : 6144 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
vmx flags       : vnmi preemption_timer invvpid ept_x_only ept_ad ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple shadow_vmcs pml
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit srbds
bogomips        : 5399.81
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

The weird thing is that /dev/dri does not even show up on the host, even though lspci shows the GPU:

Code:
root@jupiter:~# ls /dev/dri

ls: cannot access '/dev/dri': No such file or directory



root@jupiter:~# lspci

00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
00:14.0 USB controller: Intel Corporation 200 Series/Z370 Chipset Family USB 3.0 xHCI Controller
00:14.2 Signal processing controller: Intel Corporation 200 Series PCH Thermal Subsystem
00:15.0 Signal processing controller: Intel Corporation 200 Series PCH Serial IO I2C Controller #0
00:16.0 Communication controller: Intel Corporation 200 Series PCH CSME HECI #1
00:16.3 Serial controller: Intel Corporation Device a2bd
00:17.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode]
00:1b.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #17 (rev f0)
00:1c.0 PCI bridge: Intel Corporation 200 Series PCH PCI Express Root Port #8 (rev f0)
00:1f.0 ISA bridge: Intel Corporation 200 Series PCH LPC Controller (Q270)
00:1f.2 Memory controller: Intel Corporation 200 Series/Z370 Chipset Family Power Management Controller
00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio
00:1f.4 SMBus: Intel Corporation 200 Series/Z370 Chipset Family SMBus Controller
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (5) I219-LM
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
02:00.0 Network controller: Intel Corporation Wireless 8265 / 8275 (rev 78)

root@jupiter:~# dmesg | grep -i -e DMAR -e IOMMU
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.15.30-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on video=efifb:off
[    0.010003] ACPI: DMAR 0x00000000CAE41240 0000CC (v01 INTEL  SKL      00000001 INTL 00000001)
[    0.010035] ACPI: Reserving DMAR table memory at [mem 0xcae41240-0xcae4130b]
[    0.054862] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.15.30-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on video=efifb:off
[    0.054916] DMAR: IOMMU enabled
[    0.153358] DMAR: Host address width 39
[    0.153359] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.153364] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 7e3ff0505e
[    0.153367] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.153370] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.153372] DMAR: RMRR base: 0x000000cac4f000 end: 0x000000cac6efff
[    0.153374] DMAR: RMRR base: 0x000000cd800000 end: 0x000000cfffffff
[    0.153375] DMAR: ANDD device: 1 name: \_SB.PCI0.I2C0
[    0.153377] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.153378] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.153379] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.154939] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.330013] iommu: Default domain type: Translated
[    0.330013] iommu: DMA domain TLB invalidation policy: lazy mode
[    0.369156] DMAR: ACPI device "device:77" under DMAR at fed91000 as 00:15.0
[    0.369164] DMAR: No ATSR found
[    0.369164] DMAR: No SATC found
[    0.369166] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.369167] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.369168] DMAR: IOMMU feature nwfs inconsistent
[    0.369168] DMAR: IOMMU feature eafs inconsistent
[    0.369169] DMAR: IOMMU feature prs inconsistent
[    0.369170] DMAR: IOMMU feature nest inconsistent
[    0.369170] DMAR: IOMMU feature mts inconsistent
[    0.369171] DMAR: IOMMU feature sc_support inconsistent
[    0.369171] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.369172] DMAR: dmar0: Using Queued invalidation
[    0.369175] DMAR: dmar1: Using Queued invalidation
[    0.369418] pci 0000:00:00.0: Adding to iommu group 0
[    0.369428] pci 0000:00:02.0: Adding to iommu group 1
[    0.369440] pci 0000:00:14.0: Adding to iommu group 2
[    0.369446] pci 0000:00:14.2: Adding to iommu group 2
[    0.369455] pci 0000:00:15.0: Adding to iommu group 3
[    0.369467] pci 0000:00:16.0: Adding to iommu group 4
[    0.369473] pci 0000:00:16.3: Adding to iommu group 4
[    0.369480] pci 0000:00:17.0: Adding to iommu group 5
[    0.369489] pci 0000:00:1b.0: Adding to iommu group 6
[    0.369498] pci 0000:00:1c.0: Adding to iommu group 7
[    0.369516] pci 0000:00:1f.0: Adding to iommu group 8
[    0.369522] pci 0000:00:1f.2: Adding to iommu group 8
[    0.369529] pci 0000:00:1f.3: Adding to iommu group 8
[    0.369538] pci 0000:00:1f.4: Adding to iommu group 8
[    0.369545] pci 0000:00:1f.6: Adding to iommu group 8
[    0.369553] pci 0000:01:00.0: Adding to iommu group 9
[    0.369562] pci 0000:02:00.0: Adding to iommu group 10
[    0.370661] DMAR: Intel(R) Virtualization Technology for Directed I/O
 
Is the i915 module loaded? Try 'modprobe i915'.
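For example, a quick way to check (assuming the iGPU at 00:02.0, as in the lspci output above):

Code:
# is the i915 kernel module loaded at all?
lsmod | grep i915

# which driver is currently bound to the iGPU? "Kernel driver in use: vfio-pci"
# would mean a passthrough config (early bind/blacklist) is grabbing it.
lspci -nnk -s 00:02.0

# try loading i915 manually; /dev/dri should appear once it binds.
modprobe i915
ls /dev/dri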
 
Ahh... that was it. I think I had to remove the config changes I made to enable passthrough to a VM; those changes were preventing the iGPU from being accessible on the host. Thanks!
 
Can you please elaborate on what you did to get it working? I'm in the same boat.
 
I think you would just need to reverse the changes described here: https://pve.proxmox.com/wiki/Pci_passthrough#Enable_the_IOMMU
From what I've read, it's enabling the IOMMU in particular that causes /dev/dri to disappear.
It's kind of a bummer, though, as it means you can't run a Proxmox host that passes PCI devices through to a VM while also using hardware acceleration on the host itself.
If anyone knows of a way to accomplish that, I would be very interested as well, since I'm in the same increasingly crowded boat.
 
The IOMMU itself should not interfere with that, but early binding the device to vfio-pci or blacklisting the device driver (which is often done in combination with enabling the IOMMU) can cause this.
Just don't blacklist the drivers that the host or your containers need (like some guides tell you to); only early bind the devices you want to PCI(e) passthrough (and make sure vfio-pci loads first), and you should be fine. If that is not the case, please let me know.
 
@leesteken, how would you easily bind without blacklisting the drivers? I don't recall seeing any guides that do that.
You do this by putting the following in a file (whose name must end in .conf) in the directory /etc/modprobe.d/:

Code:
# Bind the device(s) to vfio_pci instead of the kernel driver.
options vfio_pci ids=XYZW:ABCD
# Make sure that vfio_pci loads before the kernel driver.
softdep KERNEL_DRIVER pre: vfio_pci
You can find the XYZW:ABCD numeric vendor:device ID (which is not the PCI address, and applies to all devices with the same ID) and the KERNEL_DRIVER with lspci -nnk.
Don't forget to run update-initramfs -u and possibly other steps that are in the Proxmox manual.
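Putting it together, a rough workflow might look like this (a sketch: the file name is arbitrary as long as it ends in .conf, XYZW:ABCD and KERNEL_DRIVER are placeholders for your own device, and the iGPU at 00:02.0 is deliberately left alone so i915 keeps providing /dev/dri on the host):

Code:
# 1. Identify the device you want to pass through (not the iGPU) and note its
#    numeric [vendor:device] ID and its "Kernel driver in use" line.
lspci -nnk

# 2. Create e.g. /etc/modprobe.d/vfio.conf containing (placeholders replaced):
#      options vfio_pci ids=XYZW:ABCD
#      softdep KERNEL_DRIVER pre: vfio_pci

# 3. Rebuild the initramfs and reboot.
update-initramfs -u
reboot

# 4. Afterwards the passed-through device should report
#    "Kernel driver in use: vfio-pci", while the iGPU still uses i915.
lspci -nnk
ls -l /dev/dri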
 
Thanks for posting a lot of details here. I just became unstuck and was able to get the GPU working in my Plex LXC container. Thought I'd share how it finally worked.

The Wiki link above seems to have had the steps removed from the article.

My steps (a rough sketch of the commands follows below):
1. Remove the IOMMU references from the GRUB bootloader config per:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough

2. Update GRUB with 'update-grub' per:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot_edit_kernel_cmdline

3. Reboot.

4. Check my container; it has access to the GPU. Profit $$$
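Roughly what that looks like on a GRUB setup (a sketch based on the kernel command line shown earlier in this thread; the exact GRUB_CMDLINE_LINUX_DEFAULT contents will differ per system, and systemd-boot installs edit /etc/kernel/cmdline instead):

Code:
# 1. Edit /etc/default/grub and drop the passthrough-related options, e.g. change
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off"
#    back to something like
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet"
nano /etc/default/grub

# 2. Apply the change and reboot.
update-grub
reboot

# 3. Verify on the host that the device nodes are back.
ls -l /dev/dri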
 
