iGPU Passthrough Intel J4105 (Gemini Lake) UHD Graphics 600

DaCHack

Member
Dec 2, 2021
Hi,

I am struggling to use iGPU passthrough with display output in a Debian VM on Proxmox 8.1.3 with Kernel 6.5.11-7-pve.

I followed these steps, and additionally added the audio modules to the blacklist on the host so that I can later pass through audio as well:
https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/

At boot, the display output of the host stops when the initial ramdisk is loaded (expected, as I see it). The display simply freezes.
When I start the VM, the display (or at least the display output) turns off and I never see a console of the guest, regardless of whether I run the iGPU as "Primary GPU" for the VM or not.

Output of dmesg | grep -e DMAR -e IOMMU shows some errors at boot, but googling them suggests they might not be the root cause of the issue:
Bash:
[    0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[    0.010706] ACPI: DMAR 0x00000000727095D0 0000A8 (v01 INTEL  GLK-SOC  00000003 BRXT 0100000D)
[    0.010790] ACPI: Reserving DMAR table memory at [mem 0x727095d0-0x72709677]
[    0.040517] DMAR: IOMMU enabled
[    0.142508] DMAR: Host address width 39
[    0.142511] DMAR: DRHD base: 0x000000fed64000 flags: 0x0
[    0.142526] DMAR: dmar0: reg_base_addr fed64000 ver 1:0 cap 1c0000c40660462 ecap 9e2ff0505e
[    0.142533] DMAR: DRHD base: 0x000000fed65000 flags: 0x1
[    0.142544] DMAR: dmar1: reg_base_addr fed65000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.142550] DMAR: RMRR base: 0x0000007268d000 end: 0x000000726acfff
[    0.142554] DMAR: RMRR base: 0x00000077800000 end: 0x0000007fffffff
[    0.142559] DMAR-IR: IOAPIC id 1 under DRHD base  0xfed65000 IOMMU 1
[    0.142563] DMAR-IR: HPET id 0 under DRHD base 0xfed65000
[    0.142566] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.144707] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.445343] DMAR: No ATSR found
[    0.445345] DMAR: No SATC found
[    0.445348] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.445350] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.445353] DMAR: IOMMU feature nwfs inconsistent
[    0.445355] DMAR: IOMMU feature eafs inconsistent
[    0.445356] DMAR: IOMMU feature prs inconsistent
[    0.445358] DMAR: IOMMU feature nest inconsistent
[    0.445360] DMAR: IOMMU feature mts inconsistent
[    0.445361] DMAR: IOMMU feature sc_support inconsistent
[    0.445363] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.445366] DMAR: dmar0: Using Queued invalidation
[    0.445373] DMAR: dmar1: Using Queued invalidation
[    0.446640] DMAR: Intel(R) Virtualization Technology for Directed I/O
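In case it helps with the grouping question: the IOMMU groups can be listed directly from sysfs (standard kernel paths; note that the ACS override above can split groups that the hardware would otherwise share):

```shell
# print one line per device, grouped by IOMMU group number;
# 0000:00:02.0 (the iGPU) should ideally sit in a group of its own
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: %s\n' "$g" "${d##*/}"
done | sort -V
```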

After the VM starts (with the iGPU not set as primary GPU), the same command additionally shows this error message:
Bash:
[  548.980027] DMAR: DRHD: handling fault status reg 2
[  548.980044] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x0 [fault reason 0x02] Present bit in context entry is clear

Trying afterwards with this VM, as well as with another VM that just boots a live CD from ISO but has the passed-through iGPU set as "Primary GPU", produces additional errors:
Bash:
[ 1442.230803] DMAR: DRHD: handling fault status reg 2
[ 1442.230826] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x0 [fault reason 0x02] Present bit in context entry is clear
[ 1497.651088] DMAR: DRHD: handling fault status reg 2
[ 1497.651105] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x78021000 [fault reason 0x05] PTE Write access is not set
[ 1497.651132] DMAR: DRHD: handling fault status reg 3
[ 1497.651139] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x78021000 [fault reason 0x05] PTE Write access is not set
[ 1497.651296] DMAR: DRHD: handling fault status reg 2
[ 1497.651303] DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x78022000 [fault reason 0x05] PTE Write access is not set
[ 1497.651324] DMAR: DRHD: handling fault status reg 3
[ 1502.654144] DMAR: DRHD: handling fault status reg 3
[ 1502.654171] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x78220000 [fault reason 0x06] PTE Read access is not set
[ 1502.654922] DMAR: DRHD: handling fault status reg 3
[ 1502.654941] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x78260000 [fault reason 0x06] PTE Read access is not set
[ 1502.655761] DMAR: DRHD: handling fault status reg 3
[ 1502.655794] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x782a0000 [fault reason 0x06] PTE Read access is not set
[ 1502.656560] DMAR: DRHD: handling fault status reg 3
[ 1507.658129] DMAR: DRHD: handling fault status reg 3
[ 1507.658150] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x783e0000 [fault reason 0x06] PTE Read access is not set
[ 1507.658875] DMAR: DRHD: handling fault status reg 3
[ 1507.658889] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x78420000 [fault reason 0x06] PTE Read access is not set
[ 1507.659714] DMAR: DRHD: handling fault status reg 3
[ 1507.659743] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x78460000 [fault reason 0x06] PTE Read access is not set
[ 1507.660514] DMAR: DRHD: handling fault status reg 3
[ 1512.662843] DMAR: DRHD: handling fault status reg 3
[ 1512.662872] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x78020000 [fault reason 0x06] PTE Read access is not set
[ 1512.663489] DMAR: DRHD: handling fault status reg 3
[ 1512.663501] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x780e0000 [fault reason 0x06] PTE Read access is not set
[ 1512.664289] DMAR: DRHD: handling fault status reg 3
[ 1512.664302] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x78120000 [fault reason 0x06] PTE Read access is not set
[ 1512.665124] DMAR: DRHD: handling fault status reg 3
[ 1517.666696] DMAR: DRHD: handling fault status reg 3
[ 1517.666719] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x78260000 [fault reason 0x06] PTE Read access is not set
[ 1517.667484] DMAR: DRHD: handling fault status reg 3
[ 1517.667512] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x782a0000 [fault reason 0x06] PTE Read access is not set
[ 1517.668283] DMAR: DRHD: handling fault status reg 3
[ 1517.668312] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x782e0000 [fault reason 0x06] PTE Read access is not set
[ 1517.669088] DMAR: DRHD: handling fault status reg 3
[ 1522.670666] DMAR: DRHD: handling fault status reg 3
[ 1522.670693] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x78420000 [fault reason 0x06] PTE Read access is not set
[ 1522.671410] DMAR: DRHD: handling fault status reg 3
[ 1522.671429] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x78460000 [fault reason 0x06] PTE Read access is not set
[ 1522.672248] DMAR: DRHD: handling fault status reg 3
[ 1522.672284] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x784a0000 [fault reason 0x06] PTE Read access is not set
[ 1522.673009] DMAR: DRHD: handling fault status reg 3

/etc/default/grub:
Bash:
...
GRUB_CMDLINE_LINUX_DEFAULT="quiet nowatchdog ipv6.disable=1 nofb nomodeset disable_vga=1 intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init video=simplefb:off video=vesafb:off video=efifb:off video=vesa:off vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,nouveau,nvidia,nvidiafb,nvidia-gpu,snd_hda_intel,snd_soc_skl,snd_soc_avs,snd_sof_pci_intel_apl,snd_hda_codec_hdmi,i915 vfio-pci.ids=8086:3185,8086:3198"
GRUB_CMDLINE_LINUX="ipv6.disable=1"
...
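One general note on the file above: edits to /etc/default/grub only apply after the GRUB config is regenerated and the host rebooted. Standard commands, assuming the host actually boots via GRUB (hosts booting through systemd-boot use proxmox-boot-tool refresh instead):

```shell
update-grub        # rebuild /boot/grub/grub.cfg from /etc/default/grub
reboot
# after the reboot, verify the parameters reached the running kernel:
cat /proc/cmdline
```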

Bash:
cat /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
options kvm report_ignored_msrs=0

Bash:
cat /etc/modprobe.d/pve-blacklist.conf
# This file contains a list of modules which are not supported by Proxmox VE


# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
blacklist snd_soc_skl
blacklist snd_sof_pci
blacklist bluetooth

Any ideas where to start troubleshooting?
 
IIRC, it's not possible to pass through the Intel iGPU to a VM and get display output.
Passthrough only gives you HW acceleration ("Primary GPU" not ticked).
 
Your link also mentions that an old version of Proxmox is required.
In general, I have no problems with a Gen 11 iGPU. However, I have big problems with audio: on one PC it doesn't work at all, on another it only works until the VM is rebooted.
The old Intel graphics cards up to Gen 11 also worked with GVT-g. Maybe that is worth a try?
 
Indeed a quite old version... I just did not find it plausible that a single, very old version supports it and all following versions dropped the feature. I saw a fairly recent report on this issue in the Proxmox Bugzilla the other day and tried to register there, but I never got a verification email, so I was left stuck on this.

Also, for security reasons, I would like to avoid reverting to this old version of Proxmox.

Regarding GVT-g: this site does not list Gemini Lake as a supported platform: https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-split-passthrough/
Are you confident that this will also work with Gemini Lake, and then even with display output from the VM, not only from Proxmox? Maybe I'll try it on the weekend when I get back to the device.
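For what it's worth, GVT-g requires the i915 driver to stay loaded on the host (so it conflicts with the blacklist used for full passthrough), plus i915.enable_gvt=1 on the kernel command line. Whether the hardware supports it can be checked via sysfs:

```shell
# with i915 loaded and i915.enable_gvt=1 set, the supported mediated
# device types show up here; a missing or empty directory means no GVT-g
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
```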
 
Hi @asyncx ,

sorry for the late reply. I did not make it to my test system for a while.
I took all my notes here: https://github.com/DaCHack/Futro-S740-Homeserver

Please let me know if this helps and maybe can even be optimized.
Also happy to incorporate any findings you might have come across!

I am still trying to find a way to make this work with a newer kernel. This approach may also struggle with audio via HDMI/DP.
 
Update: I added a full guide that works with the latest Proxmox version and kernel, and even documented all my steps to prepare a fully dockerized home server and HTPC on this platform.
Feedback is always welcome, ideally on the GitHub issues page or as a pull request.
 
Thanks a lot, DaCHack, for the valuable info on your GitHub. Based on it I managed to do PCI passthrough on my Fujitsu S940 (J5005 CPU / UHD 605) with LibreELEC 12.01. Just wanted to know: do you see any errors on your Proxmox server after running a node using the video card?
In my case I got these:
Code:
DMAR: DRHD: handling fault status reg 2
DMAR: [DMA Write NO_PASID] Request device [00:02.0] fault addr 0x0 [fault reason 0x02] Present bit in context entry is clear
 
Hi @vertycall,
not anymore. I had them in the beginning with an old kernel and different settings (see my first post).
Not sure what helped in the end, but check the VM settings again. In particular, deactivate memory ballooning; I have the feeling this was a major part of the trick.
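(For reference, ballooning can also be toggled from the Proxmox CLI; the VM ID 100 below is just an example:)

```shell
qm set 100 --balloon 0        # disable the ballooning device for VM 100
qm config 100 | grep balloon  # confirm the setting took effect
```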

The link at the top of my GitHub shows a couple of checks and troubleshooting steps you may want to go through in addition.
 
@DaCHack
Thanks for the guide; I have been trying to get this working for some time.
Did you try copying the i915_vbt from the host (before passthrough is activated) to the VM guest, as described in this thread
and the linked Bugzilla?
Thanks for the hint. I am still on 6.8.12-2 and it is working fine. Seems like I might run into the same issue later.

Why are you guys already running 6.10? Proxmox tells me that I am all up to date with 6.8.12…
 
I think it is an issue with the VM kernel version, not the Proxmox hypervisor kernel version:
6.9.12 is working correctly.
From 6.10-rc1 onward there is a black screen and HDMI does not detect any output.
 
Oh, good to know. My VMs with Debian stable are still at 6.1.206-2, so I have not experienced this yet. Are you running Debian testing or something else?
 
