Intel NUC iGPU passthrough: working in Linux guest but not in Windows 10 guest

Flatline

Hi everyone,
as per the subject, I am struggling with iGPU passthrough to a Windows 10 guest.
I have a NUC8i3BEH with an integrated Iris Plus 655. I already updated to the latest Proxmox and BIOS.

My configuration is as follows:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off vfio-pci.ids=8086:3ea5"

blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist snd_sof_pci
blacklist i915

agent: 1
bios: ovmf
boot: c
bootdisk: scsi0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: local-lvm:vm-100-disk-1,size=4M
hostpci0: 00:02,pcie=1,rombar=0,x-vga=1
ide2: local:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
machine: q35
memory: 8192
name: windows-garage
net0: virtio=1A:63:4B:38:D7:80,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsi0: local-lvm:vm-100-disk-0,discard=on,size=80G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=f2ed9134-9dc4-4fe9-a4a9-4c6ca10ad19b
sockets: 1
usb0: host=046d:c52b,usb3=1
usb1: host=3938:1166,usb3=1
usb2: host=1-4.1.4,usb3=1
vga: none
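
For reference, the standard host-side checks to confirm the iGPU is actually bound to vfio-pci before starting the VM (nothing NUC-specific; adjust the address/ID if yours differ):
Code:
# the iGPU at 00:02.0 should report "Kernel driver in use: vfio-pci", not i915
lspci -nnk -s 00:02.0
# IOMMU/DMAR should show up as enabled in the host kernel log
dmesg | grep -e DMAR -e IOMMU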


The Iris card is recognized by Windows 10; however, in Device Manager it shows the infamous "Error 43". I am attaching a screenshot taken via remote desktop (NoMachine, if that matters):

Annotazione 2020-06-23 192357.png

In an Ubuntu VM with the same hostpci configuration the passthrough works flawlessly.

I had a look around and tried various other configurations (most of them using the args: config parameter), but nothing worked.


A couple of threads (see: Thread_1 and Thread_2) point to an issue that is just being fixed in QEMU, but I am not sure it applies to me (and I still can't see why it would work in Linux but not in Windows).


So... can anyone help me? Thanks! :)
 
Hi Flatline,

I've got a NUC8i7BEH and managed to have it working… until Proxmox 6.2. I'm still looking for a stable solution.

You should not use:
Code:
hostpci0: 00:02,pcie=1,rombar=0,x-vga=1

But instead:
Code:
args: -device vfio-pci,host=00:02.0,addr=0x18,x-vga=on,x-igd-opregion=on
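
(For reference, a minimal sketch of how that args: line can be applied from the host shell, assuming VM ID 100; editing /etc/pve/qemu-server/100.conf directly works too:)
Code:
# sketch only -- replace 100 with your actual VMID
qm set 100 --args "-device vfio-pci,host=00:02.0,addr=0x18,x-vga=on,x-igd-opregion=on"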


Anyway, this config was working great with Proxmox 6.1, but since 6.2, if I don't do anything with the VM, after 1 or 2 hours the Windows igfx driver removes itself and I get the famous "Error 43". Then I need to remove the graphics card from the Windows VM and reboot.

Another problem since Proxmox 6.2: the CPU lockups came back (I don't know if you have the same issue). I did some tuning:
https://www.suse.com/support/kb/doc/?id=000018705

But then I get some flapping on my NIC, going down and up, when there are CPU-intensive tasks (Proxmox backup / starting the Windows VM).

I don't know if all of these are related to the same root cause...
Let me know if you see the same issues.
 
Anyway, this config was working great with Proxmox 6.1, but since 6.2, if I don't do anything with the VM, after 1 or 2 hours the Windows igfx driver removes itself and I get the famous "Error 43". Then I need to remove the graphics card from the Windows VM and reboot.

Thanks, I will try, but this is not going to work for me: I need the VM to be "always on and ready"... and as I have an Ubuntu VM working perfectly, for the moment I'll stick to that!


But then I get some flapping on my NIC, going down and up, when there are CPU-intensive tasks (Proxmox backup / starting the Windows VM).

I don't know if all of these are related to the same root cause...
Let me know if you see the same issues.

I haven't managed to run the Windows VM long enough to see that. I have noticed that when accessing it with NoMachine it is a bit unresponsive at startup, but as my NUC is far away and connected via powerline, that could be due to the slow connection.

In the Ubuntu VM I haven't seen any of these issues (and it has been running for weeks without reboots).

I may add that I had only seen the NIC go down when I tried to pass through the sound card, but that was my own fault (the sound card is in the same IOMMU group as the NIC).
 
I'm bumping this thread in the hope that someone has managed to solve the issue.

I should add that now (after updating Proxmox) even the Ubuntu VM is not working anymore: it displays the GDM login screen for a little while (though the system does not respond to mouse or keyboard) and then the image simply goes off, while the VM keeps running (I can SSH into it). In dmesg I once read a "GPU HANG" error.
 
Hello,

I don't see Error 43 anymore since the last update (5.4.44-2-pve), but I still get some random crashes (soft lockups). I'm not sure they are related to GPU passthrough.
Have you tried booting your VM with the nomodeset and/or i915.modeset=0 GRUB options?
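
(Assuming this is about the Ubuntu guest: inside that VM it would be something along these lines; a sketch only, assuming the guest boots via GRUB:)
Code:
# inside the Ubuntu guest, not on the Proxmox host
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset i915.modeset=0"
# then apply and reboot the guest
update-grub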
 
Hello,

I don't see Error 43 anymore since the last update, but I still get some random crashes (soft lockups). I'm not sure they are related to GPU passthrough.
Have you tried booting your VM with the nomodeset and/or i915.modeset=0 GRUB options?


Not yet, but in the many permutations I tried perhaps I missed that one.

Just to be sure, would you be so kind as to post your /etc/default/grub, /etc/modules, /etc/modprobe.d/*... as well as the VM config?
 
I don't run a Linux VM with GPU passthrough, only Win10.

I meant the configuration of the Proxmox machine :)

I just used this to add the GPU to the VM:
Code:
args: -device vfio-pci,host=00:02.0,addr=0x18,x-vga=on,x-igd-opregion=on

So basically you are not using the Proxmox UI to configure the passthrough ("hostpci0"), but manually adding the args: line to the VM config, correct?
 
Hi all, did someone finally get GPU passthrough to the VM working?

I have a NUC8i5BEK and have just now been able to pass the Intel Iris Plus Graphics 655 through to a Win10 VM.
But the GPU is not listed / not active, and therefore OpenGL is not available.

Has anyone gotten the GPU active within the VM? What step did I miss?

Below is my configuration and how it looks in the VM:

blacklist i915
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.enable_gvt=1 intel_iommu=on video=efifb:off vfio-pci.ids=8086:3ea5"

options vfio-pci ids=8086:3ea5 disable_vga=1

00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 08)
00:02.0 VGA compatible controller: Intel Corporation Iris Plus Graphics 655 (rev 01)
00:08.0 System peripheral: Intel Corporation Skylake Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Cannon Point-LP Thermal Controller (rev 30)
00:14.0 USB controller: Intel Corporation Cannon Point-LP USB 3.1 xHCI Controller (rev 30)
00:14.2 RAM memory: Intel Corporation Cannon Point-LP Shared SRAM (rev 30)
00:14.3 Network controller: Intel Corporation Cannon Point-LP CNVi [Wireless-AC] (rev 30)
00:16.0 Communication controller: Intel Corporation Cannon Point-LP MEI Controller (rev 30)
00:17.0 SATA controller: Intel Corporation Cannon Point-LP SATA Controller [AHCI Mode] (rev 30)
00:1c.0 PCI bridge: Intel Corporation Cannon Point-LP PCI Express Root Port (rev f0)
00:1c.4 PCI bridge: Intel Corporation Cannon Point-LP PCI Express Root Port (rev f0)
00:1d.0 PCI bridge: Intel Corporation Cannon Point-LP PCI Express Root Port (rev f0)
00:1d.6 PCI bridge: Intel Corporation Cannon Point-LP PCI Express Root Port (rev f0)
00:1f.0 ISA bridge: Intel Corporation Cannon Point-LP LPC Controller (rev 30)
00:1f.3 Audio device: Intel Corporation Cannon Point-LP High Definition Audio Controller (rev 30)
00:1f.4 SMBus: Intel Corporation Cannon Point-LP SMBus Controller (rev 30)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Point-LP SPI Controller (rev 30)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (6) I219-V (rev 30)
3b:00.0 Non-Volatile memory controller: Marvell Technology Group Ltd. Device 1092
3c:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS522A PCI Express Card Reader (rev 01)

agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 2
cpu: host,flags=+pcid
efidisk0: local-lvm:vm-400-disk-0,size=4M
hostpci0: 00:02,rombar=0,x-vga=1
ide2: none,media=cdrom
machine: q35
memory: 8192
name: Win10.Development.CLONE
net0: virtio=8A:99:26:20:1A:7E,bridge=vmbr0,firewall=1,tag=10
numa: 0
ostype: win10
scsi0: local-lvm:vm-400-disk-1,cache=writeback,discard=on,size=40G
scsihw: virtio-scsi-pci
smbios1: uuid=c111a877-ca1b-4caf-96c6-f983cbc5aed8
sockets: 2
vga: none
vmgenid: 12f6ea43-b8c7-4973-84d7-75912ed81292

GPU_passthrough.PNG
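
For completeness, the IOMMU group of the iGPU can be checked on the host with a standard sysfs lookup (a sketch, in case the grouping is part of the problem):
Code:
# devices sharing an IOMMU group with the iGPU at 00:02.0
ls /sys/bus/pci/devices/0000:00:02.0/iommu_group/devices/
# or dump all groups at once
find /sys/kernel/iommu_groups/ -type l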
 
Hi,

I managed to make it work on a NUC8i7BEH.
In my opinion, on the NUC the Intel IGD is not EFI-capable.
I managed to make it work with SeaBIOS but did not really try OVMF.

Everything is working fine as long as I don't upgrade the default Windows driver.
Any time I try to upgrade the IGD driver (with the one downloaded from the Intel support website) I get the famous Error 43.
 
@Adrianos712, I have the very similar NUC8i5BEK with an Intel Iris 655, and I have tried all possible ways to pass the iGPU through to a Windows 10 guest but couldn't make it work yet. I tried with OVMF and SeaBIOS but end up with the graphics card being recognized by the Windows guest; however, in Device Manager there is Code 43.

Could you please share the settings that worked for you, in the same way @TorstenGiese did before? And please also add what you set in the BIOS.

Also, can you please tell us whether you see the BIOS and GRUB on the Proxmox host when booting up? I see both, and then during boot, after GRUB, there are a few lines from the boot process, but then the screen goes black even before the guest VMs start. My current guess is that there is some conflict because the iGPU is already "initialized" by the host and then cannot be used/initialized by the guest system anymore.

Threads I already checked, but didn't resolve the issue for me:
- https://forum.proxmox.com/threads/help-needed-for-using-gpu-passthrough-on-intel-nuc.69561/
- https://forum.proxmox.com/threads/igd-passthrough-almost-working.60989/
- https://forum.proxmox.com/threads/guide-intel-intergrated-graphic-passthrough.30451/
- https://pve.proxmox.com/wiki/Pci_passthrough

However, I understand that something changed in the latest versions of Proxmox, as described here, which might prevent these older solution paths from succeeding.

These are my current settings:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream video=efifb:off,vesafb:off"
GRUB_CMDLINE_LINUX="vfio-pci.ids=8086:3ea5"
Adding vfio-pci.ids=8086:3ea5 solved some "PTE Write access is not set" errors I had, as suggested here.
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
blacklist sof_pci_dev
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
[ 0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[ 0.011223] ACPI: DMAR 0x0000000079E5D010 0000A8 (v01 INTEL NUC8i5BE 00000055 01000013)
[ 0.068703] DMAR: IOMMU enabled
[ 0.140708] DMAR: Host address width 39
[ 0.140710] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.140716] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[ 0.140717] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.140721] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[ 0.140722] DMAR: RMRR base: 0x00000079da4000 end: 0x00000079dc3fff
[ 0.140722] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
[ 0.140725] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.140725] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.140726] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.142809] DMAR-IR: Enabled IRQ remapping in x2apic mode
[ 1.235812] DMAR: No ATSR found
[ 1.235860] DMAR: dmar0: Using Queued invalidation
[ 1.235864] DMAR: dmar1: Using Queued invalidation
[ 1.247482] DMAR: Intel(R) Virtualization Technology for Directed I/O
Currently, I do not have a "vfio.conf" file, as per here, but before that I tried with this one:
options vfio-pci ids=8086:3ea5
options vfio-pci ids=8086:9dc8
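
(Side note, in case that file comes back into play: the form I usually see in the passthrough guides puts both IDs comma-separated on a single line and rebuilds the initramfs afterwards; a sketch, assuming the usual /etc/modprobe.d/vfio.conf location:)
Code:
# /etc/modprobe.d/vfio.conf -- both device IDs on one line
options vfio-pci ids=8086:3ea5,8086:9dc8
# rebuild the initramfs so the change is picked up at boot
update-initramfs -u -k all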
Here are the relevant parts of my Windows guest configuration:
...
cpu: host,hidden=1,flags=+pcid
hostpci0: 00:02,pcie=1,x-vga=1
machine: q35
...

Thanks in advance.
 
Hi @miqu,

I don't have access to my Proxmox right now, but you should try:
Code:
-device vfio-pci,host=00:02.0,x-igd-opregion=on
bios: seabios
instead of
Code:
hostpci0: 00:02,pcie=1,x-vga=1
bios: ovmf

I also disabled Legacy boot mode in the BIOS, but I'm not really sure if it's relevant.

Let me know if it's working, or I will try to extract the whole setup from my Proxmox.
 
Thanks for your reply. That change alone still results in Code 43. So I suspect something else in my config differs from yours.

2021-01-07 15_37_46-proxmox_code43.png

Here's my current vm config:
args: -device vfio-pci,host=00:02.0,x-igd-opregion=on
agent: 1
bios: seabios
boot: order=scsi0;net0
cores: 6
cpu: host,hidden=1,flags=+pcid
machine: q35
memory: 8192
name: windows
net0: virtio=EE:A3:FF:25:67:D6,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsi0: local-lvm:vm-304-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=63f75f58-30fb-4e09-a961-72d29fb7dc6d
sockets: 1
vmgenid: c8d2ba3d-41b1-48f3-b147-d0af063f355a

I also disabled Legacy boot mode in the BIOS, but I'm not really sure if it's relevant.
I have had the same since yesterday already and noticed no difference.

Looking forward to comparing with your settings.
 
Here is the diff:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=0 intel_iommu=on vfio-pci.ids=8086:3ea5 video=efifb:off video=vesafb:off"
GRUB_CMDLINE_LINUX=""


Code:
agent: 1
args: -device vfio-pci,host=00:02.0,addr=0x18,x-igd-opregion=on
bootdisk: scsi0
cores: 6
cpu: host
machine: q35
memory: 8192
name: Windows10
net0: virtio=EA:6D:C5:46:4A:75,bridge=vmbr0
numa: 0
ostype: win10
sata2: local:iso/virtio-win.iso,media=cdrom,size=363020K
scsi0: local:107/vm-107-disk-0.qcow2,cache=writeback,discard=on,size=60G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=3a4689de-e278-47c9-9532-abf6a455f03b
sockets: 1
tablet: 0
vmgenid: 085e394d-df18-47e8-ad24-2ddab895cd0b


This is not needed anymore since kernel 5.4:
Code:
options vfio-pci ids=8086:3ea5
options vfio-pci ids=8086:9dc8

What is the driver version for the Iris Plus in your Win10 VM?
 
Note that I forgot something really important in my previous message: addr=0x18 in the args section.
It could well be that! :)
 
Thanks again for your reply.
I adjusted my config in the same way as you suggested:
- Edited the two lines in the grub config according to your example
- Removed the vfio.conf
- Ran update-grub and update-initramfs (exact commands below)
- Adjusted the vm conf file (with the addr)
- Rebooted
- Started the Windows VM
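
For completeness, the exact commands behind that update step (on a GRUB-booted Proxmox host; a systemd-boot setup would differ):
Code:
update-grub
# refresh the initramfs; -k all covers every installed kernel
update-initramfs -u -k all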

At that point the error code was gone and the iGPU was shown as working in the Device Manager of the Windows guest. However, I did not get any output over HDMI on my screen. Happy nonetheless, I shut down the instance and took a backup. Then I booted up again and, what can I say, error code 43 was back. And it remains, even after a reboot of the Proxmox host. It's really weird.

2021-01-07 15_37_46-proxmox_code43_driver.png


Driver Date: 05/09/2020
Driver Version: 27.20.100.8681
 
Can you try a fresh install of the VM now? Or start by uninstalling the driver?
Mine is older than that (2019 if I remember correctly). Each time I try a driver update I go back to Error 43...
 
So, I can get rid of the Code 43 error by removing addr=0x18 from the conf file. After that I can boot successfully; however, I still do not get any output over HDMI. I have tried this several times now and can reproduce it reliably. With this setup, I should note, I am able to access the VM via the Proxmox console; there is no need for RDP.

2021-01-07 15_37_46-proxmox_NO_code43_driver.png

Can you tell me what addr=0x18 actually means? Maybe I need to provide a different value here that is custom for my setup?

What I tried after that was disabling the "Microsoft Basic Display Adapter" in Device Manager and then rebooting. As a consequence I finally got output over HDMI, but it was just multiple BSODs with different error messages related to, e.g., IRQs.

I then reverted to a backup, tried to install the iGPU driver from Intel, and rebooted. This again resulted in Code 43, just like you described for your setup.

So now I am stuck with an iGPU that is recognized by Device Manager but does not seem to get used, and I cannot find a way to make it the default graphics adapter. How does this look in your setup?

Again, thanks for helping out!
 
I'm not quite sure what addr=0x18 does, but I think it is related to the PCI bus structure of the VM.
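
(As far as I understand it, addr= on a QEMU -device entry just pins the PCI slot the device gets on the guest's root bus, so addr=0x18 makes the card appear at 00:18.0 inside the guest instead of wherever QEMU would auto-assign it. A sketch of the two variants discussed here:)
Code:
# auto-assigned guest slot (the variant that cleared Code 43 on your side)
args: -device vfio-pci,host=00:02.0,x-igd-opregion=on
# pinned to guest slot 0x18, i.e. the card shows up as 00:18.0 in the VM
args: -device vfio-pci,host=00:02.0,addr=0x18,x-igd-opregion=on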

Have you tried a fresh install of Windows with my setup? My VM only runs with the old IGFX drivers (the ones from the fresh Windows install).
 
