Intel UHD 630 iGPU passthrough and HDMI output to host screen

fubarovich

New Member
Mar 21, 2022
Hello. Over the last week I've spent many hours trying to sort through guides/forums/bug reports to get iGPU passthrough with HDMI out to a Linux guest. The closest I got was with a Pop!_OS 21 guest; it actually shows the UHD 630 under lspci:

00:02.0 VGA compatible controller: Intel Corporation CometLake-S GT2 [UHD Graphics 630] (rev 03)

However, as soon as the VM starts it immediately uses all CPU resources and becomes unstable/crashes, and the Proxmox host kernel log is spammed with:

host:
vfio-pci 0000:00:02.0: BAR 2: can't reserve [mem 0x90000000-0x9fffffff 64bit pref]

Can someone please clue me in if what I'm trying to achieve is actually possible, and if so what am I missing? thank you!
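As a general first check for any passthrough setup (a diagnostic sketch, not specific to the BAR error above), it is worth confirming on the host that the iGPU at 00:02.0 sits in its own IOMMU group, since all devices in one group must be passed through together:

```shell
# Print each PCI device together with its IOMMU group number.
# The iGPU (0000:00:02.0) should ideally be alone in its group.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: %s\n' "$g" "${d##*/}"
done
```

If the directory is empty, the IOMMU is not enabled at all (missing `intel_iommu=on` or disabled VT-d in the BIOS).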
 
When using PCI passthrough, the passed-through device needs to be able to access all of the guest's RAM at any time because of direct memory access (DMA). This is also why ballooning won't work with PCI passthrough. "BAR" in the logs probably stands for "Base Address Register", which is also part of DMA. And the logs say it can't reserve something, so I guess the VM's GPU wants to reserve RAM for DMA and can't.

You could try lowering or increasing the VM's RAM; maybe you have too much or not enough. You might also try disabling ballooning in case you enabled it. No idea if that helps, but it can't hurt to test.
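One way to see what currently owns the memory window that vfio-pci failed to reserve (a diagnostic sketch; 0x90000000 is taken from the log in the first post, substitute the range from your own error):

```shell
# Search the host's physical memory map for the conflicting window.
# An "efifb" or "BOOTFB" entry covering it means the boot framebuffer
# is still holding the iGPU's BAR, so vfio-pci cannot reserve it.
grep -i -B1 -A1 '90000000' /proc/iomem || true
```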
 
Thanks for your reply. I've tried changing the VM memory settings from 8 GB down to 3 GB; no change in the errors above. I'm starting to think iGPU passthrough is not (yet) possible.
 
iGPU passthrough was possible when I was using an i8400 a few years ago.

But after I upgraded to an i12400 early this year, I never got it working. It passes through correctly and successfully, with the VM desktop showing on the monitor, but the speed is something like 1 fps.
 
Hello,

I got the same kind of issue after upgrading from Proxmox 6.4 to 7.2 with an Intel i7 8700 ( Intel UHD 630 ) and solved it with the following steps :

1- Edit the GRUB config ( nano /etc/default/grub ) with the following content :
Bash:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1 iommu=pt video=efifb:off video=vesafb:off"
GRUB_CMDLINE_LINUX="consoleblank=10 loglevel=3"

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
2- Run command : update-grub
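After running update-grub and rebooting, it may be worth verifying that the new parameters actually reached the running kernel (a sketch; note that Proxmox hosts booting via systemd-boot use /etc/kernel/cmdline and "proxmox-boot-tool refresh" instead of update-grub):

```shell
# The running kernel's command line should now contain the IOMMU flag.
grep -o 'intel_iommu=on' /proc/cmdline || echo "intel_iommu not active"
```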
3- Load modules ( nano /etc/modules )
Bash:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# Modules required for Intel GVT-g Split
kvmgt
4- Blacklist modules ( nano /etc/modprobe.d/blacklist.conf )
Bash:
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
5- Edit file /etc/modprobe.d/vfio.conf ( nano /etc/modprobe.d/vfio.conf ) and add your device's vendor:device ID
Code:
options vfio-pci ids=XXXX:YYYY
Replace XXXX:YYYY with your device's numeric vendor:device ID, retrieved using lspci -n -s 00:02
The value 00:02 is found by looking at the output of plain lspci and taking the address corresponding to the device ( in my case : VGA compatible controller: Intel Corporation UHD Graphics 630 (Desktop) )

If you have 2 devices to pass through ( like IGD + audio ), simply specify
Code:
options vfio-pci ids=XXXX:YYYY,AAAA:BBBB
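If helpful, the ID in exactly the format vfio.conf expects can be extracted with a one-liner (a sketch; 8086:9bc5 in the comment is just an example Comet Lake UHD 630 ID, yours may differ):

```shell
# The third field of "lspci -n" output is the vendor:device ID, e.g.
# a line like "00:02.0 0300: 8086:9bc5 (rev 03)" yields "8086:9bc5".
lspci -n -s 00:02 | awk '{print $3}'
```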
6-Edit file /etc/modprobe.d/kvm.conf ( nano /etc/modprobe.d/kvm.conf ) and add following :
Bash:
options kvm ignore_msrs=1
7-Create folder /var/lib/vz/snippets : mkdir /var/lib/vz/snippets
8- Create the hook script and add the following ( nano /var/lib/vz/snippets/gpu-hookscript.sh ) :
Bash:
#!/bin/bash

# Proxmox calls this script with the VM id as $1 and the phase as $2.
# Quote $2 so the test does not break when the argument is missing.
if [ "$2" == "pre-start" ]
then
    echo "gpu-hookscript: unloading GPU driver for Virtual Machine $1"
    modprobe -r i915
fi
9-Make script executable : chmod +x /var/lib/vz/snippets/gpu-hookscript.sh
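For reference, Proxmox invokes a hookscript as "<script> <vmid> <phase>" for each of the phases pre-start, post-start, pre-stop and post-stop. The dispatch logic above can be exercised outside Proxmox with a small stand-in (the echo replaces the modprobe call, which needs root):

```shell
#!/bin/bash
# Stand-in for the hook script's phase dispatch: only the pre-start
# phase triggers the driver unload; all other phases are ignored.
hook() {
    if [ "$2" = "pre-start" ]; then
        # the real script runs: modprobe -r i915
        echo "would unload i915 for VM $1"
    fi
}
hook 100 pre-start   # example VM id; prints the unload message
hook 100 post-stop   # prints nothing
```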
10-Supposing you already have your VM configured, add ( do not replace ) following to your VM config file ( nano /etc/pve/qemu-server/<id>.conf ) :
Bash:
hookscript: local:snippets/gpu-hookscript.sh
vga: none
hostpci1: 0000:00:XX,x-vga=1
Replace 0000:00:XX by your device address from lspci ( in my case : 0000:00:02,x-vga=1 )
11- Install kernel version 5.13.19-5 ( apt install pve-kernel-5.13.19-5-pve )
12- Configure boot on kernel 5.13.19-5 : ( proxmox-boot-tool kernel pin 5.13.19-5-pve )
13- Reboot : reboot

If you want to upgrade kernel to newest version later, here are the commands :
Bash:
proxmox-boot-tool kernel unpin
reboot

Credit goes to several support threads :
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
https://forum.proxmox.com/threads/gpu-passthrough-issues-after-upgrade-to-7-2.109051/
https://forum.proxmox.com/threads/gpu-passthrough-on-7-2-stopped-working-since-upgrade.109514/
https://forum.proxmox.com/threads/problem-with-gpu-passthrough.55918/page-2
https://forum.proxmox.com/threads/proxmox-7-1-gpu-passthrough.100601/
https://forum.proxmox.com/threads/gpu-passthrough-not-working-bar-3.60996/#post-290145

On my installation, the device-reset method indicated in other threads was causing the host to crash ( code below ) :
Bash:
echo 1 > /sys/bus/pci/devices/0000\:09\:00.0/remove
echo 1 > /sys/bus/pci/rescan

For an unknown reason, the i915 module was loaded despite the blacklist, which prevented the VM from displaying a picture ( visible with lsmod | grep i915 ). This is the reason for the adapted gpu-hookscript.sh.

I was not able to get passthrough working with the default kernel, but there is a known issue with simplefb mentioned in the documentation : https://pve.proxmox.com/wiki/Roadmap

Maybe the kernel downgrade can be dropped in the future.
 
Hi people,

I don't mean to hijack the thread, but I have a couple of questions that I’m hopeful somebody here might be able to help with…

Hardware: Core i5 9500 - (630 UHD iGPU)

I’m running the latest version of Proxmox and have successfully passed the iGPU through to a Windows VM/guest (at least as far as I can tell). I can RDP to the VM and the GPU shows up in device manager as normal and applications can use the device.

However, there is no display from the Display Port output on the host PC. I have not tried the VGA (D-Sub) connector.

Is this the normal/expected behaviour when doing full GPU passthrough for an integrated/intel GPU on KVM/Proxmox? Or, is the output restricted to VGA only? Or is it normal for there to be no output at all and the GPU is used exclusively in the VM for acceleration, but not intended for output to a monitor?

If it’s normal, I can see how this could be really useful - for using QuickSync and plenty more besides.

For my particular use case, I was hoping to use a VM that’s connected via the iGPU to the host display.

Any help on clarifying what’s expected and/or what’s possible would be much appreciated.
 
Not sure about iGPUs, but graphics cards should output the desktop of your Win VM when using PCIe passthrough. Check that you installed the GPU drivers in the Win VM and that the correct output is used.
 
Hey 'Dunuin' - thanks for the reply.

I've used AnyDesk to connect to the guest - as the display settings are a bit different when connected via RDP.

Everything looks normal, more or less. The only thing that seems 'off' is that Device Manager shows a monitor connected, but it's just a generic PnP type and I'm unable to change the resolution. Resolutions other than 1024x768 are listed, but when I select a different one, it immediately reverts back to 1024x768.

In the Intel Graphics Command Centre it's pretty much the same story - all looks fine (it displays details about the GPU etc), but here I'm also unable to successfully change resolution.

I suspect if I connect a monitor to the VGA connector I would get an output. I'll have to dig the monitor out and test.

Still not sure if this is normal operation for iGPU passthrough.
 
I suspect if I connect a monitor to the VGA connector I would get an output. I'll have to dig the monitor out and test.
I find it unlikely that it would output VGA but not output on any other connection. Unless you have more displays connected than it can handle, in which case some of them might be disabled, but this does not seem to be your situation.
Still not sure if this is normal operation for iGPU passthrough.
It isn't. If you're doing PCIe passthrough, it should act to the VM as it would on a physical machine (and it only works for that single VM). If you're using mediated device passthrough (e.g. GVT-g), where multiple VMs can use the GPU at the same time, then I suspect none of the physical outputs work.
 
Did you disable the virtual GPU in the VM settings so you are not actually using the virtual GPU and misinterpreting that as your iGPU?
For the default virtual GPU you would be limited to a fixed resolution defined in the VMs UEFI.
 
Hey 'Leesteken' thanks for the reply and for the clarification - it's a big help. I suspected that this is the case, but wanted to be certain before proceeding.

I followed this guide. https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/

I don't seem to have any error messages that could help with troubleshooting. Does anyone know if any of the kernel downgrades/upgrades might address this issue?

I have searched around, but finding up to date information on this can be tricky.
 
Did you disable the virtual GPU in the VM settings so you are not actually using the virtual GPU and misinterpreting that as your iGPU?
For the default virtual GPU you would be limited to a fixed resolution defined in the VMs UEFI.

I did indeed disable the virtual GPU, so the only GPU (in theory) that the VM can currently access is the GPU that has been passed through (Intel 630 UHD). Thanks for the suggestion.
 
I've just passed through the iGPU to a new Windows 11 installation/guest-os, but get exactly the same behaviour. iGPU shows up in device manager, applications can use the iGPU, but no output from the DP out.

I'm able to set any resolution, up to and including, 1024x768, but nothing higher.
 
What motherboard are you using? Are you sure that DisplayPort is an output and not an input? These days motherboards sometimes have a DisplayPort input and use USB-C as a video output.
 
iGPU Passthrough:
If you want monitor output from an IGD/iGPU/Intel GPU (I'm using the Intel HD 530 of an i7-6700), you must use legacy mode and not GVT-g; I've only been successful using HDMI, not VGA or DisplayPort. Took me a year to figure this all out, lol. I finally got it working a week ago and it has now been stable for 3 days. I'd reference all the different links I used, but none of them worked in whole; I had to use parts of each. At some point in the next month I hope to find the time to make a complete post on how I finally got it working in an acceptable way, granted it stays stable; I want to give it time to prove itself, then I'll buy more RAM and use it for my home lab services like Plex. I'm rushing this info. Good luck. I'll check back and add more if I can help.

# if your processor is newer than Broadwell: supports UPT mode, but no "VGA output"/monitor output
machine: q35
# if your processor is Sandy Bridge or newer: supports "VGA output"/HDMI output to a monitor
machine: pc-i440fx
 
I've just passed through the iGPU to a new Windows 11 installation/guest-os, but get exactly the same behaviour. iGPU shows up in device manager, applications can use the iGPU, but no output from the DP out.

I'm able to set any resolution, up to and including, 1024x768, but nothing higher.
Do you have host machine output via the DisplayPort at startup ( BIOS screen and first GRUB output ) ?
On my side, I use HDMI with an Ubuntu guest ( so if there is any limit to VGA, it may depend on hardware ) and can get full-HD output.

Does the command dmesg -Tw show any error containing "BAR 2: can't reserve" ?
Did you try kernel 5.13.19-5 on your host ( I got the issue after upgrading to the latest one ) ?

In case you want to try, use the following commands on your host ( requires a reboot ) :

Bash:
apt install pve-kernel-5.13.19-5-pve
proxmox-boot-tool kernel pin 5.13.19-5-pve
reboot

To roll back the kernel change, use the following commands :

Bash:
proxmox-boot-tool kernel unpin
reboot
 
What motherboard are you using? Are you sure that DisplayPort is an output and not an input? These days motherboards sometimes have a DisplayPort input and use USB-C as a video output.
I'm dual booting the PC between Windows 11 and Proxmox (on different SSDs) and when in Windows, I'm using the DP out from the PC - all fine.

When booting into Proxmox, the same connector is outputting the console. The display goes blank halfway through the boot process (well towards the end), as expected.

When I connect to Proxmox using a browser, all is fine. I then start the VM and once it has booted, I connect to it via AnyDesk or RDP. The GPU shows up fine in device manager with no warnings or errors etc, but no output.
 

Thanks for the info, I'll definitely give these a go and post back here with my results. :)

BTW, the PC is an HP ProDesk 400 G6 SFF (i5 9500); it only has a DP out and a VGA out (no HDMI). I still haven't tried the VGA output, as getting to the monitor is a bit tricky, but I'll try that soon.
 
The only "BAR 2" reference from dmesg -Tw reads:

pci 0000:00:02.0: BAR 2: assigned to efifb

I'm guessing that is okay - going to try the kernel....
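The "assigned to efifb" line means the UEFI boot framebuffer claimed BAR 2 at boot; whether a boot framebuffer is still attached can be checked with a quick look at the registered framebuffer devices (a diagnostic sketch):

```shell
# List framebuffer devices still registered on the host. An entry such
# as "0 EFI VGA" or "0 simple" suggests the boot framebuffer is still
# sitting on the iGPU's memory region.
cat /proc/fb 2>/dev/null || true
```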
 
I probably should have mentioned this earlier (probably normal, not sure), but when Proxmox boots (with either kernel) it displays:

Loading initial ramdisk ...

This stays on the screen (from DP output) until I start a VM that has the GPU passed through, then the signal from DP is cut off immediately after starting the VM. Even when the VM has finished booting there's no display.

Unfortunately, it's the same with both kernels. Thanks for the suggestion.

I'm not sure what else I should/can try, but happy to give anything a go.
 