Intel UHD 630 iGPU passthrough and HDMI output to host screen

Did you see the "Known Issues" section in the PVE 7.2 changelog?
PCI(e) passthrough related:
  • Systems passing through a GPU may be affected by the switch to the SYS_FB (system frame buffer) KConfig build options, using the simplefb module as driver in the new default 5.15 based kernel. The sys-fb allows taking over the FB from the firmware/earlier boot stages. Note that Proxmox VE uses the legacy simplefb driver over the modern simpledrm one due to regressions and issues we encountered when testing the latter. Most of those issues are already fixed in newer kernels, and Proxmox VE may try to switch to the modern, DRM-based FB driver once it moves to 5.17, or newer, as its default kernel. If your system is configured to pass through the (i)GPU, and you had to avoid the host kernel claiming the device, you may now need to also add video=simplefb:off to the kernel boot command line.
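For reference, a minimal sketch of where that option goes on a GRUB-based install ( the other options shown are just the common passthrough baseline; run update-grub afterwards ):
Bash:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=simplefb:off"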
 
I probably should have mentioned this earlier (probably normal, not sure), but when Proxmox boots (with either kernel) it displays:

Loading initial ramdisk ...

This stays on the screen (from DP output) until I start a VM that has the GPU passed through, then the signal from DP is cut off immediately after starting the VM. Even when the VM has finished booting there's no display.

Unfortunately, it's the same with both kernels. Thanks for the suggestion.

I'm not sure what else I should/can try, but happy to give anything a go.
I have the same behaviour with "Loading initial ramdisk" using this in my GRUB config:

Bash:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1 iommu=pt video=efifb:off video=vesafb:off"

GRUB_CMDLINE_LINUX="consoleblank=10 loglevel=3"

Maybe you can try to edit your GRUB config ( it seems different in your link ) and also add kvmgt to your /etc/modules.
I also don't see the part with /etc/modprobe.d/vfio.conf ( not sure if it is mandatory, but you can still try ).

Also, once you have started the VM, ensure that the i915 module is not loaded on the host ( I had it loaded despite the blacklist ). Check with: lsmod | grep i915

If it is loaded, try: modprobe -r i915. In my case, output appeared after that ( only with kernel 5.13.19-5, but maybe I should edit my GRUB config as indicated by Dunuin, which would give:
Bash:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1 iommu=pt video=efifb:off video=vesafb:off video=simplefb:off"

GRUB_CMDLINE_LINUX="consoleblank=10 loglevel=3"
)
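As a quick sanity check after rebooting, you can confirm the options actually reached the kernel:
Bash:
cat /proc/cmdline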
 
Does video=simplefb:off actually work now? Because it did not actually fix the problem in earlier 5.15 kernels and initcall_blacklist=sysfb_init was needed instead, when doing passthrough of the GPU that was also used during boot of the machine.
 
Hello,

I got the same kind of issue after upgrading from Proxmox 6.4 to 7.2 with an Intel i7-8700 ( Intel UHD 630 ) and solved it with the following steps:

1- Edit the GRUB config ( nano /etc/default/grub ) with the following content:
Bash:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1 iommu=pt video=efifb:off video=vesafb:off"
GRUB_CMDLINE_LINUX="consoleblank=10 loglevel=3"

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
2- Run command : update-grub
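Side note ( not part of the original steps ): on hosts that boot with systemd-boot instead of GRUB, e.g. a UEFI install with ZFS on root, the kernel command line lives in /etc/kernel/cmdline, and the boot entries are refreshed with:
Bash:
proxmox-boot-tool refresh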
3- Load modules ( nano /etc/modules )
Bash:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# Modules required for Intel GVT-g Split
kvmgt
4- Blacklist modules ( nano /etc/modprobe.d/blacklist.conf )
Bash:
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
5- Edit file /etc/modprobe.d/vfio.conf ( nano /etc/modprobe.d/vfio.conf ) and add your PCI vendor:device ID
Code:
options vfio-pci ids=XXXX:YYYY
Replace XXXX:YYYY with your PCI vendor:device ID, retrieved using lspci -n -s 00:02
The address 00:02 is found by looking at the output of lspci and taking the line corresponding to the device ( in my case: VGA compatible controller: Intel Corporation UHD Graphics 630 (Desktop) )

If you have 2 devices to pass through ( like IGD + audio ), simply specify:
Code:
options vfio-pci ids=XXXX:YYYY,AAAA:BBBB
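A quick way to list the numeric vendor:device IDs for both the iGPU and the audio function at once ( a generic lookup, not from the original post ):
Bash:
lspci -nn | grep -Ei "vga|audio"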
6- Edit file /etc/modprobe.d/kvm.conf ( nano /etc/modprobe.d/kvm.conf ) and add the following:
Bash:
options kvm ignore_msrs=1
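Optionally ( my addition, not part of the original steps ), the kvm module's report_ignored_msrs parameter can also quiet the resulting dmesg spam:
Bash:
options kvm ignore_msrs=1 report_ignored_msrs=0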
7-Create folder /var/lib/vz/snippets : mkdir /var/lib/vz/snippets
8- Create the hook script with the following content ( nano /var/lib/vz/snippets/gpu-hookscript.sh ):
Bash:
#!/bin/bash

if [ "$2" == "pre-start" ]
then
    echo "gpu-hookscript: unloading GPU driver for Virtual Machine $1"
    modprobe -r i915
fi
9-Make script executable : chmod +x /var/lib/vz/snippets/gpu-hookscript.sh
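As an untested variant ( a sketch, not from the original post ), the hookscript can also reload i915 after the VM stops, so the host console comes back:
Bash:
#!/bin/bash
# Proxmox calls hookscripts with $1 = VM ID and $2 = phase
# ( pre-start, post-start, pre-stop, post-stop ).
if [ "$2" == "pre-start" ]; then
    echo "gpu-hookscript: unloading GPU driver for Virtual Machine $1"
    modprobe -r i915
elif [ "$2" == "post-stop" ]; then
    echo "gpu-hookscript: reloading GPU driver after Virtual Machine $1 stopped"
    modprobe i915
fi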
10- Supposing you already have your VM configured, add ( do not replace ) the following to your VM config file ( nano /etc/pve/qemu-server/<id>.conf ):
Bash:
hookscript: local:snippets/gpu-hookscript.sh
vga: none
hostpci1: 0000:00:XX,x-vga=1
Replace 0000:00:XX with your device address from lspci ( in my case: 0000:00:02,x-vga=1 )
11- Install kernel version 5.13.19-5 ( apt install pve-kernel-5.13.19-5-pve )
12- Configure boot on kernel 5.13.19-5 : ( proxmox-boot-tool kernel pin 5.13.19-5-pve )
13- Reboot : reboot

If you want to upgrade the kernel to the newest version later, here are the commands:
Bash:
proxmox-boot-tool kernel unpin
reboot

Credit goes to several support threads:
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
https://forum.proxmox.com/threads/gpu-passthrough-issues-after-upgrade-to-7-2.109051/
https://forum.proxmox.com/threads/gpu-passthrough-on-7-2-stopped-working-since-upgrade.109514/
https://forum.proxmox.com/threads/problem-with-gpu-passthrough.55918/page-2
https://forum.proxmox.com/threads/proxmox-7-1-gpu-passthrough.100601/
https://forum.proxmox.com/threads/gpu-passthrough-not-working-bar-3.60996/#post-290145

On my installation, the device reset method indicated in other threads was causing the host to crash ( code below ):
Bash:
echo 1 > /sys/bus/pci/devices/0000\:09\:00.0/remove
echo 1 > /sys/bus/pci/rescan

For an unknown reason, the i915 module was loaded despite the blacklist, which was preventing the VM from displaying anything ( visible with lsmod | grep i915 ). This is the reason for the adapted gpu-hookscript.sh.
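A generic hint ( my note, not from the original post ): a common reason a modprobe blacklist appears to be ignored is that the change never made it into the initramfs. Regenerating it and rebooting may help:
Bash:
update-initramfs -u -k all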

I was not able to make passthrough work with the default kernel, but there is a known issue with simplefb indicated in the documentation: https://pve.proxmox.com/wiki/Roadmap

Maybe the kernel downgrade can be removed in the future.
I followed these directions ( I am running Ubuntu 22.04 and passing through an Intel UHD 630 iGPU ) and found them very helpful and easy to follow. However, when I loaded up the VM, I was getting "TASK ERROR: start failed" every time.

I eventually got the VM to boot by changing hostpci1: 0000:00:XX,x-vga=1 to:
hostpci1: 0000:00:XX,x-vga=on
I don't know if that helps anyone but I thought I would share it in case it does.

Never mind, once I powered off my host and powered it back up again, the problem returned.
 
Using Proxmox 8 on an Intel NUC with an N5105 iGPU here, trying to output a Windows 11 VM to an HDMI monitor. I have exactly the same issue as @sozwoz: I have followed the 3os guide, and the graphics device shows up perfectly in Device Manager. However, the host monitor (HDMI) displays very little. On boot of the Proxmox box it shows the GRUB screen and then 'loading RAM disk'. The exact moment I start the Windows VM, the monitor goes blank and says "check signal cable".

Code:
root@nuc-proxmox:~# lspci -k
00:00.0 Host bridge: Intel Corporation Device 4e24
        Subsystem: Intel Corporation Device 3027
00:02.0 VGA compatible controller: Intel Corporation JasperLake [UHD Graphics] (rev 01)
        DeviceName: Intel(R) UHD Graphics Device
        Subsystem: Intel Corporation JasperLake [UHD Graphics]
        Kernel driver in use: vfio-pci
        Kernel modules: i915
 
Using Proxmox 8 on an Intel NUC with an N5105 iGPU here, trying to output a Windows 11 VM to an HDMI monitor. […]
I have exactly the same. Any update here?
 
I have exactly the same. Any update here?
Same here, same symptoms. I'm outputting from DisplayPort ( no HDMI available ) and my iGPU is the 'HD Graphics 630' from an 'i7-7700'. GPU passthrough is OK and I have installed the drivers in Win10. But no picture.

Anyone got this working?
 
I think we have an issue here.
Same for me.

NO HDMI OUTPUT ON INTEL IGPU FULL PASSTHROUGH

Core i3-9100, Gigabyte Z370 chipset. Lubuntu 22.04.3, updated. FULL passthrough working, no GVT-g split.
As I usually do, I configure only GRUB options, no modprobe. These are the settings:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction video=efifb:eek:ff video=vesa:eek:ff disable_vga=1 vfio-pci.ids=8086:3e91,8086:a2f0,1002:ab28,1002:743f vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,amdgpu,nouveau,nvidia,nvidiafb,nvidia-gpu,snd_hda_intel,snd_hda_codec_hdmi,snd_soc_avs,i915"

Different trials:

1) When I select Display "Default", so I can see the output on the screen with noVNC, everything is correct: the Intel iGPU is recognized in the guest and can transcode 4K streams in Plex without loading the CPU.

2) When I select Display "None" and "Primary GPU", there is no HDMI output at all from the UHD 630 iGPU!

3) Vice versa: keeping the same settings ( Display "None" and "Primary GPU" ) but selecting the discrete GPU ( Radeon RX 6400 ), GPU passthrough works and the HDMI output is perfect!

This is surely not a configuration problem; I'm sure HDMI should work as intended in case 2).

I really need the HDMI output to use the Moonlight client on the main TV screen, streaming 4K games from my 7900 XTX.

Has anyone finally found a solution? I can't tie this to anything...

EDIT: I've even tried the romfile and other options, no luck...
 
Update: I was able to get DisplayPort output working by adding the following options to GRUB:

video=simplefb:off video=vesafb:off disable_vga=1

but still no HDMI
 
Update: I was able to get DisplayPort output working by adding the following options to GRUB:

video=simplefb:off video=vesafb:off disable_vga=1
video=vesafb:off does nothing on recent Proxmox versions. disable_vga=1 is a vfio-pci option and does nothing on the kernel command line. Can you try with video=simplefb:off?
but still no HDMI
Maybe get a cheap DP to HDMI converter plug?
 
video=vesafb:off does nothing on recent Proxmox versions. disable_vga=1 is a vfio-pci option and does nothing on the kernel command line. Can you try with video=simplefb:off?

Maybe get a cheap DP to HDMI converter plug?
Unfortunately, DP output doesn't carry audio on my board, so it's useless for me.

video=simplefb:off is already present in the GRUB cmdline.

This is the full GRUB cmdline:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction video=efifb:off video=vesa:off video=simplefb:off video=vesafb:off disable_vga=1 vfio-pci.ids=8086:3e91,8086:a2f0,1002:ab28,1002:743f vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,amdgpu,nouveau,nvidia,nvidiafb,nvidia-gpu,snd_hda_intel,snd_hda_codec_hdmi,snd_soc_avs,i915"

this is my lspci -nnk output:

Code:
00:00.0 Host bridge [0600]: Intel Corporation 8th Gen Core 4-core Desktop Processor Host Bridge/DRAM Registers [Coffee Lake S] [8086:3e1f] (rev 08)
        DeviceName: Onboard - Other
        Subsystem: Gigabyte Technology Co., Ltd Z370 AORUS Gaming K3-CF [1458:5000]
        Kernel driver in use: skl_uncore
        Kernel modules: ie31200_edac
00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 08)
        Subsystem: Gigabyte Technology Co., Ltd 6th-10th Gen Core Processor PCIe Controller (x16) [1458:5000]
        Kernel driver in use: pcieport
00:02.0 Display controller [0380]: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] [8086:3e91]
        DeviceName: Onboard - Video
        Subsystem: Gigabyte Technology Co., Ltd CoffeeLake-S GT2 [UHD Graphics 630] [1458:d000]
        Kernel driver in use: vfio-pci
        Kernel modules: i915
00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [8086:1911]
        DeviceName: Onboard - Other
        Subsystem: Gigabyte Technology Co., Ltd Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [1458:5000]
00:14.0 USB controller [0c03]: Intel Corporation 200 Series/Z370 Chipset Family USB 3.0 xHCI Controller [8086:a2af]
        DeviceName: Onboard - Other
        Subsystem: Gigabyte Technology Co., Ltd 200 Series/Z370 Chipset Family USB 3.0 xHCI Controller [1458:5007]
        Kernel driver in use: vfio-pci
        Kernel modules: xhci_pci
00:16.0 Communication controller [0780]: Intel Corporation 200 Series PCH CSME HECI #1 [8086:a2ba]
        DeviceName: Onboard - Other
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH CSME HECI [1458:1c3a]
        Kernel driver in use: mei_me
        Kernel modules: mei_me
00:17.0 SATA controller [0106]: Intel Corporation 200 Series PCH SATA controller [AHCI mode] [8086:a282]
        DeviceName: Onboard - SATA
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH SATA controller [AHCI mode] [1458:b005]
        Kernel driver in use: ahci
        Kernel modules: ahci
00:1b.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #17 [8086:a2e7] (rev f0)
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH PCI Express Root Port [1458:5001]
        Kernel driver in use: pcieport
00:1b.2 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #19 [8086:a2e9] (rev f0)
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH PCI Express Root Port [1458:5001]
        Kernel driver in use: pcieport
00:1b.4 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #21 [8086:a2eb] (rev f0)
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH PCI Express Root Port [1458:5001]
        Kernel driver in use: pcieport
00:1c.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #1 [8086:a290] (rev f0)
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH PCI Express Root Port [1458:5001]
        Kernel driver in use: pcieport
00:1c.5 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #6 [8086:a295] (rev f0)
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH PCI Express Root Port [1458:5001]
        Kernel driver in use: pcieport
00:1c.6 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #7 [8086:a296] (rev f0)
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH PCI Express Root Port [1458:5001]
        Kernel driver in use: pcieport
00:1c.7 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #8 [8086:a297] (rev f0)
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH PCI Express Root Port [1458:5001]
        Kernel driver in use: pcieport
00:1d.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #9 [8086:a298] (rev f0)
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH PCI Express Root Port [1458:5001]
        Kernel driver in use: pcieport
00:1d.4 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #13 [8086:a29c] (rev f0)
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH PCI Express Root Port [1458:5001]
        Kernel driver in use: pcieport
00:1f.0 ISA bridge [0601]: Intel Corporation Z370 Chipset LPC/eSPI Controller [8086:a2c9]
        DeviceName: Onboard - Other
        Subsystem: Gigabyte Technology Co., Ltd Z370 Chipset LPC/eSPI Controller [1458:5001]
00:1f.2 Memory controller [0580]: Intel Corporation 200 Series/Z370 Chipset Family Power Management Controller [8086:a2a1]
        DeviceName: Onboard - Other
        Subsystem: Gigabyte Technology Co., Ltd 200 Series/Z370 Chipset Family Power Management Controller [1458:5001]
00:1f.3 Audio device [0403]: Intel Corporation 200 Series PCH HD Audio [8086:a2f0]
        DeviceName: Onboard - Sound
        Subsystem: Gigabyte Technology Co., Ltd 200 Series PCH HD Audio [1458:a182]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel, snd_soc_avs
00:1f.4 SMBus [0c05]: Intel Corporation 200 Series/Z370 Chipset Family SMBus Controller [8086:a2a3]
        DeviceName: Onboard - Other
        Subsystem: Gigabyte Technology Co., Ltd 200 Series/Z370 Chipset Family SMBus Controller [1458:5001]
        Kernel driver in use: i801_smbus
        Kernel modules: i2c_i801
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8]
        DeviceName: Onboard - Ethernet
        Subsystem: Gigabyte Technology Co., Ltd Ethernet Connection (2) I219-V [1458:e000]
        Kernel driver in use: e1000e
        Kernel modules: e1000e
01:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Upstream Port of PCI Express Switch [1002:1478] (rev c7)
        Kernel driver in use: pcieport
02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch [1002:1479]
        Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch [1002:1479]
        Kernel driver in use: pcieport
03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 24 [Radeon RX 6400/6500 XT/6500M] [1002:743f] (rev c7)
        Subsystem: Sapphire Technology Limited Navi 24 [Radeon RX 6400/6500 XT/6500M] [1da2:e458]
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu
03:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]
        Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
06:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] (rev 03)
        Subsystem: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263]
        Kernel driver in use: nvme
        Kernel modules: nvme
08:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
        Subsystem: Gigabyte Technology Co., Ltd I211 Gigabit Network Connection [1458:e000]
        Kernel driver in use: igb
        Kernel modules: igb
09:00.0 Network controller [0280]: Intel Corporation Wireless 8265 / 8275 [8086:24fd] (rev 78)
        Subsystem: Intel Corporation Dual Band Wireless-AC 8265 [8086:1010]
        Kernel driver in use: iwlwifi
        Kernel modules: iwlwifi

and this is my modules file:

Code:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# Modules required for Intel GVT-g Split
#kvmgt

Plus, I'm using the romfile from the iGPU.

When I also try to pass through the audio device ( I still haven't figured out whether it carries HDMI audio or just the classic analog codec ):
00:1f.3 Audio device [0403]: Intel Corporation 200 Series PCH HD Audio [8086:a2f0]

the VM doesn't start at all; it hangs at startup.

Thanks for the help!
 
Same problem: passing the iGPU to a VM, only DP works, not HDMI. I want to connect my box to the TV, so I need sound output.

Does anyone have any idea how to get the HDMI port working?

My hardware:

HP Elite Mini G600, i5-12500T Alder Lake, UHD 770. Proxmox 8.1, Ubuntu 22 guest. I followed this guide plus a lot of other suggestions on the internet, so my configuration is a Frankenstein by now, but somehow it works.

My setup:

Code:
root@pve:~# cat /etc/kernel/cmdline

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init video=simplefb:off video=vesafb:off video=efifb:off video=vesa:off disable_vga=1 vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1 modprobe.blacklist=radeon,nouveau,nvidia,nvidiafb,nvidia-gpu,snd_hda_intel,snd_hda_codec_hdmi,i915

Code:
root@pve:~# cat /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Code:
root@pve:~# cat /etc/modprobe.d/blacklist.conf

blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm
blacklist nouveau
blacklist i915

Code:
root@pve:~# cat /etc/modprobe.d/vfio.conf

options vfio-pci ids=8086:4690 disable_vga=1

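A generic verification step ( my suggestion ): after rebooting, confirm that vfio-pci actually claimed the iGPU.
Code:
lspci -nnk -s 00:02.0
# expect a line reading: Kernel driver in use: vfio-pci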
 
I have an OptiPlex 3050 Micro. I've battled this for ~4 working days, and the furthest I've gotten is having Ubuntu display via HDMI output. With Windows 10, regardless of what I do or try, I get absolutely nothing, despite Windows reporting that the driver is installed correctly, etc.

I am positive it's just not possible in Proxmox 8.x. Something is broken here...
 
I got the same kind of issue after upgrading from Proxmox 6.4 to 7.2 with an Intel i7-8700 ( Intel UHD 630 ) and solved it with the following steps: […]
Hello,
It has been a long time since the last post, but following a new Proxmox installation, and thanks to chchia's post ( https://forum.proxmox.com/threads/a...-working-with-hdmi-output.138049/#post-615315 ), it is possible to have passthrough working with the latest kernel version.
To achieve this:
1/ Follow first guide steps
2/ Replace grub default line with :
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt intel_pstate=disable"
3/ Run command :
Bash:
update-grub
4/ Remove hookscript reference from your VM config :
Bash:
nano /etc/pve/qemu-server/<id>.conf
5/ Revert the kernel pin to use the latest one:
Bash:
proxmox-boot-tool kernel unpin

In case you need audio, identify the line containing "Audio device" in the output of
Bash:
lspci
note the associated address ( 00:XX.Y ), and add it as a raw PCI device from the Proxmox web UI ( Hardware menu of your VM ).
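A hedged shortcut for that lookup, filtering the lspci output:
Bash:
lspci | grep -i audio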
 
Thank you -- but still no luck.

In my case, I'm on 8.1.5 -- clean install, everything updated. UEFI/ZFS -- so no GRUB for me.

vi /etc/kernel/cmdline

Code:
quiet intel_iommu=on iommu=pt intel_pstate=disable

vi /etc/modules

Code:
vfio
vfio_iommu_type1
vfio_pci

/etc/modprobe.d/pve-blacklist.conf

Code:
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915

lspci -nnk | grep Intel

Code:
00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 530 [8086:1912] (rev 06)
00:1f.3 Audio device [0403]: Intel Corporation 200 Series PCH HD Audio [8086:a2f0]

vi /etc/modprobe.d/vfio.conf

Code:
options vfio-pci ids=8086:1912,8086:a2f0

mkdir /var/lib/vz/snippets

vi /var/lib/vz/snippets/gpu-hookscript.sh


Code:
#!/bin/bash


if [ $2 == "pre-start" ]
then
    echo "gpu-hookscript: unloading GPU driver for Virtual Machine $1"
    modprobe -r i915
fi

chmod +x /var/lib/vz/snippets/gpu-hookscript.sh

pve-efiboot-tool refresh && update-initramfs -u -k all && reboot
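A generic post-reboot check ( my suggestion ): verify the IOMMU actually came up before blaming the VM config.
Code:
dmesg | grep -e DMAR -e IOMMU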


Installed Windows 10 IoT, ran updates, enabled RDP -- disabled "Display", added both PCIe devices ( GPU: Primary GPU, All Functions, ROM-Bar, PCI-Express; audio: the same, minus Primary GPU ).

Added the hookscript to my VM (201) -- and still black. :-(
 
Thank you -- but still no luck. In my case, I'm on 8.1.5 -- clean install, everything updated. UEFI/ZFS -- so no GRUB for me. […]
My testing is with an Ubuntu guest and the latest Proxmox 8.1 installed on Debian ( https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye ).

The GPU hookscript is not needed: in the past it had to run on the host to unload the i915 driver, but it no longer works on this version ( and is not needed ). I don't know if it helps, but on my side I removed the default console display, so the display is not available through the Proxmox console, only on the physical screen.
 
This is the only Windows 10 GPU passthrough configuration I got working with sound on DP and HDMI ( clean install of PVE 8.1.4 ). I hope this helps:

Using a Dell OptiPlex 5050 Tower and SFF.
In 100.conf add:
args: -vnc 0.0.0.0:1 -device vfio-pci,host=00:02.0,romfile=/var/lib/vz/dump/i915ovmf.rom,x-igd-opregion=on

I stored the i915ovmf.rom in /var/lib/vz/dump/

Get file from: https://github.com/patmagauran/i915ovmfPkg/releases

i.e.:
/etc/pve/qemu-server/100.conf:
agent: 1
args: -vnc 0.0.0.0:1 -device vfio-pci,host=00:02.0,romfile=/var/lib/vz/dump/i915ovmf.rom,x-igd-opregion=on
balloon: 0
boot: order=scsi0;net0
cores: 4
cpu: host
hostpci0: 0000:00:1f.3,pcie=1,rombar=0
machine: pc-q35-8.1
memory: 8192
meta: creation-qemu=8.1.5,ctime=1710709984
name: Windows10Legacy
net0: virtio=BC:24:11:CA:50:85,bridge=vmbr0
numa: 0
ostype: win10
scsi0: local-lvm:vm-100-disk-0,iothread=1,size=97G
scsihw: virtio-scsi-single
smbios1: uuid=7ac1979f-2733-4bc9-8da3-144c81a25472
sockets: 1
usb0: host=1-8
usb1: host=1-6
vga: vmware
vmgenid: d29a5b37-2126-4e15-89ae-0dc213242475


This worked for me... Also, the IOMMU group for the audio device contains network devices, so you have to get a USB 3 Ethernet dongle. I used a TP-Link (UE300) and bridged it in PVE networking so you don't use the internal NIC. You could get a PCIe NIC, but I haven't tested that yet. Ideally you want each of your devices in a separate IOMMU group; that's not the case on this machine, see below.

I also got it working with an RX 550 card: use "args: -vnc 0.0.0.0:1 -device vfio-pci,host=01:00.0,romfile=/var/lib/vz/dump/Lexa.rom". You can omit the TP-Link dongle for the RX 550; it actually works with both NIC options, since its audio is in a separate IOMMU group. If you add a PCIe NIC card, it would be in group 13 in this configuration. Note: if you use the internal NIC with Windows 10 ( no internet ), you will have a communication issue with the Proxmox web GUI and lose SSH as well. I also tested with an Nvidia GT 730: -device vfio-pci,host=01:00.0,romfile=/var/lib/vz/dump/GK208.rom worked, but nothing was displayed.


root@pve:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
IOMMU group 0 00:02.0 Display controller [0380]: Intel Corporation HD Graphics 630 [8086:5912] (rev 04)
IOMMU group 10 01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X] [1002:aae0]
IOMMU group 11 02:00.0 Non-Volatile memory controller [0108]: MAXIO Technology (Hangzhou) Ltd. NVMe SSD Controller MAP1202 [1e4b:1202] (rev 01)
IOMMU group 12 03:00.0 USB controller [0c03]: VIA Technologies, Inc. VL805/806 xHCI USB 3.0 Controller [1106:3483] (rev 01)
IOMMU group 1 00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers [8086:591f] (rev 05)
IOMMU group 2 00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)
IOMMU group 3 00:14.0 USB controller [0c03]: Intel Corporation 200 Series/Z370 Chipset Family USB 3.0 xHCI Controller [8086:a2af]
IOMMU group 3 00:14.2 Signal processing controller [1180]: Intel Corporation 200 Series PCH Thermal Subsystem [8086:a2b1]
IOMMU group 4 00:16.0 Communication controller [0780]: Intel Corporation 200 Series PCH CSME HECI #1 [8086:a2ba]
IOMMU group 4 00:16.3 Serial controller [0700]: Intel Corporation Device [8086:a2bd]
IOMMU group 5 00:17.0 SATA controller [0106]: Intel Corporation 200 Series PCH SATA controller [AHCI mode] [8086:a282]
IOMMU group 6 00:1b.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #17 [8086:a2e7] (rev f0)
IOMMU group 7 00:1c.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #7 [8086:a296] (rev f0)
IOMMU group 8 00:1f.0 ISA bridge [0601]: Intel Corporation 200 Series PCH LPC Controller (Q270) [8086:a2c6]
IOMMU group 8 00:1f.2 Memory controller [0580]: Intel Corporation 200 Series/Z370 Chipset Family Power Management Controller [8086:a2a1]
IOMMU group 8 00:1f.3 Audio device [0403]: Intel Corporation 200 Series PCH HD Audio [8086:a2f0]
IOMMU group 8 00:1f.4 SMBus [0c05]: Intel Corporation 200 Series/Z370 Chipset Family SMBus Controller [8086:a2a3]
IOMMU group 8 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (5) I219-V [8086:15d6]
IOMMU group 9 01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Lexa PRO [Radeon 540/540X/550/550X / RX 540X/550/550X] [1002:699f] (rev c7)


/etc/modprobe.d/pve-blacklist.conf:
# This file contains a list of modules which are not supported by Proxmox VE
# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
#blacklist nvidiafb
blacklist i915
blacklist nouveau
blacklist nvidia
blacklist snd_hda_codec_hdmi
blacklist snd_hda_intel
blacklist snd_hda_codec
blacklist snd_hda_core
blacklist snd_soc_avs
blacklist radeon
blacklist amdgpu

In /etc/default/grub, add this:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt initcall_blacklist=sysfb_init pcie_acs_override=downstream,multifunction"

/etc/modprobe.d/kvm.conf:
options vfio_iommu_type1 allow_unsafe_interrupts=1

/etc/modprobe.d/vfio.conf:
options vfio-pci ids=8086:5912,1002:699f,8086:a2f0,1002:aae0 disable_vga=1


Using the above method, I have MX Linux 23 KDE (Wayland) and Debian 12 passthrough working.

/etc/pve/qemu-server/101.conf:
agent: 1
args: -vnc 0.0.0.0:2 -device vfio-pci,host=00:02.0,romfile=/var/lib/vz/dump/i915ovmf.rom,x-igd-opregion=on
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
hostpci0: 0000:00:1f,rombar=0
ide2: local:iso/mx23-20240409_0531.iso,media=cdrom,size=6092M
memory: 8192
meta: creation-qemu=8.1.5,ctime=1712664369
name: Mx23
net0: virtio=BC:24:11:2B:44:54,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local-lvm:vm-101-disk-0,iothread=1,size=60G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=8a73a30d-2f13-41f8-8a82-691d45f83038
sockets: 1
usb0: host=1a2c:2d23
usb1: host=0000:3825
vga: none
vmgenid: 5700ce95-af06-43d0-8b2b-88c056116bff

I really like the MX Snapshot feature, since you can duplicate your existing installation configuration rather than installing apps all over again.
 
This is the only Windows 10 GPU passthrough configuration I got working with sound on DP and HDMI ( clean install of PVE 8.1.4 ). I hope this helps:

FWIW: I ran this step by step and bricked my whole Proxmox installation. Not saying that it won't work for others, just that I can't boot mine now, and as soon as the VM tries to start ( automatically ), Proxmox stops responding.

Edit: if you run into this issue, press `e` at the GRUB menu, set `iommu=off` in GRUB_CMDLINE_LINUX_DEFAULT, and it will boot.
 
