[SOLVED] Proxmox 6.0 Gemini Lake and IGD (graphics) passthrough for Windows 10

Sferg

Member
Aug 10, 2019
Hello. I'm new to virtualization. I'm trying to pass an integrated video card through to a VM, but unfortunately, when I start the VM I see only a blank screen (connected via HDMI).

My PC configuration:
- Mainboard: ASRock J5005-ITX;
- CPU: Intel Pentium J5005;
- RAM: 2 x 8 GB DDR4-2400;
- GPU: Intel UHD 605.

I did everything according to the instructions given in this forum (a verification sketch follows the list):

- run lspci -nn | grep "VGA":
Code:
00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 605 [8086:3184] (rev 03)
- Add "intel_iommu=on video=efifb:off" to GRUB_CMDLINE_LINUX_DEFAULT;
- run update-grub;
- Add to /etc/modprobe.d/blacklist.conf:
Code:
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
- Add to /etc/modprobe.d/kvm.conf:
Code:
options kvm ignore_msrs=1
- Add to /etc/modprobe.d/vfio.conf:
Code:
options vfio-pci ids=8086:3184 disable_vga=1
- Add to /etc/modules:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
- run update-initramfs -u;
- reboot.
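For reference, a quick way to confirm these steps took effect after the reboot; this is a generic check (assuming the GPU is still at 00:02.0), not part of the original instructions:
Code:
# the IGD should now be claimed by vfio-pci instead of i915
lspci -nnk -s 00:02.0
# expected in the output: Kernel driver in use: vfio-pci

# confirm the kernel actually booted with the new parameters
cat /proc/cmdline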

The VM configuration is as follows:

Code:
agent: 1
args: -device vfio-pci,host=00:02.0,addr=0x18,x-igd-opregion=on
balloon: 0
bios: ovmf
boot: dc
bootdisk: sata0
cores: 2
cpu: host
efidisk0: local:100/vm-100-disk-1.qcow2,size=128K
machine: q35
memory: 2048
name: Win10
net0: virtio=DA:52:A4:8B:C5:6F,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
sata0: local:iso/windows.iso,media=cdrom
sata1: local:100/vm-100-disk-0.qcow2,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=a02656db-e97c-4c2c-a4ff-955328ad71fc
sockets: 1
vga: none
vmgenid: 26990a9f-5645-47c4-99f6-1fa87481c708

Tell me, please, what am I doing wrong?
 
Code:
# dmesg | grep -aiE '((DMAR)|(kvm)|(drm)|(Command line)|(iommu)|(vfio))'
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.0.21-5-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt rd.driver.pre=vfio-pci video=vesafb:off,efifb:off
[    0.012787] ACPI: DMAR 0x000000005D6A0D70 0000A8 (v01 INTEL  GLK-SOC  00000003 BRXT 0100000D)
[    0.229304] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.0.21-5-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on iommu=pt rd.driver.pre=vfio-pci video=vesafb:off,efifb:off
[    0.229434] DMAR: IOMMU enabled
[    0.363464] DMAR: Host address width 39
[    0.363466] DMAR: DRHD base: 0x000000fed64000 flags: 0x0
[    0.363475] DMAR: dmar0: reg_base_addr fed64000 ver 1:0 cap 1c0000c40660462 ecap 9e2ff0505e
[    0.363478] DMAR: DRHD base: 0x000000fed65000 flags: 0x1
[    0.363487] DMAR: dmar1: reg_base_addr fed65000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.363490] DMAR: RMRR base: 0x0000005d5d8000 end: 0x0000005d5f7fff
[    0.363492] DMAR: RMRR base: 0x0000005f800000 end: 0x0000007fffffff
[    0.363496] DMAR-IR: IOAPIC id 1 under DRHD base  0xfed65000 IOMMU 1
[    0.363497] DMAR-IR: HPET id 0 under DRHD base 0xfed65000
[    0.363499] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.365430] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.920596] DMAR: No ATSR found
[    0.920667] DMAR: dmar0: Using Queued invalidation
[    0.920672] DMAR: dmar1: Using Queued invalidation
[    0.920827] DMAR: Hardware identity mapping for device 0000:00:00.0
[    0.920830] DMAR: Hardware identity mapping for device 0000:00:00.1
[    0.920838] DMAR: Hardware identity mapping for device 0000:00:02.0
[    0.920841] DMAR: Hardware identity mapping for device 0000:00:0e.0
[    0.920843] DMAR: Hardware identity mapping for device 0000:00:0f.0
[    0.920846] DMAR: Hardware identity mapping for device 0000:00:12.0
[    0.920848] DMAR: Hardware identity mapping for device 0000:00:13.0
[    0.920850] DMAR: Hardware identity mapping for device 0000:00:13.1
[    0.920852] DMAR: Hardware identity mapping for device 0000:00:13.2
[    0.920854] DMAR: Hardware identity mapping for device 0000:00:13.3
[    0.920856] DMAR: Hardware identity mapping for device 0000:00:15.0
[    0.920859] DMAR: Hardware identity mapping for device 0000:00:1f.0
[    0.920861] DMAR: Hardware identity mapping for device 0000:00:1f.1
[    0.920865] DMAR: Hardware identity mapping for device 0000:01:00.0
[    0.920868] DMAR: Hardware identity mapping for device 0000:03:00.0
[    0.920872] DMAR: Hardware identity mapping for device 0000:04:00.0
[    0.920873] DMAR: Setting RMRR:
[    0.920876] DMAR: Ignoring identity map for HW passthrough device 0000:00:02.0 [0x5f800000 - 0x7fffffff]
[    0.920877] DMAR: Ignoring identity map for HW passthrough device 0000:00:15.0 [0x5d5d8000 - 0x5d5f7fff]
[    0.920880] DMAR: Prepare 0-16MiB unity mapping for LPC
[    0.920881] DMAR: Ignoring identity map for HW passthrough device 0000:00:1f.0 [0x0 - 0xffffff]
[    0.920928] DMAR: Intel(R) Virtualization Technology for Directed I/O
[    0.921037] iommu: Adding device 0000:00:00.0 to group 0
[    0.921049] iommu: Adding device 0000:00:00.1 to group 0
[    0.921064] iommu: Adding device 0000:00:02.0 to group 1
[    0.921077] iommu: Adding device 0000:00:0e.0 to group 2
[    0.921102] iommu: Adding device 0000:00:0f.0 to group 3
[    0.921116] iommu: Adding device 0000:00:12.0 to group 4
[    0.921135] iommu: Adding device 0000:00:13.0 to group 5
[    0.921156] iommu: Adding device 0000:00:13.1 to group 6
[    0.921173] iommu: Adding device 0000:00:13.2 to group 7
[    0.921192] iommu: Adding device 0000:00:13.3 to group 8
[    0.921209] iommu: Adding device 0000:00:15.0 to group 9
[    0.921234] iommu: Adding device 0000:00:1f.0 to group 10
[    0.921247] iommu: Adding device 0000:00:1f.1 to group 10
[    0.921264] iommu: Adding device 0000:01:00.0 to group 11
[    0.921282] iommu: Adding device 0000:03:00.0 to group 12
[    0.921302] iommu: Adding device 0000:04:00.0 to group 13
[    7.600722] VFIO - User Level meta-driver version: 0.3
[    7.777799] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[    7.795536] vfio_pci: add [8086:3184[ffffffff:ffffffff]] class 0x000000/00000000
[   11.695507] DMAR: 32bit 0000:01:00.0 uses non-identity mapping
[  497.554248] DMAR: DRHD: handling fault status reg 2
[  497.554313] DMAR: [DMA Write] Request device [00:02.0] fault addr 0 [fault reason 02] Present bit in context entry is clear
[  498.043526] vfio_ecap_init: 0000:00:02.0 hiding ecap 0x1b@0x100
[  498.045564] vfio-pci 0000:00:02.0: BAR 2: can't reserve [mem 0x90000000-0x9fffffff 64bit pref]
[  498.936803] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x3df4
[  508.064011] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
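For reference, the "BAR 2: can't reserve" message above usually means something on the host (typically the EFI framebuffer) still claims that memory range; a generic way to check who currently owns it, using the 0x90000000 base from the log — an illustrative check, not from the original post:
Code:
# show the owner of the address range around the IGD's BAR 2
grep -A 2 "90000000" /proc/iomem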
 
What happens if you remove the "args: -device vfio-pci,host=00:02.0,addr=0x18,x-igd-opregion=on" line from the VM config and simply pass the GPU through using the web GUI (which results in the line hostpci0: 00:02.0 being added to the config)?

This is the way I do it for a VM with IGP passthrough, but I only use the passthrough for Intel QuickSync, and the VM is headless...
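For reference, the same change can be made from the shell with qm; a small sketch assuming VM ID 100:
Code:
# attach the IGD as hostpci0, equivalent to adding it via the web GUI
qm set 100 -hostpci0 00:02.0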
 
I deleted the line:
Code:
args: -device vfio-pci,host=00:02.0,addr=0x18,x-igd-opregion=on
and added instead:
Code:
hostpci0: 00:02

The VM configuration is as follows:
Code:
agent: 1
balloon: 0
bios: ovmf
boot: dc
bootdisk: sata0
cores: 2
cpu: host
efidisk0: local:100/vm-100-disk-1.qcow2,size=128K
hostpci0: 00:02
machine: q35
memory: 2048
name: Win10
net0: virtio=DA:52:A4:8B:C5:6F,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
sata0: local:iso/windows.iso,media=cdrom,size=4233388K
sata1: local:100/vm-100-disk-0.qcow2,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=a02656db-e97c-4c2c-a4ff-955328ad71fc
sockets: 1
vga: none
vmgenid: 26990a9f-5645-47c4-99f6-1fa87481c708

When the VM starts, the monitor connected via HDMI loses its signal.

When the VM starts, the following is written to the log:
Code:
[ 2214.585091] DMAR: DRHD: handling fault status reg 2
[ 2214.585101] DMAR: [DMA Write] Request device [00:02.0] fault addr 0 [fault reason 02] Present bit in context entry is clear
[ 2215.074480] vfio_ecap_init: 0000:00:02.0 hiding ecap 0x1b@0x100
[ 2215.853396] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xd794

When the VM stops, the following is written to the log:
Code:
[ 2469.367179] DMAR: DRHD: handling fault status reg 2
[ 2469.367189] DMAR: [DMA Write] Request device [00:02.0] fault addr 0 [fault reason 02] Present bit in context entry is clear
 
Hello,

I am also trying to pass Intel HD Graphics 4600 through to a VM, following all the manuals I could find, but without success: I am not able to make it work on Windows 10 without Windows giving me a blue error screen when installing the driver. The only way I was able to do it was on Windows 8.1, and it was not very stable, to be honest.

Were you able to achieve it yourself?

The only way I was able to do it on Windows 10 was with unRAID.

Kind regards
 
Can you please share some details about how you made the QuickSync passthrough work? PVE version, Intel graphics version, etc.?

Kind regards
 
I'm with Bestbeast; any chance you can write out how you made the QuickSync passthrough work? I've been trying to pass my Intel QuickSync through to my VM as well.
Any help would be much appreciated.
 
Hello.

I finally got an image on the screen when starting the virtual machine. For IGD passthrough, a vBIOS ROM is required.

Extract the j5005_vbios.rom file from the archive attached to this post and copy it to /usr/share/kvm/ (the romfile= name in the args line must match the filename you copied).
In the file /etc/pve/qemu-server/100.conf, write the following:
Code:
args: -device vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1,romfile=intel_uhd_605_vbios.rom
vga: none
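For reference, a valid PCI option ROM starts with the signature bytes 55 aa — the "Invalid PCI ROM header signature: expecting 0xaa55" errors earlier in the thread show QEMU rejecting a ROM without them. A generic sanity check on the copied file (adjust the filename to whichever ROM you copied); not part of the original post:
Code:
# the first two bytes of a valid option ROM must be 55 aa
xxd -l 2 /usr/share/kvm/j5005_vbios.rom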
 

Hello,

I tried that, but it is not working for me. The VM starts as normal, but the iGPU returns Code 43, so I guess it is not working.
Though I do not have Intel UHD 605 graphics; mine is the HD 4600. Do you know where I can obtain a vBIOS for it?

Kind regards
 
Bestbeast,

1. Please write your mainboard model.
2. Please write the result of the command:
Code:
lspci -nn | grep "VGA"
 
Hello,

@Sferg my motherboard is an Asus Z97-K and here is the output of lspci:

Code:
root@proxmox:~# lspci -nn | grep "VGA"
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06)

Kind regards
 
Bestbeast, please make sure the following lines are in the file /etc/pve/qemu-server/<VMID>.conf:
Code:
args: -device vfio-pci,host=00:02.0,addr=0x02,romfile=vBIOS_HD4600.rom
bios: seabios
vga: none

Note: with OVMF (UEFI), this vBIOS ROM does not work!
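For reference, the BIOS type can also be switched from the shell; a small sketch assuming VM ID 100:
Code:
# switch the VM firmware from OVMF to SeaBIOS
qm set 100 -bios seabios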
 
Ah okay, nice to know, as I was trying with UEFI. xD
Will let you know once I try it.

Out of curiosity, where did you find the ROM for my graphics? I tried searching the internet and dumping it with GPU-Z, but no luck. :(
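For reference, a generic way to try dumping a PCI option ROM under Linux is via sysfs. This is a general technique, not necessarily how the ROM in this thread was obtained, and it can fail for IGD (which would be consistent with the "Invalid PCI ROM header signature" messages above):
Code:
# run as root, with the device not bound to vfio-pci
echo 1 > /sys/bus/pci/devices/0000:00:02.0/rom
cat /sys/bus/pci/devices/0000:00:02.0/rom > /tmp/vbios_dump.rom
echo 0 > /sys/bus/pci/devices/0000:00:02.0/rom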
 
Hello @Sferg

VM config:

Code:
args: -device vfio-pci,host=00:02.0,addr=0x02,romfile=vbios_hd4600.rom
bios: seabios
bootdisk: sata0
cores: 4
ide2: local:iso/Windows.iso,media=cdrom
memory: 5000
name: windows2
net0: e1000=22:53:D0:30:BF:88,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
sata0: local-lvm:vm-201-disk-0,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=4a7e6b81-2f5b-4ef8-8700-273e0e53d2a6
sockets: 1
vga: none
vmgenid: e0354641-3849-47e9-be8c-ca98eaea2593

And I get this output when starting the VM:

Code:
kvm: -device vfio-pci,host=00:02.0,addr=0x02,romfile=vbios_hd4600.rom: IGD device 0000:00:02.0 cannot support legacy mode due to existing devices at address 1f.0
TASK OK

Though I cannot see any output on my screen.

Kind regards
 
Bestbeast said:
And I get this output when starting the VM:

kvm: -device vfio-pci,host=00:02.0,addr=0x02,romfile=vbios_hd4600.rom: IGD device 0000:00:02.0 cannot support legacy mode due to existing devices at address 1f.0
TASK OK

Try adding this line to your VM configuration file:
Code:
machine: q35

Try changing the args line to:
Code:
args: -device vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1,x-igd-opregion=on,romfile=vbios_hd4600.rom

Try changing the line in the file /etc/default/grub:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream video=efifb:off"
Then run update-grub and reboot.
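For reference, the first two suggestions can also be applied from the shell; a sketch assuming VM ID 201 (taken from the vm-201-disk-0 line in the config above):
Code:
# set the q35 machine type
qm set 201 -machine q35
# the args line still has to be edited directly in /etc/pve/qemu-server/201.conf,
# then check the result:
qm config 201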
 
I think my sound controller is in the same IOMMU group. Do I need to pass it through as well for this to work correctly?
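For reference, a common shell snippet (not from this thread) to see which devices share each IOMMU group:
Code:
#!/bin/bash
# print every PCI device together with its IOMMU group number
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    echo "IOMMU group $g: $(lspci -nns "${d##*/}")"
done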
 
