Proxmox 5.0 Kaby Lake and IGD (graphics) passthrough for Windows 10

YAGA

Hello,

I’m trying to set up a fresh install of Proxmox 5.0 on Kaby Lake, and I would like to configure a VM with IGD (graphics) passthrough for Windows 10, plus several Linux CTs.

The computer is a Zotac ZBOX CI549 based on the i5-7300U: zotac.com/us/product/mini_pcs/ci549-nano.

It should be more or less the same story for the latest Intel NUC generation.

A standard install works properly, and now I would like to use the HDMI output for a VM running Windows 10.

I have read all of this information:
pve.proxmox.com/wiki/Pci_passthrough
forum.proxmox.com/threads/guide-intel-intergrated-graphic-passthrough.30451/
redhat.com/archives/vfio-users/2017-April/msg00032.html

My setup is like this:
- Fresh install of Proxmox 5.0, community edition, with the latest packages
- Boot in legacy mode (in UEFI mode I get errors during the setup, for example no access to the VGA ROM)
- Grub
- vim /etc/default/grub
- change the GRUB_CMDLINE_LINUX_DEFAULT line to GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off,vesafb:off"
- save and quit
- update-grub
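
To check afterwards that the IOMMU really is enabled, something like this should print DMAR/IOMMU initialisation lines once the server has been rebooted (output varies by machine):
Code:
# After rebooting, confirm intel_iommu=on took effect
dmesg | grep -e DMAR -e IOMMU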

- Blacklist module
- vim /etc/modprobe.d/pve-blacklist.conf
- add these lines
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
- save and quit
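
As a quick check after rebooting, none of the blacklisted modules should show up anymore:
Code:
# The blacklisted modules should no longer be loaded after a reboot
lsmod | grep -E 'i915|snd_hda'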

- VFIO
- vim /etc/modules
- add these lines
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
- save and quit
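
Conversely, the VFIO modules should now be loaded after a reboot:
Code:
# The modules listed in /etc/modules should all appear here
lsmod | grep vfio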

- Vga adapter
- lspci -n -s 00:02
- the lspci command displays 00:02.0 0300: 8086:5916 (rev 02)
- vim /etc/modprobe.d/vfio.conf
- add this line
options vfio-pci ids=8086:5916
- save and quit

- Initramfs
- update-initramfs -u
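
Once the initramfs is rebuilt and the server rebooted, the IGD should be claimed by vfio-pci instead of i915; something like this confirms the binding:
Code:
# Check which driver is bound to the IGD
lspci -nnk -s 00:02.0
# expected at the end of the output:
#   Kernel driver in use: vfio-pci
#   Kernel modules: i915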

- Create a VM (id = 100) with a Windows 10 ISO as the primary boot device

- Change the setup for the VM
- vim /etc/pve/qemu-server/100.conf
- add these lines
machine: pc-i440fx-2.2
args: -device vfio-pci,host=00:02.0,addr=0x02
vga: none
- save and quit
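
To double-check the custom args before starting, qm can print the full kvm command line it would run:
Code:
# Print the kvm command line for VM 100 without starting it
qm showcmd 100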

- Reboot the server

- Start VM 100
- No errors; the video output is initialised (the screen clears) just after VM 100 is started, but the screen remains black.
- The dmesg output is below:
[ 227.122914] device tap100i0 entered promiscuous mode
[ 227.129030] vmbr0: port 2(tap100i0) entered blocking state
[ 227.129032] vmbr0: port 2(tap100i0) entered disabled state
[ 227.129104] vmbr0: port 2(tap100i0) entered blocking state
[ 227.129105] vmbr0: port 2(tap100i0) entered forwarding state
[ 228.345459] vfio_ecap_init: 0000:00:02.0 hiding ecap 0x1b@0x100

Any advice is welcome!

Does anyone have a working configuration for IGD (graphics) passthrough on Kaby Lake?

Thanks
 
Did you make sure that the IGD is in its own IOMMU group? If it isn't, you may need to override the ACS as described here. Are you sure Windows installed the drivers for the IGD? I've noticed that Windows only installs the drivers that are absolutely required. During installation the IGD likely wasn't exposed to the VM, and thus the driver wasn't installed. I suggest you install a VNC server on the guest, log in via VNC, and make sure the guest has drivers for the IGD. This also allows you to see any error messages Windows might show.
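
For reference, this one-liner dumps every PCI device with its IOMMU group (plain sysfs, nothing Proxmox-specific); the IGD at 00:02.0 should be the only entry in its group:
Code:
# List all PCI devices together with their IOMMU groups
find /sys/kernel/iommu_groups/ -type l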
 
Hello Philip,

Many thanks for your help,

> Did you make sure that the IGD is in its own IOMMU group?

Yes, it should be: the IGD (00:02.0) is in IOMMU group 1 and I have access to this folder: /sys/kernel/iommu_groups/1/devices/0000:00:02.0
The folder content is:
-r--r--r-- 1 root 4096 Aug 7 20:57 irq
-r--r--r-- 1 root 4096 Aug 7 20:57 label
-r--r--r-- 1 root 4096 Aug 7 20:57 local_cpulist
-r--r--r-- 1 root 4096 Aug 7 20:57 local_cpus
-r--r--r-- 1 root 4096 Aug 7 20:57 modalias
-rw-r--r-- 1 root 4096 Aug 7 20:57 msi_bus
-rw-r--r-- 1 root 4096 Aug 7 20:57 numa_node
drwxr-xr-x 2 root 0 Aug 7 20:57 power
--w--w---- 1 root 4096 Aug 7 20:57 remove
--w--w---- 1 root 4096 Aug 7 20:57 rescan
--w------- 1 root 4096 Aug 7 20:57 reset
-r--r--r-- 1 root 4096 Aug 7 20:57 resource
-rw------- 1 root 16777216 Aug 7 20:57 resource0
-rw------- 1 root 268435456 Aug 7 20:57 resource2
-rw------- 1 root 268435456 Aug 7 20:57 resource2_wc
-rw------- 1 root 64 Aug 7 20:57 resource4
-r--r--r-- 1 root 4096 Aug 7 20:57 revision
-rw------- 1 root 131072 Aug 7 20:57 rom
lrwxrwxrwx 1 root 0 Aug 7 20:35 subsystem -> ../../../bus/pci
-r--r--r-- 1 root 4096 Aug 7 20:57 subsystem_device
-r--r--r-- 1 root 4096 Aug 7 20:57 subsystem_vendor
-rw-r--r-- 1 root 4096 Aug 7 20:35 uevent
-r--r--r-- 1 root 4096 Aug 7 20:35 vendor
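
To be sure group 1 really contains only the IGD, listing the group's devices directory should show a single entry, 0000:00:02.0:
Code:
# All devices sharing IOMMU group 1 with the IGD
ls /sys/kernel/iommu_groups/1/devices/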


> I've noticed that Windows only installs the drivers that are absolutely required. During installation the IGD likely wasn't exposed to the VM, and thus the driver wasn't installed. I suggest you install a VNC server on the guest, log in via VNC, and make sure the guest has drivers for the IGD. This also allows you to see any error messages Windows might show.

That might be a very good point. I haven't installed Windows yet; I only tried to boot from the Windows 10 install CD-ROM (.ISO file). My understanding was that the Windows 10 installer is compatible with most video hardware... maybe not... you may be right. I'll test that with a VNC server.

Meanwhile, I also tested with the Ubuntu 16.04 install CD-ROM (.ISO file) and I got the same black screen, without any errors in the log files :-(

I'll do more tests with VNC to install the IGD driver and I'll keep you informed.

Many thanks,

Kind regards
YAGA
 
Hello Philip,

Thanks, it works!

I used NoMachine to install the IGD driver...

I'm trying to tweak the VM to obtain the best performance, with all the specific devices directly managed by Windows 10.

I'll post a "how-to" because the whole setup is a little bit tricky.

Kind regards,
YAGA
 
I'm new at this and learning, so bear with me if my problem is simple or obvious. I can't get Kaby Lake IGD to work the way I expect when running a VM on Proxmox.

YAGA, I tried your example above and a few variations based on the links you provided. I tried with Proxmox 5.1 on my Kaby Lake E3-1245 v6 and now have a working Windows 10 v1709 VM booting UEFI (OVMF) with the Intel IGD driver installed. However, I still can't get video out to a monitor from any of the physical VGA/DVI/HDMI/DisplayPort sockets on the motherboard. My motherboard is an MSI C236A with 32 GB of ECC RAM.

The VNC recommendation was indeed the only way I could get remote access to the desktop until Intel's IGD driver was installed. Is the whole point of IGD passthrough just to give the VM access to GPU acceleration (while still using remote desktop), or are people getting their Windows VM (or any other OS VM) to output directly to a physical monitor via VGA/DVI/HDMI/DisplayPort?

00:02.0 VGA compatible controller: Intel Corporation HD Graphics P630 (rev 04) (prog-if 00 [VGA controller])
Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7998
Flags: bus master, fast devsel, latency 0, IRQ 340
Memory at db000000 (64-bit, non-prefetchable) [size=16M]
Memory at 50000000 (64-bit, prefetchable) [size=256M]
I/O ports at f000
[virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
Capabilities: [40] Vendor Specific Information: Len=0c <?>
Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
Capabilities: [d0] Power Management version 2
Capabilities: [100] Process Address Space ID (PASID)
Capabilities: [200] Address Translation Service (ATS)
Capabilities: [300] Page Request Interface (PRI)
Kernel driver in use: vfio-pci
Kernel modules: i915

/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off,vesafb:off"

update-grub

/etc/modprobe.d/pve-blacklist.conf
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915

/etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

/etc/modprobe.d/vfio.conf
options vfio-pci ids=8086:591d disable_vga=1

update-initramfs -u

/etc/pve/qemu-server/[vmid].conf
bios: ovmf
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: local-zfs:vm-######-disk-2,size=128K
machine: q35
memory: 8192
name: Win10Pro64
net0: virtio=########,bridge=vmbr0
numa: 0
ostype: win10
parent: Fresh_Install
scsi0: local-zfs:vm-[vmid]-disk-1,cache=writeback,discard=on,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=################################
sockets: 1
vga: none
args: -device vfio-pci,host=00:02.0,addr=0x02

Thank you in advance!!
 
Hi Leif,

In my setup, I'm using the Windows VM to output directly to a physical monitor.

I'll try to help you, but I don't have an MSI C236A.

Did you try with a UEFI or standard BIOS setup? (I get different results between UEFI and standard BIOS setups.)

First, could you please send me the output of this command: lspci -nn

Regards,
YAGA
 
Thanks YAGA,

I started a new thread where I provided more details, including the output from "lspci -nn -s 00:02.0" here: forum.proxmox.com/threads/kaby-lake-igd-passthrough-on-proxmox-5-1-with-win10-vm.38125/

Here is my output from "lspci -nn"
00:00.0 Host bridge [0600]: Intel Corporation Device [8086:5918] (rev 05)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)
00:01.1 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) [8086:1905] (rev 05)
00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics P630 [8086:591d] (rev 04)
00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th Gen Core Processor Gaussian Mixture Model [8086:1911]
00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f] (rev 31)
00:14.2 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Thermal subsystem [8086:a131] (rev 31)
00:15.0 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Serial IO I2C Controller #0 [8086:a160] (rev 31)
00:15.1 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Serial IO I2C Controller #1 [8086:a161] (rev 31)
00:16.0 Communication controller [0780]: Intel Corporation Sunrise Point-H CSME HECI #1 [8086:a13a] (rev 31)
00:17.0 SATA controller [0106]: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] [8086:a102] (rev 31)
00:1c.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #1 [8086:a110] (rev f1)
00:1c.4 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #5 [8086:a114] (rev f1)
00:1e.0 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Serial IO UART #0 [8086:a127] (rev 31)
00:1f.0 ISA bridge [0601]: Intel Corporation Sunrise Point-H LPC Controller [8086:a149] (rev 31)
00:1f.2 Memory controller [0580]: Intel Corporation Sunrise Point-H PMC [8086:a121] (rev 31)
00:1f.3 Audio device [0403]: Intel Corporation Sunrise Point-H HD Audio [8086:a170] (rev 31)
00:1f.4 SMBus [0c05]: Intel Corporation Sunrise Point-H SMBus [8086:a123] (rev 31)
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8] (rev 31)
02:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC) [1000:00ac] (rev 01)
03:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller [1b21:1242]
04:00.0 PCI bridge [0604]: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch [111d:8018] (rev 0e)
05:02.0 PCI bridge [0604]: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch [111d:8018] (rev 0e)
05:04.0 PCI bridge [0604]: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch [111d:8018] (rev 0e)
06:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10e8] (rev 01)
06:00.1 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10e8] (rev 01)
07:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10e8] (rev 01)
07:00.1 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10e8] (rev 01)

The Proxmox 5.1 host is using legacy BIOS; the Windows 10 guest VM is using UEFI (OVMF). Windows starts and has been updated. I used VNC to install the Intel IGD drivers. Windows Device Manager shows the Intel IGD driver as working, but only a "Generic Non-PnP Monitor" is available for output.

I appreciate any advice you can give!
 

Attachment: Win10_Device_Manager.jpg
In the interest of helping out those who haven't got it working yet, I'll necro this topic...

Here's my config for a Windows 10 VM with UPT mode IGD passthrough:

args: -cpu host,+fpu,+vme,+de,+pse,+tsc,+msr,+pae,+mce,+cx8,+apic,+sep,+mtrr,+pge,+mca,+cmov,+pat,+pse36,+clflush,+acpi,+mmx,+fxsr,+sse,+sse2,+ss,+ht,+tm,+pbe,+syscall,+nx,+pdpe1gb,+rdtscp,+lm,+pni,+pclmulqdq,+dtes64,+monitor,+ds_cpl,+vmx,+smx,+est,+tm2,+ssse3,+fma,+cx16,+xtpr,+pdcm,+pcid,+sse4_1,+sse4_2,+x2apic,+movbe,+popcnt,+aes,+xsave,+avx,+f16c,+rdrand,+lahf_lm,+abm,+3dnowprefetch,+fsgsbase,+tsc_adjust,+bmi1,+hle,+avx2,+smep,+bmi2,+erms,+invpcid,+rtm,+mpx,+rdseed,+adx,+smap,+clflushopt,+xsaveopt,+xsavec,+xgetbv1,+xsaves,+arat -smp 4,cores=2,threads=2 -device vfio-pci,host=00:02.0,addr=0x18,x-igd-gms=1,x-igd-opregion=on
boot: cdn
bootdisk: virtio0
cores: 4
cpu: host
machine: q35
memory: 8192
numa: 0
ostype: win10
scsihw: virtio-scsi-pci
sockets: 1
vga: std

I just followed the regular guides and it worked in UPT mode, though not in legacy mode. The hardware underneath is a Core i7-7700 on a Gigabyte H270N-WIFI. I'm currently trying to make this work in a macOS VM.
 
Great guide, thanks for the help. I got this working on a 7th-gen Intel NUC i5. I had to enable RDP in the Windows VM and then install the Intel driver, and also disable the generic monitor driver, and then the screen worked!! Now trying to figure out how to get HDMI audio!
 
Hi there, I'm new to all this and I like Proxmox a lot. I have it working on a NUC i3-6100U with 12 GB of RAM, but I can't enable this. I tried everything in the post and always get one error!


Code:
kvm: -device vmgenid,guid=a1e01acc-4155-4c7c-aaa1-3aff146d36df: vmgenid requires DMA write support in fw_cfg, which this machine type does not provide
TASK ERROR: start failed: QEMU exited with code 1
 
It seems your error has nothing to do with the original problem; please open a new thread for it.
 
