GPU Passthrough with Intel HD 620

Ekinox

Hi All,

My goal:
Having Kodi up and running on my Proxmox server (HDMI output, HDMI sound, and hardware acceleration). I need to be able to pass through the GPU & audio. The target is to have it running on Debian 10 (LibreELEC would be best, but I was not able to make it run and have stopped investigating it).

My Hardware:
Intel i5-7200U with Intel HD Graphics 620 integrated GPU on a Hystou mini computer.
BIOS: American Megatrends / VT-d enabled / VMX enabled / AES enabled
Note: According to the test in the Proxmox guide, this GPU IS NOT UEFI compatible.
I first tried my hardware with Windows 10 (bare metal) and Kodi. Everything ran well, with hardware acceleration working (4K 30 fps video rendering with low CPU usage).

What i've done:
I've installed Proxmox (bare metal), followed the official guide (https://pve.proxmox.com/wiki/Pci_passthrough), and tried many other solutions proposed around the net.
I've tried 3 guest OSes (Ubuntu, Windows 10, Debian 10) and (I think) all the possible options in Proxmox VMs (UEFI, SeaBIOS, q35 or not, etc.).
I've spent a lot of hours investigating, previously thinking the issue was on the guest side, with no result.

proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-5
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-22
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

cat /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off pcie_acs_override=downstream"

Note: "video=efifb:eek:ff" is here to avoid this error: "drm:gen8_de_irq_handler [i915]] *ERROR* Fault errors on pipe A: 0x00000080"

cat /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
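Likewise, the module list only takes effect once the initramfs is rebuilt and the host rebooted; roughly:
Bash:
update-initramfs -u -k all     # rebuild the initramfs so the vfio modules load early
reboot
lsmod | grep vfio              # verify the modules are actually loaded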

IOMMU Group 0:
-e 00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers [8086:5904] (rev 02)
IOMMU Group 1:
-e 00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 620 [8086:5916] (rev 02)
IOMMU Group 10:
-e 01:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 07)
IOMMU Group 11:
-e 02:00.0 Network controller [0280]: Broadcom Limited BCM43224 802.11a/b/g/n [14e4:4353] (rev 01)
IOMMU Group 2:
-e 00:08.0 System peripheral [0880]: Intel Corporation Skylake Gaussian Mixture Model [8086:1911]
IOMMU Group 3:
-e 00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller [8086:9d2f] (rev 21)
-e 00:14.2 Signal processing controller [1180]: Intel Corporation Sunrise Point-LP Thermal subsystem [8086:9d31] (rev 21)
IOMMU Group 4:
-e 00:16.0 Communication controller [0780]: Intel Corporation Sunrise Point-LP CSME HECI #1 [8086:9d3a] (rev 21)
IOMMU Group 5:
-e 00:17.0 SATA controller [0106]: Intel Corporation Sunrise Point-LP SATA Controller [AHCI mode] [8086:9d03] (rev 21)
IOMMU Group 6:
-e 00:1c.0 PCI bridge [0604]: Intel Corporation Sunrise Point-LP PCI Express Root Port [8086:9d12] (rev f1)
IOMMU Group 7:
-e 00:1c.3 PCI bridge [0604]: Intel Corporation Device [8086:9d13] (rev f1)
IOMMU Group 8:
-e 00:1e.0 Signal processing controller [1180]: Intel Corporation Sunrise Point-LP Serial IO UART Controller #0 [8086:9d27] (rev 21)
-e 00:1e.4 SD Host controller [0805]: Intel Corporation Device [8086:9d2b] (rev 21)
-e 00:1e.6 SD Host controller [0805]: Intel Corporation Sunrise Point-LP Secure Digital IO Controller [8086:9d2d] (rev 21)
IOMMU Group 9:
-e 00:1f.0 ISA bridge [0601]: Intel Corporation Sunrise Point-LP LPC Controller [8086:9d58] (rev 21)
-e 00:1f.2 Memory controller [0580]: Intel Corporation Sunrise Point-LP PMC [8086:9d21] (rev 21)
-e 00:1f.3 Audio device [0403]: Intel Corporation Sunrise Point-LP HD Audio [8086:9d71] (rev 21)
-e 00:1f.4 SMBus [0c05]: Intel Corporation Sunrise Point-LP SMBus [8086:9d23] (rev 21)
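For anyone reproducing this: the grouping above comes from the usual sysfs loop, something like the sketch below (the stray "-e" in front of each device line is just an echo -e printed literally by the shell that was used):
Bash:
#!/bin/bash
# list every PCI device per IOMMU group
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done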

lspci -n -s 00:02.0
00:02.0 0300: 8086:5916 (rev 02)
lspci -n -s 00:1f.0
00:1f.0 0601: 8086:9d58 (rev 21)
lspci -n -s 00:1f.2
00:1f.2 0580: 8086:9d21 (rev 21)
lspci -n -s 00:1f.3
00:1f.3 0403: 8086:9d71 (rev 21)
lspci -n -s 00:1f.4
00:1f.4 0c05: 8086:9d23 (rev 21)

cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=8086:5916,8086:9d58,8086:9d21,8086:9d71,8086:9d23
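Whether vfio-pci actually claims these devices after a reboot can be checked per device with lspci, e.g.:
Bash:
lspci -nnk -s 00:02.0
# the "Kernel driver in use:" line should read vfio-pci rather than i915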

cat /etc/modprobe.d/blacklist.conf
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
blacklist snd_soc_skl
blacklist i2c_i801

The graphics card IS NOT UEFI compatible (Type 0, not Type 3):
root@EkiHystou:~/rom-parser# ./rom-parser /tmp/image.rom
Valid ROM signature found @0h, PCIR offset 40h
PCIR: type 0 (x86 PC-AT), vendor: 8086, device: 0406, class: 030000
PCIR: revision 3, vendor revision: 0
Last image
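For reference, the wiki's way of dumping the ROM image for rom-parser is roughly the following (here for the IGD at 00:02.0):
Bash:
cd /sys/bus/pci/devices/0000:00:02.0/
echo 1 > rom               # make the option ROM readable
cat rom > /tmp/image.rom
echo 0 > rom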

What I've observed:
- With BIOS OVMF (UEFI):
  - Ubuntu 18.04:
    - Kernel 5.3:
      - GPU & audio devices are seen with lspci
      - dmesg reports 1 error: "i915 0000:02:00.0: Failed to initialize GPU, declaring it wedged!"
      - The driver used is vmware and the graphics card is always seen as llvmpipe in Ubuntu
      - No error in Xorg, even if I force the Intel driver, DRI acceleration is enabled, etc.
      - HDMI output works, HDMI sound works, BUT the Intel card is not recognized and therefore no hardware acceleration (80% CPU for an HD movie...)
    - Kernel 5.5:
      - dmesg reports 1 error: "Direct firmware load for sse with error -2". Very little documentation about it; I was not able to solve it
  - Debian 10:
    - Kernel 4.19:
      - dmesg reports 1 fatal error: "[drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout"
      - Needs a kernel upgrade
    - Kernel 5.3:
      - dmesg reports 1 error: "i915 0000:02:00.0: Failed to initialize GPU, declaring it wedged!"
      - The driver used is intel but DRI acceleration cannot be enabled. The graphics card is seen in Kodi as llvmpipe
      - HDMI output works, HDMI sound works, BUT the Intel card is not recognized and therefore no hardware acceleration
  - Windows 10:
    - Intel HD 620 is recognized; HD audio also; latest drivers installed; no issue at all in the Device Manager (everything activated and running well)
    - No issue in the Windows event log
    - HDMI cannot be activated. Using Kodi, the Intel card is recognized, acceleration is there, but HDMI is disabled...
- With BIOS SeaBIOS:
  - Ubuntu 18.04:
    - When I enable GPU passthrough, the VM starts but does not boot correctly (no ping, no SSH)
    - I can't find anything in the logs ("journalctl -o short-precise -k -b -1" gives no info on the previous boot), nothing in kern.log or syslog
  - Debian 10:
    - Same situation
  - Windows 10:
    - With a standard (emulated) GPU plus the passed-through GPU, I see the 2 cards in Windows, install the Intel drivers, the Intel 620 is recognized, no issue in the Device Manager
    - If I remove the standard GPU from the VM hardware and leave only the passed-through GPU, Windows does not boot, no ping, etc.


My guess is that I have an issue with my GPU passthrough in Proxmox.
My host parameters are listed above; I can give the guest parameters if needed.
It's strange that I get better results with a UEFI BIOS while my GPU does not seem to be UEFI compatible.
Can you help me here or give me advice so I can continue to test and find a solution?
Thanks in advance for your time.
 
please also post your guest config (qm config ID)
 
agent: 1
balloon: 2048
bios: ovmf
bootdisk: scsi0
cores: 1
efidisk0: local-lvm:vm-102-disk-1,size=4M
hostpci0: 00:02,pcie=1,x-vga=1
hostpci1: 00:1f.3,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 4096
name: hystUbuKodi
net0: virtio=FE:A0:0C:20:09:05,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
parent: Works
scsi0: ProxmoxOnNAS:102/vm-102-disk-0.qcow2,size=32G
scsihw: virtio-scsi-pci
shares: 2048
smbios1: uuid=bf31ad7a-c708-4e4b-9682-df61f0127fcc
sockets: 4
usb0: host=045e:0730,usb3=1
usb1: host=046d:c00e,usb3=1
vga: none
vmgenid: 6e498f1e-3823-44d1-aa10-b5478ea4f5ff

agent: 1
bootdisk: scsi0
cores: 4
hostpci0: 00:02,pcie=1,x-vga=1
hostpci1: 00:1f.3,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 4096
name: hystUbuBiosTest
net0: virtio=DA:31:6D:8B:5A:04,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
parent: VNC_SSH_OK
scsi0: ProxmoxOnNAS:107/vm-107-disk-0.qcow2,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=ab44dc05-b2c1-4cbd-a7ed-5e7a1f5c534d
sockets: 1
vga: none
vmgenid: debfcdf2-1a1f-4572-a9e2-abdd28961f6e

agent: 1
bios: ovmf
boot: cdn
bootdisk: scsi0
cores: 4
efidisk0: local-lvm:vm-103-disk-1,size=4M
hostpci0: 00:1f.3,pcie=1
hostpci1: 00:02,pcie=1,x-vga=1
machine: q35
memory: 4096
name: hystDebKodi
net0: virtio=36:40:3B:0A:AA:FD,bridge=vmbr0
numa: 0
ostype: l26
parent: KokiOK_HDMIOK
scsi0: local-lvm:vm-103-disk-0,size=16G
scsi2: none,media=cdrom
scsihw: virtio-scsi-pci
smbios1: uuid=d9f9b408-5429-4524-94cf-8cd4c5bca951
sockets: 1
vga: none
vmgenid: 926b6b5f-a7b5-4735-a6bd-28e99ed602f6

agent: 1
bios: ovmf
bootdisk: scsi0
cores: 4
efidisk0: local-lvm:vm-104-disk-1,size=4M
hostpci0: 00:02,pcie=1,x-vga=1
hostpci1: 00:1f.3,pcie=1
ide0: ProxmoxOnNAS:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
ide2: ProxmoxOnNAS:iso/Arium_LTSC3.1_1906.iso,media=cdrom
machine: q35
memory: 4096
name: hystWinKodi
net0: virtio=02:9C:E5:6C:0B:20,bridge=vmbr0
numa: 0
ostype: win10
parent: RDP_OK
scsi0: local-lvm:vm-104-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=8bf886e7-942f-4629-82e6-61443bcaa829
sockets: 1
vga: none
vmgenid: 98179651-b647-42cf-8855-d8e743a501a4

agent: 1
bootdisk: scsi0
cores: 4
hostpci0: 00:1f.3
hostpci1: 00:02,x-vga=1
ide0: ProxmoxOnNAS:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
ide2: ProxmoxOnNAS:iso/Arium_LTSC3.1_1906.iso,media=cdrom
machine: q35
memory: 4096
name: hystWinBiosTest
net0: virtio=1A:6C:3E:90:C0:01,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
parent: VNC_RDP_Chrome_OK
scsi0: ProxmoxOnNAS:106/vm-106-disk-0.qcow2,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=aa36b80f-d210-43c3-b3c1-cb67d4053bd6
sockets: 1
vga: none
vmgenid: 5ad74a4e-d315-48d5-8844-6023391690c9
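For reference, the passthrough-related lines in these configs can also be set from the CLI instead of editing the files by hand; for the first VM (ID 102) that would be something like:
Bash:
qm set 102 -machine q35 -bios ovmf -vga none
qm set 102 -hostpci0 00:02,pcie=1,x-vga=1
qm set 102 -hostpci1 00:1f.3,pcie=1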
 
mhmm... anything in the logs of the host (dmesg/journalctl/syslog)?
 
Not many issues during the Proxmox boot:

root@EkiHystou:~# dmesg | grep error
[ 3.553141] EXT4-fs (dm-1): re-mounted. Opts: errors=remount-ro
[ 3.787035] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ 3.812611] b43: probe of bcma0:1 failed with error -524

root@EkiHystou:~# dmesg | grep failed
[ 1.252921] sdhci-pci 0000:00:1e.6: failed to setup card detect gpio
[ 3.787035] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ 3.787038] cfg80211: failed to load regulatory.db
[ 3.812611] b43: probe of bcma0:1 failed with error -524

root@EkiHystou:~# dmesg | grep intel
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.3.18-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on video=efifb:off
[ 0.056785] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.3.18-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on video=efifb:off
[ 0.980530] intel_idle: MWAIT substates: 0x11142120
[ 0.980530] intel_idle: v0.4.1 model 0x8E
[ 0.980755] intel_idle: lapic_timer_reliable_states 0xffffffff
[ 1.034476] intel_pstate: Intel P-state driver initializing
[ 1.034779] intel_pstate: HWP enabled
[ 1.034841] intel_pmc_core INT33A1:00: initialized
[ 1.207261] intel-lpss 0000:00:1e.0: enabling device (0000 -> 0002)
[ 3.207291] Btrfs loaded, crc32c=crc32c-intel
[ 4.999458] intel_rapl_common: Found RAPL domain package
[ 4.999459] intel_rapl_common: Found RAPL domain core
[ 4.999460] intel_rapl_common: Found RAPL domain uncore
[ 4.999461] intel_rapl_common: Found RAPL domain dram

root@EkiHystou:~# dmesg | grep 00:02.0
[ 0.277803] pci 0000:00:02.0: [8086:5916] type 00 class 0x030000
[ 0.277816] pci 0000:00:02.0: reg 0x10: [mem 0xde000000-0xdeffffff 64bit]
[ 0.277822] pci 0000:00:02.0: reg 0x18: [mem 0xc0000000-0xcfffffff 64bit pref]
[ 0.277827] pci 0000:00:02.0: reg 0x20: [io 0xf000-0xf03f]
[ 0.283788] pci 0000:02:00.0: [14e4:4353] type 00 class 0x028000
[ 0.283825] pci 0000:02:00.0: reg 0x10: [mem 0xdf000000-0xdf003fff 64bit]
[ 0.283884] pci 0000:02:00.0: enabling Extended Tags
[ 0.283968] pci 0000:02:00.0: supports D1 D2
[ 0.283969] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[ 0.288432] pci 0000:00:02.0: vgaarb: setting as boot VGA device
[ 0.288432] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[ 0.288432] pci 0000:00:02.0: vgaarb: bridge control possible
[ 0.339826] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[ 0.967107] pci 0000:00:02.0: Adding to iommu group 1
[ 0.969255] pci 0000:02:00.0: Adding to iommu group 11
[ 1.210019] bcma-pci-bridge 0000:02:00.0: bus0: Found chip with id 43224, rev 0x01 and package 0x0A
[ 1.210043] bcma-pci-bridge 0000:02:00.0: bus0: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x22, class 0x0)
[ 1.210063] bcma-pci-bridge 0000:02:00.0: bus0: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x17, class 0x0)
[ 1.210101] bcma-pci-bridge 0000:02:00.0: bus0: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x0F, class 0x0)
[ 1.260559] bcma-pci-bridge 0000:02:00.0: bus0: Bus registered
[ 3.558640] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem

During VM boot (BIOS OVMF) there are errors linked to my GPU:

[ 2285.258607] vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
[ 2298.550349] DMAR: DRHD: handling fault status reg 3
[ 2298.550455] DMAR: [DMA Read] Request device [00:02.0] fault addr 8e002000 [fault reason 05] PTE Write access is not set
[ 2298.550591] DMAR: DRHD: handling fault status reg 3
[ 2298.550682] DMAR: [DMA Read] Request device [00:02.0] fault addr 8e00e000 [fault reason 05] PTE Write access is not set

During VM boot (BIOS SeaBIOS), nothing really clear...

Enclosed are the dmesg & syslog. For info, here are the times when the different VMs were started:
08:12: Proxmox boot
08:39: VM Ubuntu (OVMF)
08:39: VM Debian (OVMF)
08:59: VM Win10 (OVMF)
09:35: VM Ubuntu (SeaBios)
09:46: VM Win10 (SeaBios)
 

Attachments

  • dmesg.txt
  • syslog.txt
you have video=efifb:off on your kernel cmdline, but do you actually boot with EFI?
Not sure I understand how to do that? On the host? What should I do? A GRUB modification on the host? Tell my computer to boot with UEFI (today that's not the case, I'm booting from the AMI BIOS with standard settings)? Thanks for your help.
 
mhmm, the only thing I found that looks relevant was this: https://bugs.freedesktop.org/show_bug.cgi?id=92905
I'm not able to improve the situation with the proposed solution ("intel_iommu=on,igfx_off"):
  • If applied on the host, the GPU is no longer available in an IOMMU group, so passthrough fails at VM boot ("TASK ERROR: Cannot open iommu_group: No such file or directory").
  • If applied in the guest, no error, but the error ("[DMA Read] Request device [00:02.0] fault addr 8e002000") is still there in the host syslog.
 
Not sure I understand how to do that? On the host?
yes, I mean that you turned off the GPU on the host for EFI boot, but how do you actually boot the host? That would be a configuration in the host BIOS/EFI (most of the time this is called CSM or legacy boot)
 
how do you actually boot the host?
Nothing special. My motherboard has an AMI BIOS and I haven't changed any settings on it (except enabling virtualization). I haven't changed anything regarding UEFI, and there's nothing in the host GRUB regarding UEFI either. I'm not familiar at all with UEFI...
 
My BIOS is quite recent (American Megatrends v5.12, 2019, UEFI 2.6). I've set it to boot with UEFI.
IMG_1112.JPG
The result is that I have the same error in the host Proxmox syslog when I boot:

Apr 8 12:16:35 EkiHystou kernel: [ 26.151255] DMAR: [DMA Read] Request device [00:02.0] fault addr 213339000 [fault reason 06] PTE Read access is not set
Apr 8 12:16:35 EkiHystou kernel: [ 26.184565] DMAR: DRHD: handling fault status reg 3
Apr 8 12:16:35 EkiHystou kernel: [ 26.184572] DMAR: [DMA Read] Request device [00:02.0] fault addr 213339000 [fault reason 06] PTE Read access is not set
Apr 8 12:16:35 EkiHystou kernel: [ 26.217916] DMAR: DRHD: handling fault status reg 3
Apr 8 12:16:35 EkiHystou kernel: [ 26.217922] DMAR: [DMA Read] Request device [00:02.0] fault addr 213339000 [fault reason 06] PTE Read access is not set
Apr 8 12:16:35 EkiHystou kernel: [ 26.251328] DMAR: DRHD: handling fault status reg 3
Apr 8 12:16:40 EkiHystou kernel: [ 31.184579] dmar_fault: 443 callbacks suppressed
Apr 8 12:16:40 EkiHystou kernel: [ 31.184580] DMAR: DRHD: handling fault status reg 3
Note: This error appears on a "fresh" boot (the computer stopped, then switched on). If I do a "reboot", this error does not appear in syslog.
Note: Before (without UEFI boot), this error showed up in the host log ONLY when I was booting a VM (with UEFI).

Reminder: According to the test proposed in the Proxmox PCI Passthrough documentation, my GPU IS NOT UEFI compatible (Type 0, not Type 3):
Bash:
root# ./rom-parser /tmp/image.rom
Valid ROM signature found @0h, PCIR offset 40h
PCIR: type 0 (x86 PC-AT), vendor: 8086, device: 0406, class: 030000
PCIR: revision 3, vendor revision: 0
Last image
 
Last edited:
Still investigating, here is some news. I've found an option in my BIOS (Compatibility Support Module Configuration) related to ROM execution:
IMG_1118.JPG

Storage was set to UEFI / Video was set to Legacy / Other PCI was set to UEFI. I've tried switching Video from Legacy to UEFI. Here are the results:
- On the OVMF (UEFI) VMs, nothing new; same errors; no change
- On the SeaBIOS VMs (which did not even boot before, without any log), there are changes:
  • They boot; both of them (Ubuntu & Win10),
  • I have a new error on the host:
vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff
vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
  • I have the same issues as in the OVMF VMs:
    • In the host syslog: [DMA Read] Request device [00:02.0] fault addr 8e002000
    • In the guest dmesg: i915 0000:02:00.0: Failed to initialize GPU, declaring it wedged
On the one hand there is improvement: I can boot a SeaBIOS VM with GPU passthrough.
On the other hand, I have introduced a new error and face the same issues as with the UEFI VMs...
I don't know what to make of it... I hope it gives you more inputs.
 
mhmmm... this all sounds a lot like the hardware is not really suited for passthrough...
maybe there is a BIOS update available that helps?

at this point there is not much configuration left to try (in Proxmox)

edit:
also from your screenshot: I would try to disable CSM
 
also from your screenshot: I would try to disable CSM
Not better. The UEFI (OVMF) VMs have the same error messages & symptoms. For the SeaBIOS VMs, same behaviour as when I set CSM Video to UEFI (the VM boots but with the ROM error message & the same DMAR errors as always).
 
I notice you're using the q35 machine type for the above VMs... I've only ever been able to pass through my Coffee Lake IGD to a headless Ubuntu VM (for Intel QuickSync purposes) using the i440fx machine type. If I use the i440fx machine type, I can use the VM headless and it sees the IGD / QuickSync; if I use the q35 machine type, it always hangs during boot.
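If you want to test that quickly on an existing VM, the machine type can be switched from the CLI while the VM is shut down; something along these lines (VM ID 102 taken from the configs above, and note that the ",pcie=1" flag on the hostpci lines is q35-only and has to be dropped for i440fx):
Bash:
qm set 102 -machine pc        # "pc" selects the i440fx machine type
# or remove the machine line entirely to fall back to the default (i440fx):
qm set 102 -delete machine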
 
Thanks for this information. Using the i440fx machine type, the Intel card is recognized by Ubuntu. Great improvement!
But now I'm facing other issues with the i915 Intel driver, which looks quite unstable.

I've run a lot of tests (BIOS or UEFI, q35 or i440fx, Intel drivers or not, different kernels) and here are the outputs:
- With the host BIOS set to Legacy, I can't boot a SeaBIOS VM with GPU passthrough. Setting the BIOS to UEFI allows the SeaBIOS VM to boot.
- With a q35 machine, the Intel HD 620 is never recognized by Ubuntu 18.04 or Debian 10 (the drivers are there, loaded, no error in Xorg.log, DRI OK); the card stays "llvmpipe" and no acceleration is available.
- With an i440fx machine, the Intel HD 620 is recognized by Ubuntu and Debian; even though the Intel driver is installed and forced with a dedicated xorg.conf, the Intel driver is not loaded (the inxi command shows "vmware" or "" as driver), but rendering is done through the Intel GPU and acceleration is fine. Exactly what I had been looking for for weeks... but... it comes with tons of i915 errors in the guest log:
1) i915 0000:00:10.0: GPU HANG: ecode 9:0:0x00000000, hang on rcs0 (freeze of the computer, except the mouse, for a few seconds)
2) i915 0000:00:10.0: Resetting rcs0 for hang on rcs0 (often leads to a complete freeze of the computer)
3) [i915]] *ERROR* Fault errors on pipe A: 0x00000080 (thousands of lines in the syslog, a logging loop without interruption, but no impact on the screen; I need to exit the program (Kodi for my tests) and stop moving the mouse to stop it)

Best results I have:
Proxmox host booting in UEFI mode (video forced to UEFI with the CSM parameters):
- Ubuntu 18.04 VM, BIOS OVMF, i440fx machine with a 5.6.0 kernel (cod/tip/drm-intel-next/2020-03-14 mainline build from https://kernel.ubuntu.com/~kernel-ppa/mainline/drm-intel-next/2020-03-14/) => Intel HD card identified, HDMI output OK, hardware acceleration OK. Errors 1) & 2) not seen. Error 3) still here. Nothing in the host log.
- Ubuntu 18.04 VM, BIOS OVMF, q35 machine with the default 5.3.0 kernel => llvmpipe card identified, HDMI output OK, no hardware acceleration. Errors 1) 2) 3) not seen. Still the error "[DMA Read] Request device [00:02.0] fault addr babb5000 [fault reason 06] PTE Read access is not set" on the host.

I keep on investigating. The target stays to have a stable setup on Debian 10 (better results with kernel 5.4.0).
 
Have you tried passing the card through as a mediated device (GVT-g)? That's how I'm currently passing my IGD through on Ubuntu 18.04 for headless Intel QuickSync.
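In case you want to try it, my rough GVT-g setup on PVE 6.x looks like the sketch below; the PCI address is the IGD at 00:02.0, while the VM ID and profile name are only examples, so check what your host actually exposes:
Bash:
# host side: add i915.enable_gvt=1 (with intel_iommu=on) to the kernel cmdline,
# add kvmgt and vfio_mdev to /etc/modules, then rebuild the initramfs and reboot
update-initramfs -u -k all && reboot

# list the vGPU profiles the IGD exposes
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

# assign one profile to a VM (example: VM 108, profile i915-GVTg_V5_4)
qm set 108 -hostpci0 00:02.0,mdev=i915-GVTg_V5_4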
 
Sounds promising (no error on the host or in the guest). I can pass through my GPU but, unfortunately, I'm not able to enable the HDMI output.
When I was passing through the GPU with the previous solution, HDMI was activated automatically and Ubuntu or Debian was displayed on the screen (I have only 1 HDMI output on my host); here, with GVT-g, Ubuntu or Debian is accessible via VNC but I can't display it; I still see the output of the Proxmox host on the screen.
Any idea?

cat /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on intel_iommu=igfx_off i915.enable_gvt=1"

cat /sys/module/i915/parameters/enable_gvt
Y

cat /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
kvmgt
vfio_mdev

dmesg | grep -i -e iommu -e i915 -e gvt
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.3.18-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on intel_iommu=igfx_off i915.enable_gvt=1
[ 0.052832] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.3.18-2-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on intel_iommu=igfx_off i915.enable_gvt=1
[ 0.052900] DMAR: IOMMU enabled
[ 0.113609] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.951045] pci 0000:00:00.0: Adding to iommu group 0
[ 0.951084] pci 0000:00:08.0: Adding to iommu group 1
[ 0.951157] pci 0000:00:14.0: Adding to iommu group 2
[ 0.951166] pci 0000:00:14.2: Adding to iommu group 2
[ 0.951205] pci 0000:00:16.0: Adding to iommu group 3
[ 0.951245] pci 0000:00:17.0: Adding to iommu group 4
[ 0.951287] pci 0000:00:1c.0: Adding to iommu group 5
[ 0.951321] pci 0000:00:1c.3: Adding to iommu group 6
[ 0.951371] pci 0000:00:1e.0: Adding to iommu group 7
[ 0.951380] pci 0000:00:1e.4: Adding to iommu group 7
[ 0.951389] pci 0000:00:1e.6: Adding to iommu group 7
[ 0.952992] pci 0000:00:1f.0: Adding to iommu group 8
[ 0.953002] pci 0000:00:1f.2: Adding to iommu group 8
[ 0.953011] pci 0000:00:1f.3: Adding to iommu group 8
[ 0.953020] pci 0000:00:1f.4: Adding to iommu group 8
[ 0.953064] pci 0000:01:00.0: Adding to iommu group 9
[ 0.953105] pci 0000:02:00.0: Adding to iommu group 10
[ 4.154617] i915 0000:00:02.0: vgaarb: deactivate vga console
[ 4.158581] i915 0000:00:02.0: Direct firmware load for i915/gvt/vid_0x8086_did_0x5916_rid_0x02.golden_hw_state failed with error -2
[ 4.178792] i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 4.180130] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[ 4.375142] mei_hdcp 0000:00:16.0-b638ab7e-94e2-4ea2-a552-d1c54b627f04: bound 0000:00:02.0 (ops i915_hdcp_component_ops [i915])
[ 5.305751] [drm] Initialized i915 1.6.0 20190619 for 0000:00:02.0 on minor 0
[ 5.607343] fbcon: i915drmfb (fb0) is primary device
[ 5.658441] i915 0000:00:02.0: fb0: i915drmfb frame buffer device
[ 5.677962] i915 0000:00:02.0: MDEV: Registered

It seems that there is no local display support in GVT-g. So this solution looks like a dead end for me (I need to display the guest on the HDMI output)...

n1nj4888, can you confirm that your guest cannot use a screen?
 
It seems that there is no local display support in GVT-g. So this solution looks like a dead end for me (I need to display the guest on the HDMI output)...

n1nj4888, can you confirm that your guest cannot use a screen?

@Ekinox - As mentioned above, my VM (and the physical PVE host) is headless - I do not use the HDMI for the VM. I only pass through the IGP to use the Intel QuickSync features of the CPU/IGP inside the VM and connect to the VM using console/SSH when I need to...
 
I have similar configs... I can get Win10 to use 3D acceleration, but can't with Linux Mint 20.04...

Any clue?
 
