Proxmox 5.2 Gemini Lake and IGD (graphics) passthrough for Ubuntu 18

iceplane

New Member
Sep 15, 2018
Hello,

I’m trying to set up a fresh install of Proxmox 5.2 on Gemini Lake, and I would like to configure a VM with IGD (graphics) passthrough for Ubuntu 18.

The computer is based on an ASRock J4105-ITX: asrock.com/mb/Intel/J4105-ITX/

A standard install is working properly, and now I would like to use the HDMI output for a VM with Ubuntu 18.

I have read all of this information:
pve.proxmox.com/wiki/Pci_passthrough
forum.proxmox.com/threads/guide-intel-intergrated-graphic-passthrough.30451/
redhat.com/archives/vfio-users/2017-April/msg00032.html
forum.proxmox.com/threads/proxmox-5-0-kaby-lake-and-igd-graphics-passthrough-for-windows-10.36165/

My setup is like this:
- Fresh install Proxmox 5.2
- Grub
- vim /etc/default/grub
- change GRUB_CMDLINE_LINUX_DEFAULT line to GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb=off,vesafb=off"
- save and quit
- update-grub
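
Once the host has been rebooted later in the process, it is worth confirming the new kernel parameters actually took effect before debugging anything else. A minimal sketch (the cmdline string below is a sample modelled on this setup; on the real host, check /proc/cmdline instead):

```shell
# Sample kernel command line; on the real host use:  cmdline=$(cat /proc/cmdline)
cmdline="BOOT_IMAGE=/boot/vmlinuz-4.15.18-1-pve root=/dev/mapper/pve-root quiet intel_iommu=on video=efifb=off,vesafb=off"

# The IOMMU only gets enabled if intel_iommu=on made it onto the booted cmdline
case " $cmdline " in
  *" intel_iommu=on "*) echo "intel_iommu=on present" ;;
  *) echo "intel_iommu=on missing - rerun update-grub and reboot" ;;
esac
```

If the parameter is missing even after `update-grub`, a common cause is the host booting via a different bootloader config (e.g. systemd-boot on ZFS installs) than the GRUB file being edited.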

- Blacklist module
- vim /etc/modprobe.d/pve-blacklist.conf
- add these lines
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
- save and quit

- VFIO
- vim /etc/modules
- add these lines
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
- save and quit

- Vga adapter
- lspci -n -s 00:02
- lspci command display 00:02.0 0300: 8086:3185 (rev 03)
- vim /etc/modprobe.d/vfio.conf
- add this line
options vfio-pci ids=8086:3185
- save and quit
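
The `ids=` value is just the vendor:device pair from the lspci output above; a small sketch of deriving it (the lspci line is the sample quoted in this post, not live output):

```shell
# Sample `lspci -n -s 00:02` output from this post; on the host capture it with:
#   lspci_out=$(lspci -n -s 00:02)
lspci_out="00:02.0 0300: 8086:3185 (rev 03)"

# The third whitespace-separated field is the vendor:device ID
dev_id=$(echo "$lspci_out" | awk '{print $3}')
echo "options vfio-pci ids=${dev_id}"
```

After the reboot, `lspci -nnk -s 00:02.0` should report `Kernel driver in use: vfio-pci`; if it still says `i915`, the blacklist or initramfs step did not take effect.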

- Initramfs
- update-initramfs -u

- Create a VM (id = 100) with an Ubuntu 18 ISO as the primary boot device

- Change the setup for the VM
- vim /etc/pve/qemu-server/100.conf
- add these lines
machine: pc-i440fx-2.2
args: -device vfio-pci,host=00:02.0,addr=0x02
vga: none
- save and quit

- Reboot the server

- Start VM 100
- Video output is initialised (the screen clears) just after VM 100 starts, but the screen remains black. The start task log is:
no efidisk configured! Using temporary efivars disk.
kvm: -device vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1,x-igd-opregion=on: IGD device 0000:00:02.0 has no ROM, legacy mode disabled
TASK OK


I tried installing Ubuntu before changing the config, but it didn't help.

What should I do now?
 

psycmos

New Member
Sep 26, 2018
Hi @iceplane
I have an ASRock J4105M and I want to try installing Windows 10 in a VM and using it as a desktop via HDMI.

With your guide I get stuck on these lines after reboot:
...
07:39:58 kernel: MODSIGN: Couldn't get UEFI db list
07:39:58 kernel: Couldn't get size: 0x800000000000000e
....
If I remove "options vfio-pci ids=8086:3185", the Proxmox system boots OK.

I followed your guide, but after rebooting the server does not start anymore.
Is your guide complete?

Do you have graphics passthrough working?
Best Regards
 

collider

New Member
Sep 29, 2018
I'm on an ASRock J5005-ITX and am trying to pass the IGD through to Windows 10. After a similar setup, when I ran the VM, the screen connected to the IGD went black but never showed anything. I didn't see any error in the logs.

Has anyone been able to pass through the IGP on a Gemini Lake chip?
 

derMischka

New Member
Sep 29, 2018
Hi,

I also have an ASRock J4105-ITX, am trying to pass through the iGPU, and get the same black screen. Besides the error
...
kernel: MODSIGN: Couldn't get UEFI db list
kernel: Couldn't get size: 0x800000000000000e
....

I can see the following lines in dmesg:
...
[ 1.869667] [drm] Memory usable by graphics device = 4096M
[ 1.869670] checking generic (90000000 160000) vs hw (800000000 10000000)
[ 1.869671] [drm] Replacing VGA console driver
[ 1.869777] [drm:i915_gem_init_stolen [i915]] *ERROR* conflict detected with stolen region: [0x0000000060000000 - 0x0000000080000000]
[ 1.869830] [drm] RC6 disabled by BIOS
[ 1.884806] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 1.884807] [drm] Driver supports precise vblank timestamp query.
[ 1.890293] i915 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
[ 1.893689] i915 0000:01:00.0: Direct firmware load for i915/glk_dmc_ver1_04.bin failed with error -2
[ 1.893693] i915 0000:01:00.0: Failed to load DMC firmware i915/glk_dmc_ver1_04.bin. Disabling runtime power management.
[ 1.893694] i915 0000:01:00.0: DMC firmware homepage: https://01.org/linuxgraphics/downloads/firmware
[ 1.898260] [drm] RC6 disabled, disabling runtime PM support
[ 1.898650] [drm] Initialized i915 1.6.0 20171023 for 0000:01:00.0 on minor 0
[ 1.898962] [drm] Cannot find any crtc or sizes
...
[ 3.836366] [drm] RC6 off
[ 7.880998] [drm] GPU HANG: ecode 9:0:0x4dd15de1, reason: No progress on rcs0, action: reset
[ 7.881020] i915 0000:01:00.0: Resetting rcs0 after gpu hang
[ 8.816180] random: fast init done
[ 9.096373] [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
[ 9.096449] i915 0000:01:00.0: Resetting chip after gpu hang
[ 9.096546] random: systemd-udevd: uninitialized urandom read (16 bytes read)
[ 9.096559] random: systemd-udevd: uninitialized urandom read (16 bytes read)
[ 9.096582] random: systemd-udevd: uninitialized urandom read (16 bytes read)
...
[ 10.312122] [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
[ 10.312153] [drm:i915_reset [i915]] *ERROR* Failed to reset chip: -5
...

I read in other forums that a second GPU is needed for the Proxmox host. I hope that this is not true; I don't need a GPU for Proxmox, since it has a really good web interface and SSH handles the rest of the configuration.

Michael
 


collider

New Member
Sep 29, 2018
I'm on an ASRock J5005-ITX and have the following setup:

- Fresh install Proxmox 5.2
- Grub
- vim /etc/default/grub
- change GRUB_CMDLINE_LINUX_DEFAULT line to GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb=off,vesafb=off"
- save and quit
- update-grub

- Blacklist module
- vim /etc/modprobe.d/pve-blacklist.conf
- add these lines
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
- save and quit

- VFIO
- vim /etc/modules
- add these lines
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
- save and quit

- Vga adapter
- lspci -n -s 00:02
- lspci command display 00:02.0 0300: 8086:3185 (rev 03)
- vim /etc/modprobe.d/vfio.conf
- add this line
options vfio-pci ids=8086:3185
- save and quit

- Initramfs
- update-initramfs -u

- Create a VM (id = 100) with an Ubuntu 18 ISO as the primary boot device

- Change the setup for the VM
- vim /etc/pve/qemu-server/100.conf
- add these lines
machine: pc-i440fx-2.2
args: -device vfio-pci,host=00:02.0,addr=0x02
vga: none
- save and quit

- Reboot the server

- Start VM 100

dmesg shows the following error:

[ 482.554825] DMAR: DRHD: handling fault status reg 3
[ 482.554836] DMAR: [DMA Write] Request device [00:02.0] fault addr 0 [fault reason 02] Present bit in context entry is clear
[ 483.425246] vfio_ecap_init: 0000:00:02.0 hiding ecap 0x1b@0x100
[ 483.426689] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xe486
[ 484.337918] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xe486

The screen shows nothing. Can anyone help?
 

vobo70

Member
Nov 15, 2017
Hello,
I'm planning to buy an ASRock J5005-ITX motherboard for a home server.
Can anyone test whether a PCIe card (any card; in my case it will be an Intel 2-port GbE NIC) can be passed through to a VM?
I would use pfSense as a router in a Proxmox VM.
Thank you for any answer.
 

psycmos

New Member
Sep 26, 2018
I can't get any signal from the IGP in Proxmox or UNRAID.
If you plan to buy it for graphics passthrough, I think it will not work.

As for passing through a PCIe card, I think you can do that without any problem.
 

n1nj4888

Member
Jan 13, 2019
Hi All,

Has anyone successfully got this working with Proxmox 5.3? I'd like to pass through a Gemini Lake iGP to an Ubuntu VM (for hardware transcoding with Plex), so I'm keen to hear whether anyone has got this working without upgrading kernels etc.

Thanks!
 

comport

New Member
Apr 10, 2019
So are there any new hints for this problem?

I'm also on a J4105, trying to pass through the UHD Graphics 600. The solution probably applies to all Gemini Lake IGDs, i.e. the Pentium Silver and Celeron parts.

Currently, my IGD is shown as a VGA PCI device in the guest system and initializes the screen upon boot, but without any signal:

Failed to initialize GPU, declaring it wedged
 

psycmos

New Member
Sep 26, 2018
comport said:
So are there any new hints for this problem?

I'm also on a J4105, trying to passthrough the UHD Graphics 600. The solution is probably applicable to all Gemini Lake IGD, like Pentium Silver and Celeron.

Currently, my IGD is shown as a VGA PCI device in the guest system and initializes the screen upon boot, but without any signal:

Failed to initialize GPU, declaring it wedged
I bought a GT 710...
 

n1nj4888

Member
Jan 13, 2019
comport said:
So are there any new hints for this problem?

I'm also on a J4105, trying to passthrough the UHD Graphics 600. The solution is probably applicable to all Gemini Lake IGD, like Pentium Silver and Celeron.

Currently, my IGD is shown as a VGA PCI device in the guest system and initializes the screen upon boot, but without any signal:

Failed to initialize GPU, declaring it wedged
Do you need it to be the primary GPU? If not, does it work with “Default” as primary GPU and the IGP as secondary?
 

comport

New Member
Apr 10, 2019
n1nj4888 said:
Do you need it to be the primary GPU? If not, does it work with “Default” as primary GPU and the IGP as secondary?
No, the IGP is not detected as a second screen, although it shows up as a VGA device in lspci (same as OP).

I intend to output hardware accelerated video via HDMI. What kind of setup worked for you?
 

RedBass389

New Member
Apr 15, 2019
I have the ASRock J4105M and am having the same problems. As a dedicated GPU, the screen stays black in both Windows 10 and Ubuntu 18 LTS. If I run it as a second GPU, I get "ERROR conflict detected with stolen region", but the device clearly appears in the Ubuntu VM. We need some workaround to get the Intel UHD 600 to pass through correctly.

I am on the newest Proxmox 5.4-3 and have the latest BIOS from ASRock.
 

n1nj4888

Member
Jan 13, 2019
Hi Guys,

To add to this - I managed to get my NUC7PJYH (Gemini Lake J5005) “working” with GPU Passthrough - At least for headless transcoding (Intel Quicksync usage via the IGP)...

My setup is different to yours in that I’m using Ubuntu 18.10 Desktop (P2V VM on PVE 5.4-3) and using the NUC headless - I Passthrough the GPU and it works for hardware transcoding (Intel QuickSync) once passed through.

This is my second NUC, as I upgraded to a new Coffee Lake NUC which I use as my primary PVE host (again headless) and whose GPU I also pass through for hardware transcoding with Intel QuickSync. I mention the Coffee Lake NUC because I installed a working passthrough VM on that host from scratch, whereas on the NUC7PJYH (Gemini Lake) I simply Clonezilla'd a working barebones Ubuntu 18.10 Desktop into a PVE VM and then updated PVE to pass through the GPU. The steps I took on the Coffee Lake NUC (it would likely be the same on the Gemini Lake NUC, but I'd have to test) were as follows:

(A) Install PVE 5.4-3 and update via apt-get
(B) Update /etc/default/grub and /etc/modules on host as per https://pve.proxmox.com/wiki/Pci_passthrough
(C) Reboot host
(D) Create a new simple VM (UEFI) with no Passthrough (I think I used either “Default” VM Display or SPICE with 128MB) and install Ubuntu Server 18.04.2 LTS (using HWE Kernel in installer options) - The Server version of Ubuntu seems to make things simpler as it doesn’t load a GUI by default - When a GUI was involved, I was getting weird console graphics / hanging / black screens etc...
(E) Once Ubuntu Server 18.04.2 LTS (HWE Kernel) is installed and working ok, stop the VM.
(F) Change VM Display (Primary) to “Default 128MB” and add the GPU PCI Passthrough device - Mine was 00:02.0 ...
(G) Boot the VM and from the VM console command prompt, the IGP is passed through:

$ ls -l /dev/dri
total 0
drwxr-xr-x 2 root root 80 Apr 22 06:39 by-path
crw-rw----+ 1 root video 226, 0 Apr 22 14:44 card0
crw-rw----+ 1 root video 226, 128 Apr 22 06:39 renderD128
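
One practical note on that listing: for a non-root transcoding user to open those nodes, it needs the group that owns them (here `video`). A small check against the listing above (the sample line is hardcoded from this post; on the guest, parse the real `ls -l /dev/dri` output instead):

```shell
# Sample line from the /dev/dri listing above
line="crw-rw----+ 1 root video 226, 0 Apr 22 14:44 card0"

# Field 4 of ls -l output is the owning group
group=$(echo "$line" | awk '{print $4}')
echo "card0 group: $group"
# If the transcoding user is not in that group, add it with:
#   usermod -aG video <user>   (then log in again)
```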

The PVE config file for this VM is as follows. CPU is currently set as “host” (this may or may not be important if the VM needs specific host CPU/IGP instructions?) ...

agent: 1
bios: ovmf
boot: dcn
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: <EFI DISK DETAILS>
hostpci0: 00:02.0
ide2: none,media=cdrom
memory: 8192
name: <VM HOSTNAME>
net0: virtio=<MAC DETAILS>
numa: 0
ostype: l26
scsi0: <BOOT DISK DETAILS>
scsihw: virtio-scsi-pci
smbios1: uuid=<UUID>
sockets: 1
vga: memory=128
vmgenid: <ID>


I’m able to use QuickSync hardware transcoding in this setup. As mentioned earlier, the host/VM is headless, so I'm not sure whether this will help those of you experiencing HDMI display issues...
 

bolzerrr

New Member
Apr 11, 2019
Hi,

I have an ASRock J4105M and am trying to use the GPU as a secondary device, but I still always run into:
DMAR: DRHD: handling fault status reg 3
DMAR: [DMA Write] Request device [00:02.0] fault addr 0 [fault reason 02] Present bit in context entry is clear

Then the PC freezes and reboots. Any idea what to do?

conf:
agent: 1
bios: ovmf
boot: cd
bootdisk: scsi0
cores: 4
cpu: host
cpulimit: 4
cpuunits: 1224
efidisk0: XXX
hostpci0: 00:02.0
machine: q35
memory: 4096
name: XXX
net0: XXX
numa: 1
ostype: win10
scsi0: XXX
scsi2: none,media=cdrom
scsihw: virtio-scsi-single
smbios1: XXX
sockets: 1
vcpus: 4
vga: memory=128
vmgenid: XXX
 

n1nj4888

Member
Jan 13, 2019
I see you set the machine type to Q35. I just use the default for my setup, so perhaps you could try changing that?
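
Removing the explicit machine line makes PVE fall back to the default (i440fx) machine type. A sketch on a sample config fragment (on the host the real file would be /etc/pve/qemu-server/<vmid>.conf; back it up before editing, and stop the VM first):

```shell
# Sample VM config fragment; drop the "machine: q35" line to revert to the default type
conf="machine: q35
memory: 4096
ostype: win10"

echo "$conf" | sed '/^machine: q35$/d'
```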
 
