Proxmox 5.2 Gemini Lake and IGD (graphics) passthrough for Ubuntu 18

Discussion in 'Proxmox VE: Installation and configuration' started by iceplane, Sep 15, 2018.

  1. iceplane

    iceplane New Member

    Joined:
    Sep 15, 2018
    Messages:
    1
    Likes Received:
    1
    Hello,

    I’m trying to set up a fresh install of Proxmox 5.2 on Gemini Lake, and I would like to configure a VM with IGD (graphics) passthrough for Ubuntu 18.

    Computer based on ASRock J4105-ITX asrock.com/mb/Intel/J4105-ITX/

    A standard install is working properly, and now I would like to use the HDMI output for a VM with Ubuntu 18.

    I have read all of this information:
    pve.proxmox.com/wiki/Pci_passthrough
    forum.proxmox.com/threads/guide-intel-intergrated-graphic-passthrough.30451/
    redhat.com/archives/vfio-users/2017-April/msg00032.html
    forum.proxmox.com/threads/proxmox-5-0-kaby-lake-and-igd-graphics-passthrough-for-windows-10.36165/

    My setup is like this:
    - Fresh install Proxmox 5.2
    - Grub
    - vim /etc/default/grub
    - change GRUB_CMDLINE_LINUX_DEFAULT line to GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb=off,vesafb=off"
    - save and quit
    - update-grub

    - Blacklist module
    - vim /etc/modprobe.d/pve-blacklist.conf
    - add these lines
    blacklist snd_hda_intel
    blacklist snd_hda_codec_hdmi
    blacklist i915
    - save and quit

    - VFIO
    - vim /etc/modules
    - add these lines
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
    - save and quit

    - Vga adapter
    - lspci -n -s 00:02
    - the lspci command displays 00:02.0 0300: 8086:3185 (rev 03)
    - vim /etc/modprobe.d/vfio.conf
    - add this line
    options vfio-pci ids=8086:3185
    - save and quit

    - Initramfs
    - update-initramfs -u

    - Create a VM (id = 100) with an Ubuntu 18 ISO as primary boot

    - Change the setup for the VM
    - vim /etc/pve/qemu-server/100.conf
    - add these lines
    machine: pc-i440fx-2.2
    args: -device vfio-pci,host=00:02.0,addr=0x02
    vga: none
    - save and quit

    - Reboot the server

    - Start VM 100
    - Video output is initialized (the screen clears) just after VM 100 is started, but the screen remains black. The start task log is:
    no efidisk configured! Using temporary efivars disk.
    kvm: -device vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1,x-igd-opregion=on: IGD device 0000:00:02.0 has no ROM, legacy mode disabled
    TASK OK


    I tried installing Ubuntu before changing the config, but it didn't help.

    What should I do now?
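    For reference, the vfio.conf line in the setup above is derived directly from the lspci output: field 3 of `lspci -n` is the vendor:device ID pair. A minimal shell sketch of that derivation, using the sample line from this post rather than a live query:

    ```shell
    # Derive the "options vfio-pci ids=..." line from lspci -n output.
    # Sample line taken from this post; on a real host you would run:
    #   lspci -n -s 00:02
    lspci_line='00:02.0 0300: 8086:3185 (rev 03)'

    # Field 3 of the numeric lspci output is the vendor:device ID pair.
    dev_id=$(printf '%s\n' "$lspci_line" | awk '{print $3}')
    echo "options vfio-pci ids=${dev_id}"
    # prints: options vfio-pci ids=8086:3185
    ```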
     
  2. psycmos

    psycmos New Member

    Joined:
    Sep 26, 2018
    Messages:
    4
    Likes Received:
    0
    Hi @iceplane
    I have an ASRock J4105M, and I want to try installing Windows 10 in a VM and using it as an HDMI desktop.

    With your guide, I get stuck on these lines after reboot:
    ...
    07:39:58 kernel: MODSIGN: Couldn't get UEFI db list
    07:39:58 kernel: Couldn't get size: 0x800000000000000e
    ....
    If I remove "options vfio-pci ids=8086:3185", the Proxmox system boots OK.

    I followed your guide, but after a reboot the server no longer starts.
    Is your guide complete?

    Do you have graphics passthrough working?
    Best Regards
     
    #2 psycmos, Sep 26, 2018
    Last edited: Sep 27, 2018
  3. collider

    collider New Member

    Joined:
    Sep 29, 2018
    Messages:
    2
    Likes Received:
    0
    I'm on an ASRock J5005-ITX and trying to pass through the IGD to Windows 10. After a similar setup and running the VM, the screen connected to the IGD went black but never showed anything. I didn't see any error log.

    Has anyone managed to pass through the IGP on a Gemini Lake chip?
     
  4. derMischka

    derMischka New Member

    Joined:
    Sep 29, 2018
    Messages:
    2
    Likes Received:
    0
    Hi,

    I also have an ASRock J4105-ITX, am trying to pass through the iGPU, and get the same black screen. Besides the error
    ...
    kernel: MODSIGN: Couldn't get UEFI db list
    kernel: Couldn't get size: 0x800000000000000e
    ....

    I can see the following lines in dmesg:
    ...
    [ 1.869667] [drm] Memory usable by graphics device = 4096M
    [ 1.869670] checking generic (90000000 160000) vs hw (800000000 10000000)
    [ 1.869671] [drm] Replacing VGA console driver
    [ 1.869777] [drm:i915_gem_init_stolen [i915]] *ERROR* conflict detected with stolen region: [0x0000000060000000 - 0x0000000080000000]
    [ 1.869830] [drm] RC6 disabled by BIOS
    [ 1.884806] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
    [ 1.884807] [drm] Driver supports precise vblank timestamp query.
    [ 1.890293] i915 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
    [ 1.893689] i915 0000:01:00.0: Direct firmware load for i915/glk_dmc_ver1_04.bin failed with error -2
    [ 1.893693] i915 0000:01:00.0: Failed to load DMC firmware i915/glk_dmc_ver1_04.bin. Disabling runtime power management.
    [ 1.893694] i915 0000:01:00.0: DMC firmware homepage: https://01.org/linuxgraphics/downloads/firmware
    [ 1.898260] [drm] RC6 disabled, disabling runtime PM support
    [ 1.898650] [drm] Initialized i915 1.6.0 20171023 for 0000:01:00.0 on minor 0
    [ 1.898962] [drm] Cannot find any crtc or sizes
    ...
    [ 3.836366] [drm] RC6 off
    [ 7.880998] [drm] GPU HANG: ecode 9:0:0x4dd15de1, reason: No progress on rcs0, action: reset
    [ 7.881020] i915 0000:01:00.0: Resetting rcs0 after gpu hang
    [ 8.816180] random: fast init done
    [ 9.096373] [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
    [ 9.096449] i915 0000:01:00.0: Resetting chip after gpu hang
    [ 9.096546] random: systemd-udevd: uninitialized urandom read (16 bytes read)
    [ 9.096559] random: systemd-udevd: uninitialized urandom read (16 bytes read)
    [ 9.096582] random: systemd-udevd: uninitialized urandom read (16 bytes read)
    ...
    [ 10.312122] [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
    [ 10.312153] [drm:i915_reset [i915]] *ERROR* Failed to reset chip: -5
    ...

    I read in other forums that a second GPU is needed for the Proxmox host. I hope that this is not true; I don't need a GPU for Proxmox - it has a really good web interface, and SSH takes care of the rest of the configuration.
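    One thing that stands out in the dmesg output above is the failed DMC firmware load. A hedged sketch of checking for it on the host (path taken from the dmesg message; on Debian-based systems the file usually ships in the firmware-misc-nonfree package, though the missing firmware alone would not explain the GPU hang):

    ```shell
    # Check whether the Gemini Lake DMC firmware mentioned in the dmesg
    # output above is present on the host. Without it, the i915 driver
    # disables runtime power management, exactly as logged.
    fw=/lib/firmware/i915/glk_dmc_ver1_04.bin
    if [ -e "$fw" ]; then
        echo "DMC firmware present: $fw"
    else
        echo "DMC firmware missing: $fw (try the firmware-misc-nonfree package)"
    fi
    ```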

    Michael
     
  6. collider

    collider New Member

    Joined:
    Sep 29, 2018
    Messages:
    2
    Likes Received:
    0
    I'm on an ASRock J5005-ITX and have the following setup:

    - Fresh install Proxmox 5.2
    - Grub
    - vim /etc/default/grub
    - change GRUB_CMDLINE_LINUX_DEFAULT line to GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb=off,vesafb=off"
    - save and quit
    - update-grub

    - Blacklist module
    - vim /etc/modprobe.d/pve-blacklist.conf
    - add these lines
    blacklist snd_hda_intel
    blacklist snd_hda_codec_hdmi
    blacklist i915
    - save and quit

    - VFIO
    - vim /etc/modules
    - add these lines
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
    - save and quit

    - Vga adapter
    - lspci -n -s 00:02
    - the lspci command displays 00:02.0 0300: 8086:3185 (rev 03)
    - vim /etc/modprobe.d/vfio.conf
    - add this line
    options vfio-pci ids=8086:3185
    - save and quit

    - Initramfs
    - update-initramfs -u

    - Create a VM (id = 100) with an Ubuntu 18 ISO as primary boot

    - Change the setup for the VM
    - vim /etc/pve/qemu-server/100.conf
    - add these lines
    machine: pc-i440fx-2.2
    args: -device vfio-pci,host=00:02.0,addr=0x02
    vga: none
    - save and quit

    - Reboot the server

    - Start VM 100

    dmesg shows the following errors:

    [ 482.554825] DMAR: DRHD: handling fault status reg 3
    [ 482.554836] DMAR: [DMA Write] Request device [00:02.0] fault addr 0 [fault reason 02] Present bit in context entry is clear
    [ 483.425246] vfio_ecap_init: 0000:00:02.0 hiding ecap 0x1b@0x100
    [ 483.426689] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xe486
    [ 484.337918] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xe486

    The screen shows nothing. Can anyone help?
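    The "Invalid PCI ROM header signature" lines above are telling: every PCI option ROM must begin with the magic bytes 0x55 0xAA (little-endian 0xaa55), and vfio is reading something else (0xe486) from the IGD, which matches the "has no ROM, legacy mode disabled" message in the OP's task log. A small sketch of the check vfio is performing, run here against a fabricated two-byte example file rather than a real ROM dump:

    ```shell
    # Illustrate the option-ROM signature check behind the
    # "expecting 0xaa55" error above. The file is a fabricated example.
    rom=$(mktemp)
    printf '\x55\xaa' > "$rom"   # valid signature: bytes 0x55 0xAA

    # Read the first two bytes as hex and compare against the magic value.
    sig=$(od -An -tx1 -N2 "$rom" | tr -d ' \n')
    if [ "$sig" = "55aa" ]; then
        echo "valid option ROM signature (0xaa55)"
    else
        echo "invalid signature: 0x$sig"
    fi
    rm -f "$rom"
    # prints: valid option ROM signature (0xaa55)
    ```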
     
  7. vobo70

    vobo70 Member

    Joined:
    Nov 15, 2017
    Messages:
    45
    Likes Received:
    1
    Hello,
    I'm planning to buy an ASRock J5005-ITX motherboard for a home server.
    Can anyone test whether a PCIe card (any - in my case it will be an Intel 2-port GbE) can be passed through to a VM?
    I would use pfSense as a router in a Proxmox VM.
    Thank you for any answer.
     
  8. psycmos

    psycmos New Member

    Joined:
    Sep 26, 2018
    Messages:
    4
    Likes Received:
    0
    I can't get any signal from the IGP in Proxmox or unRAID.
    If you plan to buy it for graphics passthrough too, I think it will not work.

    As for passing through a PCIe card, I think you can do that without any problem.
     
  9. n1nj4888

    n1nj4888 Member

    Joined:
    Jan 13, 2019
    Messages:
    44
    Likes Received:
    0
    Hi All,

    Has anyone successfully got this working with Proxmox 5.3? I'd like to try to pass through a Gemini Lake iGP (for hardware transcoding with Plex) to an Ubuntu VM, so I'm keen to hear whether anyone has got this working without upgrading kernels etc.

    Thanks!
     
  10. derMischka

    derMischka New Member

    Joined:
    Sep 29, 2018
    Messages:
    2
    Likes Received:
    0
    I also failed with Proxmox 5.3.

    Hoping for a solution too...
     
  11. Alexander90

    Alexander90 New Member

    Joined:
    Jun 5, 2018
    Messages:
    3
    Likes Received:
    0
    Same here. Need a solution.
     
  12. comport

    comport New Member

    Joined:
    Apr 10, 2019
    Messages:
    2
    Likes Received:
    0
    So are there any new hints for this problem?

    I'm also on a J4105, trying to pass through the UHD Graphics 600. The solution is probably applicable to all Gemini Lake IGDs, such as the ones in Pentium Silver and Celeron parts.

    Currently, my IGD shows up as a VGA PCI device in the guest system and initializes the screen upon boot, but there is no signal:

    Failed to initialize GPU, declaring it wedged
     
  13. psycmos

    psycmos New Member

    Joined:
    Sep 26, 2018
    Messages:
    4
    Likes Received:
    0
    I bought a GT 710...
     
  14. n1nj4888

    n1nj4888 Member

    Joined:
    Jan 13, 2019
    Messages:
    44
    Likes Received:
    0
    Do you need it to be the primary GPU? If not, does it work with “Default” as primary GPU and the IGP as secondary?
     
  15. comport

    comport New Member

    Joined:
    Apr 10, 2019
    Messages:
    2
    Likes Received:
    0
    No, the IGP is not detected as a second screen, although it shows up as a VGA device in lspci (same as the OP).

    I intend to output hardware accelerated video via HDMI. What kind of setup worked for you?
     
    #15 comport, Apr 10, 2019
    Last edited: Apr 12, 2019
  16. RedBass389

    RedBass389 New Member

    Joined:
    Apr 15, 2019
    Messages:
    5
    Likes Received:
    0
    I have the ASRock J4105M and am having the same problems. As a dedicated GPU, the screen is black in Windows 10 and Ubuntu 18 LTS. If I run it as a second GPU, it says "ERROR conflict detected with stolen region", but it clearly appears in the Ubuntu VM. We need some workaround to get the Intel UHD 600 to pass through correctly.

    I am on the newest Proxmox 5.4-3 and have the latest BIOS from ASRock.
     
  17. n1nj4888

    n1nj4888 Member

    Joined:
    Jan 13, 2019
    Messages:
    44
    Likes Received:
    0
    Hi Guys,

    To add to this - I managed to get my NUC7PJYH (Gemini Lake J5005) “working” with GPU passthrough - at least for headless transcoding (Intel QuickSync usage via the IGP)...

    My setup is different from yours in that I'm using Ubuntu 18.10 Desktop (a P2V VM on PVE 5.4-3) and running the NUC headless - I pass through the GPU, and hardware transcoding (Intel QuickSync) works once it is passed through.

    This is my second NUC, as I upgraded to a new Coffee Lake NUC which I use as my primary PVE host (again headless), and I also pass through that GPU for hardware transcoding using Intel QuickSync. The reason I mention the Coffee Lake NUC is that I installed a working passthrough VM on that host from scratch, whereas on the NUC7PJYH (Gemini Lake) I simply clonezilla'd a working barebones Ubuntu 18.10 Desktop into a PVE VM and then updated PVE to pass through the GPU. The steps I took on the Coffee Lake NUC (it would likely be the same on the Gemini Lake NUC, but I'd have to test) were as follows:

    (A) Install PVE 5.4-3 and update via apt-get
    (B) Update /etc/default/grub and /etc/modules on host as per https://pve.proxmox.com/wiki/Pci_passthrough
    (C) Reboot host
    (D) Create a new simple VM (UEFI) with no Passthrough (I think I used either “Default” VM Display or SPICE with 128MB) and install Ubuntu Server 18.04.2 LTS (using HWE Kernel in installer options) - The Server version of Ubuntu seems to make things simpler as it doesn’t load a GUI by default - When a GUI was involved, I was getting weird console graphics / hanging / black screens etc...
    (E) Once Ubuntu Server 18.04.2 LTS (HWE Kernel) is installed and working ok, stop the VM.
    (F) Change VM Display (Primary) to “Default 128MB” and add the GPU PCI Passthrough device - Mine was 00:02.0 ...
    (G) Boot the VM and from the VM console command prompt, the IGP is passed through:

    $ ls -l /dev/dri
    total 0
    drwxr-xr-x 2 root root 80 Apr 22 06:39 by-path
    crw-rw----+ 1 root video 226, 0 Apr 22 14:44 card0
    crw-rw----+ 1 root video 226, 128 Apr 22 06:39 renderD128

    The PVE config file for this VM is as follows. CPU is currently set as “host” (this may or may not be important if the VM needs specific host CPU/IGP instructions?) ...

    agent: 1
    bios: ovmf
    boot: dcn
    bootdisk: scsi0
    cores: 4
    cpu: host
    efidisk0: <EFI DISK DETAILS>
    hostpci0: 00:02.0
    ide2: none,media=cdrom
    memory: 8192
    name: <VM HOSTNAME>
    net0: virtio=<MAC DETAILS>
    numa: 0
    ostype: l26
    scsi0: <BOOT DISK DETAILS>
    scsihw: virtio-scsi-pci
    smbios1: uuid=<UUID>
    sockets: 1
    vga: memory=128
    vmgenid: <ID>


    I’m able to use QuickSync hardware transcoding in this setup. As mentioned earlier, the host/VM is headless, so I'm not sure whether this will help / trigger anything for those folks experiencing HDMI display issues...
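    As a quick in-guest check of the setup above: QuickSync/VAAPI transcoding opens a render node under /dev/dri, so its presence is the thing to verify. A minimal sketch that parses an `ls /dev/dri`-style listing (sample entries are the ones from this post, not a live query):

    ```shell
    # Check an `ls /dev/dri`-style listing for a render node, which is
    # what VAAPI/QuickSync transcoding actually opens. The entries below
    # are sample data taken from this post.
    dri_listing='by-path
    card0
    renderD128'

    if printf '%s\n' "$dri_listing" | grep -q 'renderD'; then
        echo "render node present: VAAPI/QuickSync should be usable"
    else
        echo "no render node: passthrough likely failed"
    fi
    # prints: render node present: VAAPI/QuickSync should be usable
    ```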
     
  18. bolzerrr

    bolzerrr New Member

    Joined:
    Apr 11, 2019
    Messages:
    5
    Likes Received:
    0
    Hi,

    I have an ASRock J4105M and am trying to use the GPU as a secondary one, but I still always run into:
    DMAR: DRHD: handling fault status reg 3
    DMAR: [DMA Write] Request device [00:02.0] fault addr 0 [fault reason 02] Present bit in context entry is clear

    Then the PC freezes and reboots. Any idea what to do?

    conf:
     
  19. n1nj4888

    n1nj4888 Member

    Joined:
    Jan 13, 2019
    Messages:
    44
    Likes Received:
    0
    I see you set the machine type to Q35. I just use the default for my setup, so perhaps you could try changing that?
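    For reference, that difference lives in /etc/pve/qemu-server/<vmid>.conf. A hypothetical excerpt showing just the relevant line (everything else unchanged):

    ```
    # /etc/pve/qemu-server/<vmid>.conf (hypothetical excerpt)
    machine: q35    # remove this line (or change the Machine setting in the GUI)
                    # so the VM falls back to the default i440fx machine type
    ```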
     