Proxmox 5.2 Gemini Lake and IGD (graphics) passthrough for Ubuntu 18

Discussion in 'Proxmox VE: Installation and configuration' started by iceplane, Sep 15, 2018.

  1. iceplane

    iceplane New Member

    Joined:
    Sep 15, 2018
    Messages:
    1
    Likes Received:
    1
    Hello,

    I’m trying to set up a fresh install of Proxmox 5.2 on Gemini Lake, and I would like to configure a VM with IGD (graphics) passthrough for Ubuntu 18.

    The computer is based on the ASRock J4105-ITX: asrock.com/mb/Intel/J4105-ITX/

    A standard install is working properly, and now I would like to use the HDMI output for an Ubuntu 18 VM.

    I have read all of the following:
    pve.proxmox.com/wiki/Pci_passthrough
    forum.proxmox.com/threads/guide-intel-intergrated-graphic-passthrough.30451/
    redhat.com/archives/vfio-users/2017-April/msg00032.html
    forum.proxmox.com/threads/proxmox-5-0-kaby-lake-and-igd-graphics-passthrough-for-windows-10.36165/

    My setup is like this:
    - Fresh install Proxmox 5.2
    - Grub
    - vim /etc/default/grub
    - change GRUB_CMDLINE_LINUX_DEFAULT line to GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb=off,vesafb=off"
    - save and quit
    - update-grub
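
    The GRUB edit above can also be scripted. A minimal sketch, simulated on a sample file (the /tmp path is just an example; point it at /etc/default/grub when doing it for real, then run update-grub):

    ```shell
    # Simulate the edit on a sample file first; use /etc/default/grub for real.
    printf '%s\n' 'GRUB_DEFAULT=0' 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > /tmp/grub.new

    # Replace the whole GRUB_CMDLINE_LINUX_DEFAULT line with the passthrough settings.
    sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb=off,vesafb=off"/' /tmp/grub.new

    grep '^GRUB_CMDLINE_LINUX_DEFAULT' /tmp/grub.new
    ```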

    - Blacklist module
    - vim /etc/modprobe.d/pve-blacklist.conf
    - add these lines
    blacklist snd_hda_intel
    blacklist snd_hda_codec_hdmi
    blacklist i915
    - save and quit

    - VFIO
    - vim /etc/modules
    - add these lines
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
    - save and quit
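
    Appending the module list can be done idempotently, so re-running it never duplicates lines. A sketch against a copy (the /tmp path is an example; use /etc/modules on the real host):

    ```shell
    # Idempotently append the vfio modules to a (copy of) /etc/modules.
    MODULES=/tmp/modules.test
    : > "$MODULES"
    for m in vfio vfio_iommu_type1 vfio_pci vfio_virqfd; do
        grep -qx "$m" "$MODULES" || echo "$m" >> "$MODULES"   # skip lines already present
    done
    cat "$MODULES"
    ```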

    - Vga adapter
    - lspci -n -s 00:02
    - the lspci command displays: 00:02.0 0300: 8086:3185 (rev 03)
    - vim /etc/modprobe.d/vfio.conf
    - add this line
    options vfio-pci ids=8086:3185
    - save and quit

    - Initramfs
    - update-initramfs -u
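
    After update-initramfs -u and a reboot, two quick sanity checks are worth running before starting the VM (guarded so the snippet degrades gracefully on machines without the tools or the device):

    ```shell
    # 1) IOMMU groups appear under sysfs once intel_iommu=on has taken effect.
    groups=$(find /sys/kernel/iommu_groups/ -type l 2>/dev/null | wc -l)
    echo "IOMMU group entries: $groups"        # 0 means the IOMMU is not active

    # 2) The IGD should now be claimed by vfio-pci instead of i915.
    if command -v lspci >/dev/null 2>&1; then
        lspci -nnk -s 00:02.0                  # look for "Kernel driver in use: vfio-pci"
    else
        echo "lspci not available here"
    fi
    ```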

    - Create a VM (id = 100) with an Ubuntu 18 ISO as primary boot

    - Change the setup for the VM
    - vim /etc/pve/qemu-server/100.conf
    - add these lines
    machine: pc-i440fx-2.2
    args: -device vfio-pci,host=00:02.0,addr=0x02
    vga: none
    - save and quit

    - Reboot the server

    - Start VM 100
    - The video output is initialised (the screen clears) just after VM 100 is started, but the screen then remains black. The start task log is:
    no efidisk configured! Using temporary efivars disk.
    kvm: -device vfio-pci,host=00:02.0,addr=0x02,x-igd-gms=1,x-igd-opregion=on: IGD device 0000:00:02.0 has no ROM, legacy mode disabled
    TASK OK


    I tried installing Ubuntu before changing the config, but it didn't help.

    What should I do now?
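
    Note on the "IGD device 0000:00:02.0 has no ROM, legacy mode disabled" line: QEMU's legacy IGD passthrough mode expects a VBIOS option ROM to hand to the guest. A workaround reported for other Intel generations (it may well not apply to Gemini Lake, whose video BIOS is often not exposed as a PCI ROM at all) is to dump the ROM on the host and reference it via romfile=. A sketch, run as root on the host, with all paths as examples and guarded so it no-ops elsewhere:

    ```shell
    # Try to dump the IGD's option ROM via sysfs (paths are examples).
    DEV=/sys/bus/pci/devices/0000:00:02.0
    OUT=/tmp/igd-vbios.rom
    if [ -w "$DEV/rom" ]; then
        echo 1 > "$DEV/rom"                      # expose the option ROM for reading
        cat "$DEV/rom" > "$OUT" || echo "no readable ROM on this IGD"
        echo 0 > "$DEV/rom"                      # hide it again
    else
        echo "ROM not accessible here (need root on the passthrough host)"
    fi
    ```

    If a valid dump were obtained, it would be referenced in 100.conf as args: -device vfio-pci,host=00:02.0,addr=0x02,romfile=/tmp/igd-vbios.rom. The "has no ROM" message in the log above suggests this IGD simply does not expose one.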
     
    psycmos likes this.
  2. psycmos

    psycmos New Member

    Joined:
    Sep 26, 2018
    Messages:
    3
    Likes Received:
    0
    Hi @iceplane
    I have an ASRock J4105M, and I want to try installing Windows 10 in a VM and using it as an HDMI desktop.

    Following your guide, I get stuck on these lines after reboot:
    ...
    07:39:58 kernel: MODSIGN: Couldn't get UEFI db list
    07:39:58 kernel: Couldn't get size: 0x800000000000000e
    ....
    If I remove "options vfio-pci ids=8086:3185", the Proxmox system boots OK.

    I followed your guide, but after the reboot the server does not start anymore.
    Is your guide complete?

    Do you have graphics passthrough working?
    Best Regards
     
    #2 psycmos, Sep 26, 2018
    Last edited: Sep 27, 2018
  3. collider

    collider New Member

    Joined:
    Sep 29, 2018
    Messages:
    2
    Likes Received:
    0
    I'm on an ASRock J5005-ITX and trying to pass through the IGD to Windows 10. After a similar setup, when running the VM the screen connected to the IGD went black but never showed anything. I didn't see any error log.

    Has anyone managed to pass through the IGD on a Gemini Lake chip?
     
  4. derMischka

    derMischka New Member

    Joined:
    Sep 29, 2018
    Messages:
    2
    Likes Received:
    0
    Hi,

    I also have an ASRock J4105-ITX, I'm trying to pass through the iGPU, and I get the same black screen. Besides the error
    ...
    kernel: MODSIGN: Couldn't get UEFI db list
    kernel: Couldn't get size: 0x800000000000000e
    ....

    I can see the following lines in dmesg:
    ...
    [ 1.869667] [drm] Memory usable by graphics device = 4096M
    [ 1.869670] checking generic (90000000 160000) vs hw (800000000 10000000)
    [ 1.869671] [drm] Replacing VGA console driver
    [ 1.869777] [drm:i915_gem_init_stolen [i915]] *ERROR* conflict detected with stolen region: [0x0000000060000000 - 0x0000000080000000]
    [ 1.869830] [drm] RC6 disabled by BIOS
    [ 1.884806] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
    [ 1.884807] [drm] Driver supports precise vblank timestamp query.
    [ 1.890293] i915 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
    [ 1.893689] i915 0000:01:00.0: Direct firmware load for i915/glk_dmc_ver1_04.bin failed with error -2
    [ 1.893693] i915 0000:01:00.0: Failed to load DMC firmware i915/glk_dmc_ver1_04.bin. Disabling runtime power management.
    [ 1.893694] i915 0000:01:00.0: DMC firmware homepage: https://01.org/linuxgraphics/downloads/firmware
    [ 1.898260] [drm] RC6 disabled, disabling runtime PM support
    [ 1.898650] [drm] Initialized i915 1.6.0 20171023 for 0000:01:00.0 on minor 0
    [ 1.898962] [drm] Cannot find any crtc or sizes
    ...
    [ 3.836366] [drm] RC6 off
    [ 7.880998] [drm] GPU HANG: ecode 9:0:0x4dd15de1, reason: No progress on rcs0, action: reset
    [ 7.881020] i915 0000:01:00.0: Resetting rcs0 after gpu hang
    [ 8.816180] random: fast init done
    [ 9.096373] [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
    [ 9.096449] i915 0000:01:00.0: Resetting chip after gpu hang
    [ 9.096546] random: systemd-udevd: uninitialized urandom read (16 bytes read)
    [ 9.096559] random: systemd-udevd: uninitialized urandom read (16 bytes read)
    [ 9.096582] random: systemd-udevd: uninitialized urandom read (16 bytes read)
    ...
    [ 10.312122] [drm:gen8_reset_engines [i915]] *ERROR* rcs0: reset request timeout
    [ 10.312153] [drm:i915_reset [i915]] *ERROR* Failed to reset chip: -5
    ...

    I have read in other forums that a second GPU is needed for the Proxmox host. I hope that is not true. I don't need a GPU for Proxmox itself - it has a really good web interface, and SSH covers the rest of the configuration.

    Michael
     
  6. collider

    collider New Member

    Joined:
    Sep 29, 2018
    Messages:
    2
    Likes Received:
    0
    I'm on an ASRock J5005-ITX and have the following setup:

    - Fresh install Proxmox 5.2
    - Grub
    - vim /etc/default/grub
    - change GRUB_CMDLINE_LINUX_DEFAULT line to GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb=off,vesafb=off"
    - save and quit
    - update-grub

    - Blacklist module
    - vim /etc/modprobe.d/pve-blacklist.conf
    - add these lines
    blacklist snd_hda_intel
    blacklist snd_hda_codec_hdmi
    blacklist i915
    - save and quit

    - VFIO
    - vim /etc/modules
    - add these lines
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
    - save and quit

    - Vga adapter
    - lspci -n -s 00:02
    - the lspci command displays: 00:02.0 0300: 8086:3185 (rev 03)
    - vim /etc/modprobe.d/vfio.conf
    - add this line
    options vfio-pci ids=8086:3185
    - save and quit

    - Initramfs
    - update-initramfs -u

    - Create a VM (id = 100) with an Ubuntu 18 ISO as primary boot

    - Change the setup for the VM
    - vim /etc/pve/qemu-server/100.conf
    - add these lines
    machine: pc-i440fx-2.2
    args: -device vfio-pci,host=00:02.0,addr=0x02
    vga: none
    - save and quit

    - Reboot the server

    - Start VM 100

    dmesg shows the following errors:

    [ 482.554825] DMAR: DRHD: handling fault status reg 3
    [ 482.554836] DMAR: [DMA Write] Request device [00:02.0] fault addr 0 [fault reason 02] Present bit in context entry is clear
    [ 483.425246] vfio_ecap_init: 0000:00:02.0 hiding ecap 0x1b@0x100
    [ 483.426689] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xe486
    [ 484.337918] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xe486

    The screen shows nothing. Can anyone help?
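
    The "Invalid PCI ROM header signature: expecting 0xaa55" lines mean the bytes the kernel read from the device do not start with the PCI option ROM magic (0xAA55, stored little-endian on disk as 55 aa). If you have dumped a ROM to a file, you can reproduce the same check yourself; this sketch just builds a dummy two-byte header to show it (the /tmp path is an example):

    ```shell
    # Build a dummy ROM header carrying the PCI option ROM magic 0xAA55
    # (octal \125\252 == hex 55 aa) and verify it the way the kernel does.
    printf '\125\252' > /tmp/test.rom
    magic=$(od -An -tx1 -N2 /tmp/test.rom | tr -d ' ')
    if [ "$magic" = "55aa" ]; then
        echo "ROM header looks valid"
    else
        echo "bad ROM header: $magic (kernel expects 55 aa)"
    fi
    ```

    A dump that starts with anything else (0xe486 in the log above) is not a usable option ROM image.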
     
  7. vobo70

    vobo70 Member

    Joined:
    Nov 15, 2017
    Messages:
    41
    Likes Received:
    1
    Hello,
    I'm planning to buy an ASRock J5005-ITX motherboard for a home server.
    Can anyone test whether a PCIe card (any - in my case it will be an Intel 2-port GbE card) can be passed through to a VM?
    I would use pfSense as a router in a Proxmox VM.
    Thank you for any answers.
     
  8. psycmos

    psycmos New Member

    Joined:
    Sep 26, 2018
    Messages:
    3
    Likes Received:
    0
    I can't get any signal from the IGP in Proxmox or UNRAID.
    If you plan to buy it for graphics passthrough too, I think it will not work.

    As for passing a PCIe card through, I think you can do that without any problem.
     
  9. n1nj4888

    n1nj4888 New Member

    Joined:
    Jan 13, 2019
    Messages:
    2
    Likes Received:
    0
    Hi All,

    Has anyone successfully got this working with Proxmox 5.3? I'd like to try passing through a Gemini Lake iGPU (for hardware transcoding with Plex) to an Ubuntu VM, so I'm keen to hear whether anyone has got this working without upgrading kernels, etc.

    Thanks!
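
    For the transcoding use case, the first thing to verify inside the guest would be whether the passed-through IGD shows up as a DRM device at all; Plex's hardware transcoding uses the /dev/dri/renderD* node via VA-API. A guarded check:

    ```shell
    # Inside the Ubuntu guest: does the iGPU show up as a DRM device?
    if [ -d /dev/dri ]; then
        drm_nodes=$(ls /dev/dri)
        echo "DRM nodes: $drm_nodes"   # expect card0 and renderD128 when i915 bound the iGPU
    else
        drm_nodes=""
        echo "no /dev/dri: the guest kernel has no usable GPU driver bound"
    fi
    ```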
     
  10. derMischka

    derMischka New Member

    Joined:
    Sep 29, 2018
    Messages:
    2
    Likes Received:
    0
    I also failed with Proxmox 5.3.

    Hoping for a solution as well ...
     