Proxmox Intel Iris Xe Graphics Passthrough (Core i7-1165G7)

Hi guys,

I'm fairly new to Proxmox and virtualisation in general. I have set up some VMs, one of them being my media center, which works absolutely fine with Direct Play.
For transcoding, it definitely doesn't :)

I have a NUC11 with a Core i7-1165G7 and Iris Xe graphics.

I've been trying to figure out how to pass the Intel Iris Xe graphics through to my VM, but I've struggled to find a way to do it.

I've followed this step-by-step guide: https://blog.ktz.me/passthrough-intel-igpu-with-gvt-g-to-a-vm-and-use-it-with-plex/

But I got stuck at the last part, where "ls /sys/bus/pci/devices/0000\:00\:02.0/mdev_supported_types/" returns an empty result. So I'm not sure how to fix that, or what to do next. I'm not even sure that guide is relevant for my setup.

Would you guys be able to advise on what to do to get there?

Happy to provide any information required.

Thanks for the read/help.
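For reference, the GVT-g route in that guide boils down to roughly the following on the Proxmox host (just a sketch of the guide's steps, not something I can confirm works on Tiger Lake):

Bash:
# 1) enable IOMMU and GVT-g on the kernel command line (/etc/default/grub):
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"
update-grub

# 2) load the mediated-device modules at boot
printf '%s\n' kvmgt vfio-iommu-type1 vfio-mdev >> /etc/modules

# 3) reboot, then the mediated device types should appear here
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/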
 
Yes indeed. That is actually what's described in the tutorial as well, but no luck.
I have read somewhere that Intel doesn't support GVT-g on 11th-gen processors.

Also, I've checked lspci, and I get the following for device 00:02.0:


00:02.0 VGA compatible controller: Intel Corporation Device 9a49 (rev 01) (prog-if 00 [VGA controller])
Subsystem: Intel Corporation Device 3004
Flags: fast devsel, IRQ 16
Memory at 603c000000 (64-bit, non-prefetchable) [size=16M]

Memory at 4000000000 (64-bit, prefetchable) [size=256M]
I/O ports at 3000
[virtual] Expansion ROM at 000c0000 [disabled] [size=128K]

Capabilities: [40] Vendor Specific Information: Len=0c <?>
Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
Capabilities: [ac] MSI: Enable- Count=1/1 Maskable+ 64bit-
Capabilities: [d0] Power Management version 2
Capabilities: [100] Process Address Space ID (PASID)
Capabilities: [200] Address Translation Service (ATS)
Capabilities: [300] Page Request Interface (PRI)
Capabilities: [320] Single Root I/O Virtualization (SR-IOV)
Kernel driver in use: vfio-pci
Kernel modules: i915
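One thing I notice in that output: the device is currently claimed by vfio-pci rather than i915 (probably left over from an earlier passthrough attempt). As far as I understand, mdev_supported_types only shows up when i915 owns the device, so something like this sketch should rebind it (untested on my machine):

Bash:
# unbind 00:02.0 from vfio-pci and hand it back to i915 (sketch, untested)
echo 0000:00:02.0 > /sys/bus/pci/drivers/vfio-pci/unbind
echo 0000:00:02.0 > /sys/bus/pci/drivers/i915/bind
# then re-check for mediated device types
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/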
 

I'm trying to do the same with my i5 NUC11. Apparently Iris Xe does not support GVT-g, so you can go ahead and abandon the mediated device route. I've managed to get "passthrough" working only as a non-primary GPU with rombar=0. However, in my experience, even though the device shows up in my guest (Windows 10) machine, Plex does not see it as available for HW acceleration.
 
Could you let me know how you've done that? I'd like to give it a try with Jellyfin.
 
I ended up abandoning the effort to get the Iris Xe passed through to a Windows VM. I only ever managed to get it to show up as a device, but no software could actually use it.

However, I appear to have QuickSync working in a Plex Docker container hosted in an Ubuntu LXC container. In this setup there's not really any virtualization happening, but it looks like I can at least utilize HW for transcoding.
 
General overview of my setup:

CT Template: ubuntu-21.04-standard_21.04-1_amd64.tar.gz
I tried Debian first, but there was something wonky with it (I never got a login prompt when opening a shell).

There are also some relevant options in the container's .conf to give it access to the devices in the PVE host's /dev/dri:

Code:
# allow the container access to the host DRM devices (major 226, minors 0 and 128)
lxc.cgroup.devices.allow: c 226:0 rwm
lxc.cgroup.devices.allow: c 226:128 rwm
# bind-mount the device nodes into the container
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
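(If your host is on Proxmox VE 7 with the default cgroup v2, I believe the device-allow keys are lxc.cgroup2.devices.allow instead, i.e. something like:)

Code:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file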


I installed the docker-ce package in the LXC container and am using the linuxserver/plex Docker Hub image, sharing the device /dev/dri:/dev/dri.
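Roughly how I start the container, if it helps (the paths are placeholders, adjust for your setup):

Bash:
# sketch: Plex container from the linuxserver image, with the DRM devices passed through
docker run -d \
  --name plex \
  --network host \
  --device /dev/dri:/dev/dri \
  -e PUID=1000 -e PGID=1000 \
  -e VERSION=docker \
  -v /path/to/plex/config:/config \
  -v /path/to/media:/data \
  linuxserver/plex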

I had to
Bash:
chmod -R 777 /dev/dri
on the host before I saw HW decode/encode in the Plex dashboard.
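That chmod won't survive a reboot, by the way; something like this udev rule on the host should make it stick (untested sketch):

Bash:
# relax permissions on the DRM nodes at boot (sketch, untested)
cat > /etc/udev/rules.d/99-dri-permissions.rules <<'EOF'
SUBSYSTEM=="drm", KERNEL=="card0", MODE="0666"
SUBSYSTEM=="drm", KERNEL=="renderD128", MODE="0666"
EOF
udevadm control --reload-rules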
 
Hmmm, I've tried that but keep getting errors.

The logs say (I'm using Jellyfin, btw):


[AVHWDeviceContext @ 0x5616204c1800] libva: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so has no function __vaDriverInit_1_0
[AVHWDeviceContext @ 0x5616204c1800] libva: /usr/lib/jellyfin-ffmpeg/lib/dri/i965_drv_video.so init failed
[AVHWDeviceContext @ 0x5616204c1800] Failed to initialise VAAPI connection: -1 (unknown libva error).
Device creation failed: -5.
Failed to set value '/dev/dri/renderD128' for option 'vaapi_device': Input/output error
 

Maybe a permission issue? Make sure that your Jellyfin container (and the parent container, if applicable) has full access to /dev/dri/card0 and /dev/dri/renderD128.

There's an intel-media-va-driver package, which I assume would be included with Jellyfin if it's needed, but you can try installing it.
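Something like this inside the Jellyfin container (or the LXC it runs in) should tell you whether VAAPI can actually initialise; package names assume a Debian/Ubuntu base:

Bash:
# check that the render node is visible and that VAAPI initialises
ls -la /dev/dri
apt install -y vainfo intel-media-va-driver
vainfo --display drm --device /dev/dri/renderD128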

 
I believe they do, but could you help me figure out how I can check that?
 

If you are set up like me, your containers are nested as follows: (L1) PVE host [ (L2) LXC container running Docker Engine [ (L3) Jellyfin Docker container ]]

- At L1, run
Code:
chmod -R 777 /dev/dri
to open up access.
- Configure L2 (through the LXC .conf) to mount /dev/dri/card0 and /dev/dri/renderD128 from the host into it. NOTE: the "226:0" and "226:128" values in the lines I provided previously may be different for you. They are the major and minor device numbers, which are shown when you list the directory (ls -la /dev/dri); see the example listing at the end of this post.
- Configure the L3 Docker container to allow access to L2's /dev/dri (which is really L1's /dev/dri). I use Portainer to manage containers, and the option to pass through devices is under Advanced options > Runtime & resources > Devices: host /dev/dri => container /dev/dri.
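For reference, the listing on my host looks something like this (your dates and group names may differ, but 226 is the DRM major number):

Code:
$ ls -la /dev/dri
crw-rw---- 1 root video  226,   0 Jul  3 10:00 card0
crw-rw---- 1 root render 226, 128 Jul  3 10:00 renderD128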
 
Yes, I have the same setup. I hadn't run it on L1 before, but doing so now didn't change anything.

On L2, I conveniently have the same numbers. I have added this to the .conf file:

lxc.cgroup.devices.allow = c 226:0 rwm
lxc.cgroup.devices.allow = c 226:128 rwm

On L3, I did that through docker-compose, using the linuxserver.io image:

Code:
    group_add:
      - 103        # extra group (e.g. render/video GID) so the container user can access /dev/dri
    devices:
      - /dev/dri:/dev/dri   # pass the host DRM devices through to Jellyfin
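(For what it's worth, the 103 in group_add is meant to be the GID that owns /dev/dri/renderD128 inside the container; something like this should confirm what yours is:)

Bash:
# show the numeric GID and group name that own the render node
stat -c '%g %G' /dev/dri/renderD128
getent group render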
 

Hmm, not sure what else I'm missing. I may try to duplicate your setup on my host if I get a chance
 
I'd be interested to see if you could replicate it on bare metal, as there have been a handful of threads on the Plex forums from people still trying to get it working reliably. Getting HW transcoding to work is possible, but after 5-15 minutes it causes the whole server to go to a blank screen and lock up (with or without a desktop; I've tried both). It's definitely kernel-related, and we've had the best results with 21.04.
 
Thanks for the help. I'm really running out of ideas here (probably a bit out of my depth, but that's how we learn :) )
 
Sorry I've been quiet on the topic. I seem to have lost track of what worked, but it seems the best I've been able to do now is get Plex using HW while running directly in an LXC container, with HDR tone mapping turned OFF. I think I also had to install intel-media-va-driver from the unstable Ubuntu sid repository and run the 5.11 PVE kernel just to get vainfo to properly report capabilities. I'm pretty sure I managed to crash the whole machine, or put the GPU in an otherwise broken state, by turning on HDR tone mapping.
 
Hello there

I've been using Proxmox VE for a year now and am very happy with it. My old system was a Skylake-based Celeron machine with working iGPU passthrough. My new system is Tiger Lake-based (i7-1165G7), and I want working iGPU passthrough there as well. To make it work on the Celeron system, I followed this guide:

pci gpu passthrough

I set this up on the Tiger Lake machine as well, but I'm not able to get any output on the screen connected to the HDMI port.

Here is the lspci output of my host:
(screenshot: pve-host-lspci.png)

lspci of my VM:
(screenshot: vm-lspci.png)

So the Iris Xe graphics GPU is recognized on both the host and the VM.

Here is the HW setup of the VM:
(screenshot: vm-hw.png)

I also blacklisted the i915 driver.
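For reference, my host-side config roughly follows the guide and looks something like this (sketch; 8086:9a49 is the Iris Xe device ID from the lspci output earlier in this thread):

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf
options vfio-pci ids=8086:9a49

# /etc/modprobe.d/blacklist.conf
blacklist i915

# then: update-grub && update-initramfs -u -k all, and reboot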

Here is the setup with PCIe GPU passthrough:

pcie gpu passthrough

lspci of the VM:
(screenshot: vm-pcie.png)

The Iris Xe graphics GPU is recognized as well.

HW setup of the VM:
(screenshot: vm-pcie-hw.png)

The LXC passthrough route is not an option for me.

I'm using Proxmox VE 7.0.
Do you have any ideas what the problem could be?

Regards
Rudi
 
Hi Rudi,

I have the same problem with my NUC 11 i3-1115G4. Do you have a fix? Does it run now?

Regards
Marcel
 
