LXC - Pass-through Intel Integrated Graphics

mjb2000

New Member
Jul 16, 2015
I have an Intel J1900 Baytrail motherboard which I use at home as a:
  • pfSense router
  • NAS (OpenMediaVault)
  • Personal cloud server (OwnCloud)
  • Legal Torrent downloader (rTorrent + ruTorrent)
  • Home PBX (FreePBX)
  • Media catalogue (EMBY)
  • Media player (Kodi)

The J1900 board does not support VT-d, so to get HDMI out I have installed Kodi on the main Proxmox installation (not in a KVM guest or container). Up until now I have also used KVM machines for all of the above.

I am keen to move to LXC containers as this seems to be a much more efficient way of doing things (with the exception of the FreeBSD-based pfSense, which I understand can't run in a container?).

My main question is about moving Kodi to a container. Is this possible to do? Can the container output HDMI video and audio from the physical hardware? How can this be achieved? I have seen a lot of snippets and posts from various sources that suggest it's possible, but I think my base knowledge of Linux is not strong enough to understand all the steps required.

Secondly, I guess another valid question would be: is it worth doing this? Are there security/reliability improvements in moving Kodi to a container? Would a crash of Kodi be confined just to the container, or is it just as likely to bring down the entire system?

Any tips would be gratefully received! :)

Matt
 
Hi Matt
You can't do PCI passthrough with an LXC container.
The only downside I see to your setup is that upgrading your main system would be a bit more complicated because of the intertwining of the Kodi and Proxmox packages on the LXC host.
 
Sorry, I probably shouldn't have used the word "pass-through". I know you can't pass through physical hardware in LXC (and because I don't have VT-d I can't pass through in KVM either).

However, my understanding is that you can expose any device in the host kernel to the container, and the container can use it directly. Is this not possible? Or is it just not possible for video devices?

I agree, maintenance becomes tricky by installing Kodi directly on to Proxmox - that's why I'm keen to put it in a container if possible.

M
 
Yes, I have seen this before, which is why I think it should be possible to achieve - but it's too complicated for me to understand and I'm not entirely sure how it translates to the various commands within Proxmox or the CLI.

HELP! :)
 
Yes, it's possible to "passthrough" with LXC.

It's just a simple "mount -o bind /dev/.. /lxcroot/dev".

But this needs to be done through LXC hooks (we already do it to pass through disk devices).

I'll try to see if I can add support for any devices in coming weeks.
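Done by hand, the bind-mount idea above looks roughly like this (a sketch only: the container ID and rootfs path are illustrative, and on Proxmox this is normally handled by the container config/hooks rather than manually):

```shell
# Illustrative sketch: bind the host's DRM device nodes into a container's
# rootfs. CTID and the rootfs path are assumptions for this example.
CTID=100
ROOTFS=/var/lib/lxc/${CTID}/rootfs
mkdir -p ${ROOTFS}/dev/dri
mount -o bind /dev/dri ${ROOTFS}/dev/dri
```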
 
So is there a Proxmox way to do this, to ensure the passthrough is set up when I bring up my LXC container? I am using the 4.0 beta.
 
I'll try to see if I can add support for any devices in coming weeks.

Hi Spirit - have you had any luck adding support for this to Proxmox? Can you give an example of what config changes would need to be made to 'passthrough' a graphics card so I can get video output from one of my containers?

M
 
I just played with x2go on a debian jessie LXC container - maybe also interesting for you?
 
Thanks Tom - since I am looking to output full-HD video from Kodi right from the box, I guess x2go isn't going to work for me... Or are you using the x2go server in the container and the x2go client on the host? Would that work particularly well for video, or would it be an unnecessary amount of overhead?
 
Full-HD does not look perfect with x2go.
 
Did anybody manage to do this? Running Kodi in an LXC container using the host's video/audio capabilities?
 
Hi,

I'm trying to do the same!

I'm going to use display export to run Kodi in an LXC container, with PulseAudio for sound.

I'll tell you if it's working!

Regards
 
hi,

did you have any success with this? I am trying to pass the iGPU to a Plex server container to utilize Intel Quick Sync hardware encoding... but am desperately failing (for now)...
 
Has anyone found a solution?
I have an i7 4790K and I'd like to use Quick Sync, but unfortunately no success.
 
I set up an ffmpeg transcoding container today.

In order for Intel GPU passthrough to work with LXC, you have to do the following:
  1. Create a privileged container (uncheck "unprivileged" during container creation)
  2. Edit the container config /etc/pve/lxc/<id>.conf and add the following:
    Code:
      lxc.cgroup.devices.allow: c 226:0 rwm
      lxc.cgroup.devices.allow: c 226:128 rwm
      lxc.cgroup.devices.allow: c 29:0 rwm
      lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
      lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
  3. Install the Intel VA drivers inside the container:
    Code:
      apt install vainfo i965-va-driver ffmpeg

This way ffmpeg transcoding works, specifying /dev/dri/renderD128 as the VAAPI device. However, I had problems with ffmpeg in the Debian Buster LXC template; using Stretch (Debian 9) instead works. There must be some breaking change in Buster, as running ffmpeg directly on the hypervisor also did not work.
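For reference, a VAAPI transcode using the bound render node looks something like this (the file names are placeholders, not from the post above):

```shell
# Hardware-accelerated H.264 encode via the render node bound into the
# container. input.mp4/output.mp4 are placeholder file names.
ffmpeg -vaapi_device /dev/dri/renderD128 \
       -i input.mp4 \
       -vf 'format=nv12,hwupload' \
       -c:v h264_vaapi \
       output.mp4
```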

I haven't tested it, but this setup should work with Plex, Emby etc. as well.
 
Having the same issue - while the above additions worked, I wound up with some ownership errors:
Bash:
root@jellyfin:~# ls /dev/dri -al
total 0
drwxr-xr-x 3 nobody nogroup      100 May 13 00:03 .
drwxr-xr-x 7 root   root         520 May 14 01:57 ..
drwxr-xr-x 2 nobody nogroup       80 May 13 00:03 by-path
crw-rw---- 1 nobody nogroup 226,   0 May 13 00:03 card0
crw-rw---- 1 nobody nogroup 226, 128 May 13 00:03 renderD128

Any ideas on how to fix the nobody/nogroup ownership?

lxc conf:
Bash:
arch: amd64
cores: 4
features: nesting=1
hostname: jellyfin
memory: 8192
mp0: /storage/Media,mp=/mnt/smb_media_mnt
net0: name=eth0,bridge=vmbr1,gw=x.x.x.x,hwaddr=x:x:x:x:x:x,ip=x.x.x.x/2>
onboot: 1
ostype: debian
parent: file-working
rootfs: local-lvm:vm-100-disk-0,size=6G
swap: 8192
unprivileged: 1
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64530
lxc.idmap: g 1001 101001 64530
lxc.cgroup.devices.allow: c 226:0 rwm
lxc.cgroup.devices.allow: c 226:128 rwm
lxc.cgroup.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
 

I just dealt with the same thing. You'll want to find out which groups own those devices on the host and map their IDs to the same groups in the container. For example, on my host the devices are owned by video and render, and their IDs are 44 and 104:
Code:
root@pve:~# ls /dev/dri -la
total 0
drwxr-xr-x  3 root root        100 Nov 18 11:31 .
drwxr-xr-x 23 root root       6460 Nov 20 18:36 ..
drwxr-xr-x  2 root root         80 Nov 20 16:49 by-path
crw-rw----  1 root video  226,   0 Nov 20 16:49 card0
crw-rw----  1 root render 226, 128 Nov 18 11:31 renderD128
root@pve:~# cat /etc/group | grep -e "render" -e "video"
video:x:44:
render:x:104:

On the LXC the IDs for those groups are 44 and 107:
Code:
root@PlexCT:~# cat /etc/group | grep -e "render" -e "video"
video:x:44:plex
render:x:107:plex

So in my lxc.conf I map container GID 44 to host GID 44, and container GID 107 to host GID 104 (I also map 1001-1010 for something unrelated).
Code:
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 62
lxc.idmap: g 107 104 1
lxc.idmap: g 108 100107 893
lxc.idmap: u 0 100000 1000
lxc.idmap: u 1001 1001 10
lxc.idmap: g 1001 1001 10
lxc.idmap: u 1011 101011 64525
lxc.idmap: g 1011 101011 64525
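One easy way to trip up here is leaving a gap or an overlap in the idmap ranges, which stops the container from starting. A quick sanity check (a sketch, not a Proxmox tool) that the group mappings above cover container IDs 0-65535 contiguously:

```python
def parse_idmap(lines):
    """Parse 'lxc.idmap: <u|g> <container_id> <host_id> <count>' entries."""
    maps = {"u": [], "g": []}
    for line in lines:
        _, spec = line.split(":", 1)
        kind, ct, host, count = spec.split()
        maps[kind].append((int(ct), int(host), int(count)))
    return maps

def check_contiguous(entries, total=65536):
    """True if container IDs 0..total-1 are covered exactly once, no gaps."""
    pos = 0
    for ct, _host, count in sorted(entries):
        if ct != pos:          # gap or overlap before this entry
            return False
        pos = ct + count
    return pos == total

# The group mappings from the reply above:
gid_lines = [
    "lxc.idmap: g 0 100000 44",
    "lxc.idmap: g 44 44 1",
    "lxc.idmap: g 45 100045 62",
    "lxc.idmap: g 107 104 1",
    "lxc.idmap: g 108 100107 893",
    "lxc.idmap: g 1001 1001 10",
    "lxc.idmap: g 1011 101011 64525",
]
print(check_contiguous(parse_idmap(gid_lines)["g"]))  # True
```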
 
