[SOLVED] PVE 7.0 LXC Intel Quick Sync passthrough not working anymore

Typhoe

Member
Jan 30, 2019
TLDR: lxc.cgroup.devices.allow MUST be changed to lxc.cgroup2.devices.allow
https://forum.proxmox.com/threads/p...strough-not-working-anymore.92025/post-400916

Hi,

With PVE 6.4, adding these lines to /etc/pve/lxc/<container id>.conf was enough to get GPU access in the container.
Code:
lxc.cgroup.devices.allow: c 226:0 rwm
lxc.cgroup.devices.allow: c 226:128 rwm
lxc.cgroup.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file

But with PVE 7.0, it doesn't work anymore...

Am I missing something?

Host (Intel NUC Gen 7, Broadwell):
Code:
root@nuc4:~# vainfo
error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: va_openDriver() returns -1
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_8
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Broadwell - 2.4.1
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            :    VAEntrypointVLD
      VAProfileMPEG2Simple            :    VAEntrypointEncSlice
      VAProfileMPEG2Main              :    VAEntrypointVLD
      VAProfileMPEG2Main              :    VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline:    VAEntrypointVLD
      VAProfileH264ConstrainedBaseline:    VAEntrypointEncSlice
      VAProfileH264Main               :    VAEntrypointVLD
      VAProfileH264Main               :    VAEntrypointEncSlice
      VAProfileH264High               :    VAEntrypointVLD
      VAProfileH264High               :    VAEntrypointEncSlice
      VAProfileH264MultiviewHigh      :    VAEntrypointVLD
      VAProfileH264StereoHigh         :    VAEntrypointVLD
      VAProfileVC1Simple              :    VAEntrypointVLD
      VAProfileVC1Main                :    VAEntrypointVLD
      VAProfileVC1Advanced            :    VAEntrypointVLD
      VAProfileJPEGBaseline           :    VAEntrypointVLD
      VAProfileVP8Version0_3          :    VAEntrypointVLD
root@nuc4:~# ls -lh /dev/dri/ /dev/fb0
crw-rw---- 1 root video 29, 0 Jul  6 17:41 /dev/fb0

/dev/dri/:
total 0
drwxr-xr-x 2 root root         80 Jul  6 17:41 by-path
crw-rw---- 1 root video  226,   0 Jul  6 17:41 card0
crw-rw---- 1 root render 226, 128 Jul  6 17:41 renderD128
root@nuc4:~#

Guest (privileged LXC container with NFS, Debian Buster upgraded to Bullseye):
Code:
root@p1:~# vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
error: failed to initialize display
root@p1:~# ls -lh /dev/dri /dev/fb0
crw-rw---- 1 root video 29, 0 Jul  6 15:41 /dev/fb0

/dev/dri:
total 0
drwxr-xr-x 2 root root         80 Jul  6 15:41 by-path
crw-rw---- 1 root video  226,   0 Jul  6 15:41 card0
crw-rw---- 1 root netdev 226, 128 Jul  6 15:41 renderD128
root@p1:~#

I noticed that renderD128 is owned by group netdev on the guest; changing it to the render or video group doesn't solve anything.
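(For what it's worth, the group name shown inside the guest is just how the guest's own /etc/group resolves the numeric GID that travels with the bind-mounted device node, so "netdev" vs "render" is purely a naming difference. A quick sketch to check this, using GID 0 as a stand-in that exists everywhere:)

```shell
# The bind-mounted node keeps its numeric GID from the host; ls resolves that
# number through the container's local /etc/group, so a different group *name*
# in the guest is not by itself a permission change.
if [ -e /dev/dri/renderD128 ]; then
  stat -c 'GID %g resolves to "%G" on this system' /dev/dri/renderD128
fi
# How this system names a given GID (GID 0 as an always-present example):
getent group 0
```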

Thank you!
 
lxc.cgroup.devices.allow: c 226:0 rwm
lxc.cgroup.devices.allow: c 226:128 rwm
lxc.cgroup.devices.allow: c 29:0 rwm

Proxmox VE 7.0 defaults to the pure cgroupv2 environment, as v1 will be slowly sunset in systemd and other tooling.

And with that LXC needs a slightly different syntax, so try using lxc.cgroup2.devices.allow
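(For reference, one way to check which cgroup layout a host is actually running — a small sketch assuming the standard /sys/fs/cgroup mount point:)

```shell
# Map the filesystem type at the cgroup mount point to the matching LXC key.
cgroup_key() {
  case "$1" in
    cgroup2fs) echo "lxc.cgroup2.devices.allow" ;; # pure cgroup v2 (PVE 7 default)
    tmpfs)     echo "lxc.cgroup.devices.allow"  ;; # legacy v1 / hybrid layout
    *)         echo "unknown layout: $1"        ;;
  esac
}

# On a live host:
cgroup_key "$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null)"
```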
 
Proxmox VE 7.0 defaults to the pure cgroupv2 environment, as v1 will be slowly sunset in systemd and other tooling.

And with that LXC needs a slightly different syntax, so try using lxc.cgroup2.devices.allow
Oh!!!!

Indeed, it solved my problem!

Thank you very much!

Changing lines to :
Code:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file

And it works:
Code:
root@p1:~# vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.10 (libva 2.10.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.1.1 ()
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            :    VAEntrypointVLD
      VAProfileMPEG2Main              :    VAEntrypointVLD
      VAProfileH264Main               :    VAEntrypointVLD
      VAProfileH264High               :    VAEntrypointVLD
      VAProfileJPEGBaseline           :    VAEntrypointVLD
      VAProfileH264ConstrainedBaseline:    VAEntrypointVLD
      VAProfileVP8Version0_3          :    VAEntrypointVLD
root@p1:~#
 
@ke5han did you get this working in the end? I updated the cgroup info but still no luck getting any kind of hardware acceleration to work in my LXC...
 
Same here. I have two machines with Intel chips (Intel Quick Sync). I can see the video card in the CT, but vainfo returns nothing usable. All drivers are installed and the kernel modules are loaded in the CT. Unfortunately, I don't get proper output from vainfo even on the host, despite the drivers being installed and loaded there as well.
 
@ke5han did you get this working in the end? I updated the cgroup info but still no luck getting any kind of hardware acceleration to work in my LXC...
Yes, it works now. I assume you've done all the steps before updating the cgroup settings? There are some guides I found on Reddit and followed.
 
Yep - followed the guides to the letter, I have renderD128 and card0 in /dev/dri, I even get the correct output from vainfo but still nothing. I must be missing something somewhere…
 
Next experience: if I connect a monitor directly to the Proxmox server, vainfo works, but still nothing in the CT. If I install an Ubuntu 20.04 Server with an Emby media server on it, Intel VAAPI works there as well.

Yes, I think we're missing something...
 
That only works if your LXC container is privileged.
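(A quick way to check from the PVE host whether a given container is privileged — a sketch, where the config path follows the usual PVE layout and 102 is a placeholder container ID:)

```shell
# PVE marks unprivileged containers with an "unprivileged: 1" line in the
# container config; privileged containers simply lack that line.
is_unprivileged() {
  grep -q '^unprivileged: 1' "$1" 2>/dev/null
}

if is_unprivileged /etc/pve/lxc/102.conf; then
  echo "unprivileged: plain /dev/dri bind mounts will show up as nobody:nogroup"
else
  echo "privileged (or config not found)"
fi
```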
 
It does. Strange. For testing I installed another SSD with Ubuntu, and there the graphics work fine. No matter; I've given up on this construction site, it only makes my head hurt. When I need graphics, I install natively. That always works.
 
Which guide(s)?
That was too long ago, and since I've discarded the setup, I can't say exactly which ones anymore.
Basically, it looks like this:

Code:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file

After that, you can see the graphics card in the CT:
Code:
root@media ~ # lshw -c video
  *-display                 
       description: VGA compatible controller
       product: G200eR2
       vendor: Matrox Electronics Systems Ltd.
       physical id: 0
       bus info: pci@0000:09:00.0
       version: 01
       width: 32 bits
       clock: 33MHz
       capabilities: pm vga_controller bus_master cap_list rom
       configuration: driver=mgag200 latency=0 maxlatency=32 mingnt=16
       resources: irq:17 memory:90000000-90ffffff memory:91800000-91803fff memory:91000000-917fffff memory:c0000-dffff

Otherwise, there is also the possibility of assigning an Nvidia card to the LXC:
https://theorangeone.net/posts/lxc-nvidia-gpu-passthrough/
 
This isn't working for me either.


Code:
lshw -c video
  *-display
       description: VGA compatible controller
       product: HD Graphics 630
       vendor: Intel Corporation
       physical id: 2
       bus info: pci@0000:00:02.0
       version: 04
       width: 64 bits
       clock: 33MHz
       capabilities: pciexpress msi pm vga_controller bus_master cap_list
       configuration: driver=i915 latency=0
       resources: iomemory:1f0-1ef iomemory:1f0-1ef irq:132 memory:1ff0000000-1ff0ffffff memory:1fe0000000-1fefffffff ioport:3000(size=64) memory:c0000-dffff
This is my conf file:
Code:
onboot: 1
ostype: ubuntu
rootfs: local-zfs:subvol-102-disk-0,size=40G
startup: order=2,up=300
swap: 512
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
What do I need to do?
 
If there's any more information I can provide, I'd be glad to. I'm also happy to document the whole thing for others, as I've managed to cobble together how to do this from across the whole internet. There doesn't seem to be a definitive guide.
 
I am struggling with the same problem. I am wondering what you see in the container if you do an ls -lisa /dev/dri.

I am wondering since I see this:
Code:
3 0 drwxr-xr-x 2 root root 60 Apr 19 18:12 .
1 0 drwxr-xr-x 8 root root 520 Apr 19 18:12 ..
680 0 crw-rw---- 1 nobody nogroup 226, 128 Apr 10 09:35 renderD128

...and I doubt that vainfo will work if the device is owned by nobody/nogroup...

I found a guide here: https://yoursunny.com/t/2022/lxc-vaapi/ where somebody does some ID mappings to resolve that, but all the other guides seem to work without that mapping. I am struggling with the mapping since I already have some mappings in place, and of course if there is a way to do it without the mapping, that would be fine as well.

Please help me understand whether I should be concerned about the owner of /dev/dri/renderD128 being nobody/nogroup (meaning vainfo will not work like that at all), or whether this is not necessarily the problem when vainfo does not work. In the end I'd like to get Frigate running in Docker inside an LXC container with ffmpeg hardware acceleration, but I assume that won't work if even vainfo doesn't work, right?
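(For reference, the mapping trick in guides like that one boils down to idmap entries along these lines. This is only a sketch: GID 104 is a placeholder for the host's render group, and the host additionally needs a matching "root:104:1" line in /etc/subgid for the mapping to be allowed.)

```
# /etc/pve/lxc/<id>.conf -- sketch for an unprivileged CT; 104 is a placeholder
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 104
lxc.idmap: g 104 104 1
lxc.idmap: g 105 100105 65431
```

With that, the one host GID is passed through unshifted, so renderD128 stops showing up as nobody/nogroup inside the container.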
 
Make sure you add --device /dev/dri:/dev/dri to the Docker container and redeploy, as well as doing the above.
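(In a docker-compose file, that flag corresponds to a devices entry; this is only a sketch, with the service and image names as illustrative placeholders:)

```yaml
# Hypothetical compose fragment: expose the host's /dev/dri to the service.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    devices:
      - /dev/dri:/dev/dri
```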
 
@mrcolumbo I've got the same issue with a privileged container.

My hardware acceleration in Jellyfin works properly, except that when I reboot the LXC the group owner changes to postfix on /dev/dri/renderD128 only. To fix it I just run a script at boot. Anyone have a better/cleaner solution?


Code:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.apparmor.profile: unconfined


Proxmox 7.2-11
lxc-pve 5.0.2
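(One possibly cleaner option — a sketch, not tested here, with the group name and device as assumptions for this setup — is a host-side udev rule, so the node gets the desired group every time it is created instead of being fixed up after boot:)

```
# /etc/udev/rules.d/99-renderd-group.rules (on the host) -- sketch
KERNEL=="renderD128", SUBSYSTEM=="drm", GROUP="render", MODE="0660"
# then reload: udevadm control --reload-rules && udevadm trigger
```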
 
