[Guide] Jellyfin + remote network shares + HW transcoding with Intel's QSV + unprivileged LXC

You're right. I made a mistake - I tested it out last weekend and had the same result as you with NFS. I am almost certain I lost data this way but it seems less plausible now. Perhaps it was for local bind mounts? I should really test again with local bind mounts to confirm. Sorry for the misinformation.
 
No problem, and thank you for the clarification. I can't say what went wrong on your end. I was just surprised that the mount point wasn't deleted during my tests. Thank you again for the clarification.
 
I am hoping someone can help me figure out why transcoding isn’t working for me.

I followed this comment

I have just set up Jellyfin on my N100 machine with Proxmox 8.2 and found that setting up GPU passthrough is much simpler.

There is no need to edit the config or user groups. All that's needed is to go to the container's Resources tab in the web UI and add a Device Passthrough.
There, enter /dev/dri/renderD128 and fill in the GID of the render group. Jellyfin adds its user account to the render group automatically during installation, so it should work out of the box.



I had an issue with the passthrough not working initially, but it was due to a GPU driver mismatch. I updated both the host & container with the latest driver from https://github.com/intel/compute-runtime/releases and then it worked perfectly.

So basically all that's needed:
1) Set up passthrough as mentioned above
2) Mount the NFS share (using a mount point so the container can stay unprivileged)
3) Install Jellyfin
4) Potentially update the GPU drivers, if running
Code:
/usr/lib/jellyfin-ffmpeg/ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device qsv=hw
gives errors in red about init failing.

And the output of the command in step 4 shows no errors. I made sure that the GID in the CT is the same as the render group on my host using
getent group render, which showed me 103.


I am using a 10th-generation Intel processor and Proxmox 8.3.2.

The initial instructions don't say anything about installing OpenCL on the host, so I haven't done so. I'm also not sure which one would need to be installed, because I don't see any mention of the 10th generation in the GitHub repo.

I do have intel-media-va-driver:amd64 23.1.1+dfsg1-1 on the host.

The ffmpeg logs don’t really tell me much

Code:
ffmpeg version 7.0.2-Jellyfin Copyright (c) 2000-2024 the FFmpeg developers
  built with gcc 12 (Debian 12.2.0-14)
  configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-ptx-compression --disable-static --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto=auto --enable-gpl --enable-version3 --enable-shared --enable-gmp --enable-gnutls --enable-chromaprint --enable-opencl --enable-libdrm --enable-libxml2 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libharfbuzz --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libopenmpt --enable-libdav1d --enable-libsvtav1 --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-vaapi --enable-amf --enable-libvpl --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
  libavutil      59.  8.100 / 59.  8.100
  libavcodec     61.  3.100 / 61.  3.100
  libavformat    61.  1.100 / 61.  1.100
  libavdevice    61.  1.100 / 61.  1.100
  libavfilter    10.  1.100 / 10.  1.100
  libswscale      8.  1.100 /  8.  1.100
  libswresample   5.  1.100 /  5.  1.100
  libpostproc    58.  1.100 / 58.  1.100
Device creation failed: -542398533.
Failed to set value 'vaapi=va:,vendor_id=0x8086,driver=iHD' for option 'init_hw_device': Generic error in an external library
Error parsing global options: Generic error in an external library

These are my transcoding settings. I tried different variants without any success.


[Screenshot: Jellyfin transcoding settings]

And finally the output of /usr/lib/jellyfin-ffmpeg/ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device qsv=hw

[Screenshot: output of the ffmpeg command above]
 
You have to put "/dev/dri/renderD128" under QSV Device in the Jellyfin settings. In your screenshot it's empty.
 
Thanks for the suggestion. I did get a different error

Code:
ffmpeg version 7.0.2-Jellyfin Copyright (c) 2000-2024 the FFmpeg developers
  built with gcc 12 (Debian 12.2.0-14)
  configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-ptx-compression --disable-static --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto=auto --enable-gpl --enable-version3 --enable-shared --enable-gmp --enable-gnutls --enable-chromaprint --enable-opencl --enable-libdrm --enable-libxml2 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libharfbuzz --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libopenmpt --enable-libdav1d --enable-libsvtav1 --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-vaapi --enable-amf --enable-libvpl --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
  libavutil      59.  8.100 / 59.  8.100
  libavcodec     61.  3.100 / 61.  3.100
  libavformat    61.  1.100 / 61.  1.100
  libavdevice    61.  1.100 / 61.  1.100
  libavfilter    10.  1.100 / 10.  1.100
  libswscale      8.  1.100 /  8.  1.100
  libswresample   5.  1.100 /  5.  1.100
  libpostproc    58.  1.100 / 58.  1.100
[AVHWDeviceContext @ 0x5da171b893c0] No VA display found for device /dev/dri/renderD128.
Device creation failed: -22.
Failed to set value 'vaapi=va:/dev/dri/renderD128,driver=iHD' for option 'init_hw_device': Invalid argument
Error parsing global options: Invalid argument

After more debugging, I realized that the render device in my container was owned by the kvm group.

I'm not sure if this is related to passing through the device via the GUI, but ultimately I added the jellyfin user to the kvm group, and that allowed transcoding to work, so my issue is resolved.
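For anyone hitting the same thing, the change boils down to something like this inside the CT (assuming the group really is named kvm there):

Code:
usermod -aG kvm jellyfin        # add the jellyfin service user to the kvm group
systemctl restart jellyfin      # restart so the new group membership takes effect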
 
Took me two days to get it working but it was well worth the effort. Thought I'd share as I see this question asked often.




Set up the LXC

• Use Debian 12, update and upgrade, install curl:

Bash:
apt update -y && apt upgrade -y
apt install curl








Install Jellyfin

• Use the official install script:

Bash:
curl https://repo.jellyfin.org/install-debuntu.sh | bash








Set up the shares

• In the node's shell, create the mount points:

Bash:
mkdir /mnt/movies
mkdir /mnt/shows

NFS only: mounting is as easy as adding this to /etc/fstab:
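For example, entries along these lines (the server address and export paths are placeholders; use your NAS's actual export details):

Bash:
# /etc/fstab on the PVE host -- placeholder NAS address and export paths
192.168.1.50:/export/movies  /mnt/movies  nfs  defaults  0  0
192.168.1.50:/export/shows   /mnt/shows   nfs  defaults  0  0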



(NFS permissions will be managed by the source, i.e. your NAS. SMB is a little trickier with permissions.)

SMB only: enter the LXC console and type the following:

Bash:
id jellyfin

• Note down the UID and GID, then add 100000 to them. The way a PVE host links to an unprivileged LXC is by adding 100000 to the ID. This is how we'll pass ownership permissions to the LXC.
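As a worked example (the IDs below are made up; use whatever id jellyfin actually reports in your container):

Bash:
# hypothetical output of `id jellyfin`:
#   uid=107(jellyfin) gid=107(jellyfin) groups=107(jellyfin)
# 107 + 100000 = 100107, so 100107 is the UID/GID to use on the host side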

• Now return to the host's shell

SMB only: create a credentials file:

Bash:
nano /.smbcred

SMB only: add your SMB credentials to this file in the following format:
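The file uses the standard mount.cifs credentials format, something like this (add a domain= line if your share needs one):

Code:
username=your_smb_username
password=your_smb_password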



SMB only - Method 1: add the following to your host's /etc/fstab using the Jellyfin UID/GID + 100000 from earlier:
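Roughly like the following, using the hypothetical 100107 from the id example above and a placeholder NAS address (swap in your own values):

Bash:
//192.168.1.50/movies  /mnt/movies  cifs  credentials=/.smbcred,uid=100107,gid=100107,iocharset=utf8  0  0
//192.168.1.50/shows   /mnt/shows   cifs  credentials=/.smbcred,uid=100107,gid=100107,iocharset=utf8  0  0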




SMB only - Method 2: This will just give full permissions to every user & group. Probably the less headachey way of doing things and will allow multiple services to have access instead of just Jellyfin. It'll also let you use the same mount points across LXCs. Just be careful who gets access:
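For example (same placeholder address; file_mode/dir_mode 0777 is what opens it up to everyone, so only do this on a network you trust):

Bash:
//192.168.1.50/movies  /mnt/movies  cifs  credentials=/.smbcred,file_mode=0777,dir_mode=0777,iocharset=utf8  0  0
//192.168.1.50/shows   /mnt/shows   cifs  credentials=/.smbcred,file_mode=0777,dir_mode=0777,iocharset=utf8  0  0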



(Both methods work, but I'll be honest I'm no expert with permissions stuff. If anyone knows a better way, feel free to let me know.)


• Reload the system and mount:

Bash:
systemctl daemon-reload
mount -a

• Edit the LXC conf file (/etc/pve/lxc/xxx.conf) to set bind mounts. mp= should point to wherever you want to mount it on your LXC:
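For example (the mpX indices and container paths are just illustrative):

Bash:
mp0: /mnt/movies,mp=/mnt/movies
mp1: /mnt/shows,mp=/mnt/shows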



• Start/restart your LXC. You should now see the mount points and have the correct permissions.
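If you prefer the CLI, a quick way to restart and check (101 is a placeholder VMID; substitute your container's):

Bash:
pct stop 101 && pct start 101
pct exec 101 -- ls -la /mnt    # mounts should show the expected ownership, not nobody:nogroup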








Set up the Intel iGPU passthrough using QSV

• Open the LXC's console and find the render GID, then add 100000 to it:

Bash:
cat /etc/group

• Now open the node's shell and find the device info. (This is typically renderD128 with ID 226, 128):

Bash:
ls -l /dev/dri

• Add the following to the LXC conf file (/etc/pve/lxc/xxx.conf):
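Something along these lines, assuming the device is 226:128 (renderD128) and the host render GID is 104, i.e. 100104 after the +100000 offset; here the ownership line is written as a pre-start chown hook (adjust the numbers to your own):

Bash:
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.hook.pre-start: sh -c "chown 100000:100104 /dev/dri/renderD128"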



• In extreme layman's terms: the first line is for render passthrough, the second mounts the hardware device, and the third passes ownership permissions. Make sure to use the render GID + 100000 from earlier. Leave the UID as 100000 (0 + 100000 = root on the LXC).

• In your LXC's console, add the Jellyfin account to the render group:

Bash:
usermod -aG render jellyfin

• Install the Intel OpenCL runtime:

Bash:
apt install -y intel-opencl-icd

• Reboot the LXC

• Done! Now you have both a remote network share and an iGPU passed through with QSV to an unprivileged container. Don't forget to enable and configure the transcoding settings in Jellyfin!








Testing & Troubleshooting

• To check supported codecs, type into your LXC's console:

Bash:
/usr/lib/jellyfin-ffmpeg/vainfo --display drm --device /dev/dri/renderD128

• To check the status of the OpenCL runtime:

Bash:
/usr/lib/jellyfin-ffmpeg/ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device opencl@va

• To view if transcoding is working, open your host's shell and install the Intel GPU tools:

Bash:
apt install -y intel-gpu-tools

• Now play something that requires transcoding and type the following:

Bash:
intel_gpu_top

• If everything is working, you should see the render and video bars being heavily used. Also check the summary page of your Jellyfin LXC and you should see very little CPU usage. This indicates HW transcoding with the iGPU is working.

I followed the NFS setup and get:

Code:
drwxrwxrwx   1 nobody nogroup   448 Mar  4 15:39 movies

Is one of the sections marked as "SMB only" maybe wrongly tagged?
 
I just rebuilt my Jellyfin LXC using the Alpine base image on a Minisforum UM790.

Processor: AMD Ryzen 9 7940HS w/ Radeon 780M Graphics

I am posting my notes here in case they are of any use. I've moved all my Debian LXCs to Alpine; they are a fraction of the size. Using a pared-back base image has taught me more about Linux in general too: there's less noise, and what's there is what's there.

Assuming a working Jellyfin instance in an LXC:

1. Proxmox render and video groups

Check the render and video group numbers on your Proxmox host:

Bash:
root@proxmox ~$ cat /etc/group | grep -E 'render|video'
video:x:44:
render:x:104:

Here render is 104 and video is 44. Add these lines to /etc/subgid using your actual group numbers:

Bash:
root@proxmox ~$ cat /etc/subgid
root:100000:65536
# add the next two lines
root:44:1
root:104:1

2. Alpine LXC render and video groups

In your Alpine LXC create the group render and add users root and jellyfin to both video and render:

Bash:
root@jellyfin ~$   addgroup -g 104 render
root@jellyfin ~$   addgroup root render
root@jellyfin ~$   addgroup root video
root@jellyfin ~$   addgroup jellyfin render
root@jellyfin ~$   addgroup jellyfin video
root@jellyfin ~$   cat /etc/group | grep -E 'render|video' # sample output below
video:x:27:root
render:x:104:root

3. Apply group mappings in LXC config file

Now you need to add the mappings to the LXC config file in /etc/pve/lxc/*.conf:

Bash:
root@proxmox ~$ cat /etc/pve/lxc/532.conf
arch: amd64
cores: 8
# ...
# ...
# ...
# igpu passthrough
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.idmap: u 0 100000 65536     # map UID root to root
lxc.idmap: g 0 100000 27        # map groups up to 27 as normal
lxc.idmap: g 27 44 1            # map group 27 guest to 44 host
lxc.idmap: g 104 104 1          # map 104 to 104
lxc.idmap: g 105 100105 65431   # map from 105 until 65535 (65536 - 105 = 65431)

I got a little stuck on these; I was missing the 0-27 line, which caused Jellyfin to not have the right folder permissions.
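A quick sanity check once the container is back up (run inside the LXC; 104 is just my render GID, use yours):

Bash:
root@jellyfin ~$   ls -ln /dev/dri/renderD128
# the group column should show 104 (render), not 65534 (nobody/nogroup)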

4. Drivers in LXC

I didn't need to go off grid here, everything is available in the official apk repo:

Bash:
root@jellyfin ~$   apk add --no-cache mesa-va-gallium mesa-dri-gallium libva-utils

To check the drivers are working, run this command:

Bash:
root@jellyfin ~$   /usr/lib/jellyfin-ffmpeg/ffmpeg -v debug -init_hw_device drm=dr:/dev/dri/renderD128
Splitting the commandline.
Reading option '-v' ... matched as option 'v' (set logging level) with argument 'debug'.
Reading option '-init_hw_device' ... matched as option 'init_hw_device' (initialise hardware device) with argument 'drm=dr:/dev/dri/renderD128'.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option v (set logging level) with argument debug.
Applying option init_hw_device (initialise hardware device) with argument drm=dr:/dev/dri/renderD128.
[AVHWDeviceContext @ 0x7cf2510e6ec0] Opened DRM device /dev/dri/renderD128: driver amdgpu version 3.57.0.
Successfully parsed a group of options.

Check your supported codecs with vainfo:

Bash:
    vainfo: Supported profile and entrypoints
          VAProfileH264ConstrainedBaseline: VAEntrypointVLD
          VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
          VAProfileH264Main               : VAEntrypointVLD
          VAProfileH264Main               : VAEntrypointEncSlice
          VAProfileH264High               : VAEntrypointVLD
          VAProfileH264High               : VAEntrypointEncSlice
          VAProfileHEVCMain               : VAEntrypointVLD
          VAProfileHEVCMain               : VAEntrypointEncSlice
          VAProfileHEVCMain10             : VAEntrypointVLD
          VAProfileHEVCMain10             : VAEntrypointEncSlice
          VAProfileJPEGBaseline           : VAEntrypointVLD
          VAProfileVP9Profile0            : VAEntrypointVLD
          VAProfileVP9Profile2            : VAEntrypointVLD
          VAProfileAV1Profile0            : VAEntrypointVLD
          VAProfileAV1Profile0            : VAEntrypointEncSlice
          VAProfileNone                   : VAEntrypointVideoProc

5. Set up hardware encoding in the Jellyfin dashboard

Use device /dev/dri/renderD128, or whatever yours is.

Double-check the codecs against the ones shown in vainfo.
 
This is great, thank you for detailing all of this. It's always helpful to have another guide for these kinds of things.

I was just thinking about taking my Intel Arc A310 and using it with LXCs instead of a VM. Can I allow multiple LXCs to access it, or would it only work for one single LXC?

I am kind of hoping to use it with a few of them, and was thinking about making an Ubuntu LXC with a GUI for hosting media conversion software, etc. I didn't really want to mix everything into a single VM/LXC.