[Guide] Jellyfin + remote network shares + HW transcoding with Intel's QSV + unprivileged LXC

dondizzurp
Feb 29, 2024
Took me two days to get it working but it was well worth the effort. Thought I'd share as I see this question asked often.

Set up the LXC

• Use Debian 12, update and upgrade, install curl:

Bash:
apt update -y && apt upgrade -y
apt install curl

Install Jellyfin

• Use the official install script:

Bash:
curl https://repo.jellyfin.org/install-debuntu.sh | bash
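
• Once the script finishes, you can confirm the service came up (the Debian package installs a systemd unit named jellyfin):

Bash:
systemctl status jellyfin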

Set up the shares

• In the node's shell, create a mount point:

Bash:
mkdir /mnt/movies
mkdir /mnt/shows

NFS only: mounting is as easy as adding this to /etc/fstab:

10.0.0.35:/media/videos/movies/ /mnt/movies nfs defaults 0 0
10.0.0.35:/media/videos/shows/ /mnt/shows nfs defaults 0 0

(NFS permissions are managed by the source, i.e. your NAS. SMB is a little trickier with permissions.)
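
• If you want to sanity-check what the NAS exports before editing fstab, showmount from the nfs-common package works on the host (using the example NAS IP from above):

Bash:
apt install -y nfs-common
showmount -e 10.0.0.35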

SMB only: enter the LXC console and type the following:

Bash:
id jellyfin

• Note down the UID and GID, then add 100000 to each. A PVE host maps an unprivileged LXC's IDs by adding 100000 to them, and that offset is how we'll pass ownership permissions into the LXC.
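
• For example, you might see output along these lines (your IDs will differ; 102 and 118 are shown here only to match the fstab entries below):

Code:
uid=102(jellyfin) gid=118(jellyfin) groups=118(jellyfin)

• In that case the host-side IDs become 100102 and 100118.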

• Now return to the host's shell.

SMB only: create a credentials file:

Bash:
nano /.smbcred

SMB only: add your SMB credentials to this file in the following format:

username=[your smb share username]
password=[the password for that smb user]
domain=[typically just WORKGROUP]
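
• Since this file holds a plaintext password, it's worth locking it down so only root can read it:

Bash:
chmod 600 /.smbcred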

SMB only - Method 1: Add the following to your host's /etc/fstab using the Jellyfin UID/GID + 100000 from earlier:

//10.0.0.35/media/videos/movies/ /mnt/movies cifs credentials=/.smbcred,x-systemd.automount,iocharset=utf8,rw,uid=100102,gid=100118,vers=3 0 0
//10.0.0.35/media/videos/shows/ /mnt/shows cifs credentials=/.smbcred,x-systemd.automount,iocharset=utf8,rw,uid=100102,gid=100118,vers=3 0 0


SMB only - Method 2: This simply gives full permissions to every user and group. It's probably the less headache-inducing approach, and it allows multiple services to access the shares instead of just Jellyfin. It also lets you reuse the same mount points across LXCs. Just be careful who gets access:

//10.0.0.35/media/videos/movies/ /mnt/movies cifs credentials=/.smbcred,x-systemd.automount,iocharset=utf8,rw,file_mode=0777,dir_mode=0777,vers=3 0 0
//10.0.0.35/media/videos/shows/ /mnt/shows cifs credentials=/.smbcred,x-systemd.automount,iocharset=utf8,rw,file_mode=0777,dir_mode=0777,vers=3 0 0

(Both methods work, but I'll be honest I'm no expert with permissions stuff. If anyone knows a better way, feel free to let me know.)


• Reload the systemd units and mount everything:

Bash:
systemctl daemon-reload
mount -a
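
• A quick way to confirm both shares actually mounted:

Bash:
df -h /mnt/movies /mnt/shows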

• Edit the LXC conf file (/etc/pve/lxc/xxx.conf) to set the bind mounts. mp= should point to wherever you want the share mounted inside your LXC:

mp0: /mnt/movies/,mp=/movies
mp1: /mnt/shows/,mp=/shows
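
• Alternatively, pct on the host sets the same bind mounts without hand-editing the conf file (assuming your container ID is 100; swap in yours):

Bash:
pct set 100 -mp0 /mnt/movies,mp=/movies
pct set 100 -mp1 /mnt/shows,mp=/shows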

• Start/restart your LXC. You should now see the mount points and have the correct permissions.

Set up the Intel iGPU passthrough using QSV

• Open the LXC's console and find the render GID, then add 100000 to it:

Bash:
cat /etc/group
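
• You can also grab just the one line with getent. For example, if it returns "render:x:154:", the GID is 154 and the host-side GID is 100154 (154 is shown here only to match the hook below):

Bash:
getent group render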

• Now open the node's shell and find the device info (typically renderD128 with major/minor numbers 226, 128):

Bash:
ls -l /dev/dri
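
• The line you care about looks roughly like this; 226 and 128 are the major and minor device numbers used in the cgroup rule below:

Code:
crw-rw---- 1 root render 226, 128 Jan  1 00:00 renderD128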

• Add the following to the LXC conf file (/etc/pve/lxc/xxx.conf):

lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.hook.pre-start: sh -c "chown 100000:100154 /dev/dri/renderD128"

• In extreme layman's terms: the first line allows the container to access the render device, the second bind-mounts the device node into the container, and the third passes ownership permissions. Make sure to use the render GID + 100000 from earlier. Leave the UID as 100000 (0 + 100000 = root in the LXC).

• In your LXC's console, add the Jellyfin account to the render group:

Bash:
usermod -aG render jellyfin

• Install the Intel OpenCL runtime:

Bash:
apt install -y intel-opencl-icd
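
• Optionally, clinfo (a separate package) gives an independent check that the runtime registered an Intel OpenCL platform:

Bash:
apt install -y clinfo
clinfo | grep -i 'platform name'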

• Reboot the LXC

• Done! You now have both a remote network share and an iGPU passed through with QSV to an unprivileged container. Don't forget to enable and configure the transcoding settings in Jellyfin!

Testing & Troubleshooting

• To check supported codecs, type into your LXC's console:

Bash:
/usr/lib/jellyfin-ffmpeg/vainfo --display drm --device /dev/dri/renderD128
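
• A healthy run ends with a table of supported profile/entrypoint pairs, something like this (exact entries depend on your CPU generation):

Code:
VAProfileH264Main               : VAEntrypointVLD
VAProfileH264Main               : VAEntrypointEncSliceLP
VAProfileHEVCMain               : VAEntrypointVLD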

• To check the status of the OpenCL runtime:

Bash:
/usr/lib/jellyfin-ffmpeg/ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device opencl@va

• To verify transcoding is working, open your host's shell and install the Intel GPU tools:

Bash:
apt install -y intel-gpu-tools

• Now play something that requires transcoding and type the following:

Bash:
intel_gpu_top

• If everything is working, you should see the Render and Video engine bars heavily used. Also check the summary page of your Jellyfin LXC; you should see very little CPU usage. This indicates HW transcoding on the iGPU is working.

Thanks, really appreciate it.
Do you have any information on running Jellyfin in Docker inside an LXC container? I did everything you described and it works, but not when Jellyfin is in a Docker container.
It says
Code:
[AVHWDeviceContext @ 0x64a234b9b980] Failed to get number of OpenCL platforms: -1001.
Device creation failed: -19.
Failed to set value 'opencl@va' for option 'init_hw_device': No such device
Error parsing global options: No such device
when I use this command to test rendering:
Code:
/usr/lib/jellyfin-ffmpeg/ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device opencl@va

It's probably something with wrong permissions, but I'm not sure if it's solvable.

EDIT: I added the env
Code:
DOCKER_MODS:linuxserver/mods:jellyfin-opencl-intel
and now it works, but Jellyfin shows a playback error.
I'm running out of ideas, because when I try to run
Code:
intel_gpu_top
it says
Code:
Failed to initialize PMU! (Operation not permitted)
in both places, in Docker and also in the LXC container.

No clue how to do it through Docker in an LXC.

Any reason you're using containers within containers? LXC is a sufficient container for Jellyfin on its own. What's the point of adding Docker complications on top of that?

Also, from what I've read about Proxmox, Docker works best when running in a VM rather than an LXC, as the LXC shares resources with the host (or something like that... I'll find the post later; they explained it properly).


BTW, you need to install the Intel GPU tools in the HOST'S SHELL, not your LXC. Then run the command 'intel_gpu_top' IN YOUR HOST'S SHELL.
 
It's a fair question why an LXC container.
It started with power efficiency and many experiments with low power consumption. Another reason was that you cannot use the iGPU in a VM. So that's it. Btw, I figured this out: the problem was the linuxserver Jellyfin image, where rendering doesn't work in an LXC container, but the official Jellyfin image works...
Pretty weird :oops:

The official image is a direct binary build of Jellyfin on a Debian base, whereas the linuxserver image is built on Ubuntu.

You can read more here if you're interested. It likely has something to do with it.
 
Just installed Jellyfin in an LXC, passing through the Vega APU of my Ryzen 5700G. The only difference was installing xserver-xorg-video-amdgpu instead of the Intel OpenCL runtime. Thanks for the guide!