Took me two days to get it working, but it was well worth the effort. Thought I'd share, since I see this question asked often.
Set up the LXC
• Use Debian 12, update and upgrade, install curl:
Bash:
apt update -y && apt upgrade -y
apt install -y curl
Install Jellyfin
• Use the official install script:
Bash:
curl https://repo.jellyfin.org/install-debuntu.sh | bash
Set up the shares
• In the node's shell, create a mount point:
Bash:
mkdir /mnt/movies
mkdir /mnt/shows
• NFS only: mounting is as easy as adding this to /etc/fstab:
10.0.0.35:/media/videos/movies/ /mnt/movies nfs defaults 0 0
10.0.0.35:/media/videos/shows/ /mnt/shows nfs defaults 0 0
(NFS permissions are managed by the source, i.e. your NAS. SMB is a little trickier with permissions.)
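If you want to sanity-check the export paths before editing fstab, `showmount` (from the nfs-common package) will list them. The IP below is the example NAS address used throughout this guide:

```shell
# Optional: list what the NAS exports before writing the fstab entries.
# 10.0.0.35 is the example NAS IP from this guide; substitute your own.
showmount -e 10.0.0.35 || echo "could not reach the NFS server"
```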
• SMB only: enter the LXC console and type the following:
Bash:
id jellyfin
• Note down the UID and GID, then add 100000 to each. Unprivileged LXCs map container IDs to host IDs with a default offset of 100000; this is how we'll pass ownership permissions through to the LXC.
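To make the offset concrete, here's the arithmetic with made-up example IDs (use whatever `id jellyfin` actually printed in your container):

```shell
# Illustrative values only; substitute the UID/GID that `id jellyfin`
# printed inside your container.
uid=102
gid=110

# Unprivileged LXCs shift container IDs by 100000 on the host,
# so container uid 102 shows up on the host as uid 100102.
echo "host-side uid: $((uid + 100000))"   # -> host-side uid: 100102
echo "host-side gid: $((gid + 100000))"   # -> host-side gid: 100110
```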
• Now return to the host's shell
• SMB only: create a credentials file:
Bash:
nano /.smbcred
• SMB only: add your SMB credentials to this file in the following format:
username=[your smb share username]
password=[the password for that smb user]
domain=[typically just WORKGROUP]
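Since this file holds a plaintext password, it's worth locking it down so only root can read it. Shown below on a throwaway temp file for illustration; on your host, run the `chmod` against /.smbcred:

```shell
# Demonstrated on a temp file; on your host the target is /.smbcred
cred=$(mktemp)
chmod 600 "$cred"        # owner read/write only
stat -c '%a' "$cred"     # prints: 600
rm "$cred"
```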
• SMB only - Method 1: add the following to your host's /etc/fstab using the Jellyfin UID/GID + 100000 from earlier:
//10.0.0.35/media/videos/movies/ /mnt/movies cifs credentials=/.smbcred,x-systemd.automount,iocharset=utf8,rw,uid=100102,gid=100118,vers=3 0 0
//10.0.0.35/media/videos/shows/ /mnt/shows cifs credentials=/.smbcred,x-systemd.automount,iocharset=utf8,rw,uid=100102,gid=100118,vers=3 0 0
• SMB only - Method 2: This will just give full permissions to every user & group. Probably the less headachey way of doing things and will allow multiple services to have access instead of just Jellyfin. It'll also let you use the same mount points across LXCs. Just be careful who gets access:
//10.0.0.35/media/videos/movies/ /mnt/movies cifs credentials=/.smbcred,x-systemd.automount,iocharset=utf8,rw,file_mode=0777,dir_mode=0777,vers=3 0 0
//10.0.0.35/media/videos/shows/ /mnt/shows cifs credentials=/.smbcred,x-systemd.automount,iocharset=utf8,rw,file_mode=0777,dir_mode=0777,vers=3 0 0
(Both methods work, but I'll be honest I'm no expert with permissions stuff. If anyone knows a better way, feel free to let me know.)
• Reload systemd (so it picks up the fstab changes) and mount everything:
Bash:
systemctl daemon-reload
mount -a
• Edit the LXC conf file (/etc/pve/lxc/xxx.conf) to set bind mounts. mp= is the path inside the LXC where the share will appear:
mp0: /mnt/movies/,mp=/movies
mp1: /mnt/shows/,mp=/shows
• Start/restart your LXC. You should now see the mount points and have the correct permissions.
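If the mount points look wrong inside the container, check the host first: `findmnt` shows whether the share itself actually mounted (paths match the fstab entries above):

```shell
# Run on the PVE host; prints mount details if the share is mounted,
# or falls through to the message if it isn't.
findmnt /mnt/movies || echo "/mnt/movies is not mounted"
findmnt /mnt/shows  || echo "/mnt/shows is not mounted"
```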
Set up the Intel iGPU passthrough using QSV
• Open the LXC's console and find the render GID, then add 100000 to it:
Bash:
getent group render
• Now open the node's shell and find the device info (typically renderD128 with major/minor numbers 226, 128):
Bash:
ls -l /dev/dri
• Add the following to the LXC conf file (/etc/pve/lxc/xxx.conf):
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.hook.pre-start: sh -c "chown 100000:100154 /dev/dri/renderD128"
• In extreme layman's terms: the first line allows the container to use the device (the cgroup device rule), the second bind-mounts the hardware device into the container, and the third chowns it on the host before the container starts. Make sure to use the render GID + 100000 from earlier, and leave the UID as 100000 (0 + 100000 = root inside the LXC).
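One way to confirm the pre-start hook did its job: after starting the LXC, check the device's owner from the host. (100154 in the conf line above is an example render GID + 100000; yours may differ.)

```shell
# Run on the PVE host after the container starts.
if [ -e /dev/dri/renderD128 ]; then
    # expect 100000:<render gid + 100000>
    stat -c '%u:%g %n' /dev/dri/renderD128
else
    echo "no renderD128 on this machine"
fi
```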
• In your LXC's console, add the Jellyfin account to the render group:
Bash:
usermod -aG render jellyfin
• Install the Intel OpenCL runtime:
Bash:
apt install -y intel-opencl-icd
• Reboot the LXC
• Done! You now have both a remote network share and the iGPU (with QSV) passed through to an unprivileged container. Don't forget to enable and configure the transcoding settings in Jellyfin!
Testing & Troubleshooting
• To check supported codecs, type into your LXC's console:
Bash:
/usr/lib/jellyfin-ffmpeg/vainfo --display drm --device /dev/dri/renderD128
• To check the status of the OpenCL runtime:
Bash:
/usr/lib/jellyfin-ffmpeg/ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device opencl@va
• To view if transcoding is working, open your host's shell and install the Intel GPU tools:
Bash:
apt install -y intel-gpu-tools
• Now play something that requires transcoding and type the following:
Bash:
intel_gpu_top
• If everything is working, you should see the Render and Video engine bars heavily used. Also check the summary page of your Jellyfin LXC: CPU usage should stay very low, which indicates HW transcoding on the iGPU is working.