[TUTORIAL] Jellyfin LXC with Nvidia GPU transcoding and network storage

LordRatner

Jun 20, 2022
Hi. I struggled through this one recently and figured I'd share.

The Overview:

We're looking for an unprivileged LXC to serve as a Jellyfin server. We want to use an Nvidia GPU (in this case a GTX 1070 Ti) that is also used by other LXCs for other services; sharing the GPU is why we are not considering VMs. Additionally, this setup uses an NFS share for the video library, which adds a couple of extra requirements.


Phase one: Installing Jellyfin
  1. We'll start by creating a Jellyfin LXC using one of the TTECK scripts available here: https://tteck.github.io/Proxmox/
    • Follow the instructions using the host shell in the Proxmox GUI
    • I recommend using the Advanced options, so you know what you're getting. I use:
      • 2 or 4 cores
      • 4096M RAM (2 GB should suffice)
      • 16G disk (this will need to be much larger if you plan on many simultaneous transcodes, at least until Jellyfin 10.9 comes out with live transcode clearing)
      • Debian 12 (Ubuntu is fine if that's your preference)
      • Static IP address
      • IPv6 disabled
      • Root SSH enabled - You'll need this if you are rebuilding Jellyfin and want to save all your previous data and settings.
  2. Once created, open the console and stop the Jellyfin service: systemctl stop jellyfin
  3. (Optional - Transfer old settings) If you are transferring from a previous install, you need to transfer the /var/lib/jellyfin directory (ignoring the transcodes folder) from the old server to the new one.
    • WinSCP works well for this, if you remembered to enable root SSH access. If you forgot, open /etc/ssh/sshd_config, change the PermitRootLogin option to 'yes', and uncomment it if applicable. Run systemctl restart sshd to make it active.
    • Transfer the directories
    • In the new Jellyfin LXC, navigate to /var/lib/jellyfin and fix the ownership with chown -R jellyfin:jellyfin *
  4. (Optional - Shared Media folder) Create the mount folder where your shared media library is accessed on the Jellyfin LXC. In this example: mkdir /mnt/theater
  5. Shutdown the LXC
  6. (Optional - Shared Media folder) We need to give the LXC access to your media library. I'm using NFS, so make sure whatever server hosts your media is set up to export NFS shares. We are going to set up the NFS client on the Proxmox host, not the LXC; this makes migrating and snapshotting a bit easier. You can use CIFS instead, but you'll have to make sure the permissions and user/group settings allow read and write access.
    • On all of your proxmox nodes, create a folder to mount to. In this example we are going to use /mnt/lxc-share. It must be the same on all nodes.
    • Mount the folder to your shared media using the method of your choosing. Test to make sure your files are visible, and that you can add and edit files from the proxmox host. In my case NFS via autofs creates /mnt/lxc-share/theater
    • In the proxmox host, open the LXC configuration file (using LXC 140 in this example) nano /etc/pve/lxc/140.conf
    • Mount the media folder into the LXC using this line at the end of your configuration: lxc.mount.entry: /mnt/lxc-share/theater mnt/theater none bind 0 0
      • We are using this mounting method because it allows for LXC snapshots.
      • By mounting the NFS share on the Proxmox host instead of the LXC itself, you have fewer share clients to create and set up: one connection per node, and then as many LXCs as you want can access the share through the configuration entry.
  7. Start the LXC and confirm.
    • In the LXC console, navigate to your mounted media drive (Example: /mnt/theater) and confirm that you can see your media and add/edit files.
    • systemctl start jellyfin, then check that you can navigate to and access Jellyfin. If this is a fresh install, you can start setting everything up through Jellyfin now, or wait until after we've added the GPU. If this is a transfer, you should be able to log in with your old credentials, though it will ask you to set the admin password. Note: LDAP settings will transfer and work.
  8. Take a snapshot of the LXC. In this example we are calling the snapshot "post-install."
  9. Shutdown the LXC
  10. Did you take a snapshot?
  11. Seriously, make sure you took a snapshot.
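The shared-media part of Phase one boils down to one line in the container config. Here is a small sketch that just prints the bind-mount entry for the example paths used in this post (the paths and container ID are this post's example values; adjust for your setup):

```shell
# Sketch: print the lxc.mount.entry line from step 6.
# HOST_DIR must already be mounted on the host (NFS via autofs here);
# CT_DIR is the target inside the LXC, written without a leading slash.
HOST_DIR=/mnt/lxc-share/theater
CT_DIR=mnt/theater
echo "lxc.mount.entry: $HOST_DIR $CT_DIR none bind 0 0"
```

Append the printed line to the container's config on the host, e.g. >> /etc/pve/lxc/140.conf, after checking the output looks right.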

Phase Two: Adding the GPU.

This part is a pain. The problem is that Jellyfin requires two libraries, libnvcuvid1 and libnvidia-encode1, to work. You can read the documentation here: https://jellyfin.org/docs/general/administration/hardware-acceleration/nvidia#linux-setups

But installing these libraries pulls in whatever Nvidia driver version your current repositories consider current. This is a problem because the driver version on the host must match the driver version in the LXC exactly. You have to install the driver on the host before you can install it in the LXC, but the two sources may not give you the same version, so you end up going through the process below. Don't start yet; just read the bullets to get an idea of the flow:

  • Install drivers on the host. There is a whole world of how-tos explaining how to install an Nvidia GPU on Proxmox. Get to the point where you can run nvidia-smi and see your card listed. I strongly recommend using this process for selecting the drivers: https://github.com/keylase/nvidia-patch#step-by-step-guide. Just the drivers for now; we will address the patch later. Pick a relatively recent driver version.
  • Make sure you snapshotted your Jellyfin LXC! Install the two libraries on the LXC following the Jellyfin Instructions: https://jellyfin.org/docs/general/administration/hardware-acceleration/nvidia#linux-setups This will download the drivers identified as required packages, and you will be able to see what drivers you need to install on the host. Make note of the driver version.
  • Rollback the LXC to the snapshot, removing the drivers and packages.
  • Back on the host, uninstall the previous drivers ./NVIDIA-Linux-x86_64-430.50.run --uninstall (edit for the version you have) and go through the directions for downloading and installing the needed version.
  • Repeat the steps from the jellyfin instructions to install the two libraries.
  • Test to make sure the card is transcoding.
Got it? It sucks, I know, but if you can figure out how to install the two required libraries using the Nvidia drivers of your choice, instead of the repository drivers, please let me know.
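Everything here hinges on the host and LXC driver versions matching exactly, so it's worth checking that explicitly. A trivial sketch (same_driver is a hypothetical helper, not part of any tool; get each version string by running nvidia-smi --query-gpu=driver_version --format=csv,noheader on each side):

```shell
# Sketch: fail loudly if the host and LXC driver versions differ.
# Pass the host version as $1 and the LXC version as $2.
same_driver() {
    if [ "$1" = "$2" ]; then
        echo "match: $1"
    else
        echo "MISMATCH: host=$1 lxc=$2" >&2
        return 1
    fi
}
```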

Here we go:
  1. Install the GPU on your Proxmox host(s). I won't cover this process; you can find the steps online. Use the driver download, not the repository, found here: https://github.com/keylase/nvidia-patch#step-by-step-guide . You will need to disable nouveau, among other obnoxious steps. Google "proxmox lxc plex transcode" and you should find guides; just do the host steps, not the LXC steps.
  2. Shut down the LXC.
  3. On the host, run ls -l /dev/nvidia*. We have to pass these devices to the LXC. Take note of the two numbers separated by a comma, after the group and before the date; you can use both numbers, or just the first number with an asterisk. You should see the following entries, and we are going to pass them all:
    • /dev/nvidia0
    • /dev/nvidiactl
    • /dev/nvidia-modeset
    • /dev/nvidia-uvm
    • /dev/nvidia-uvm-tools
    • /dev/nvidia-caps/nvidia-cap1
    • /dev/nvidia-caps/nvidia-cap2
  4. To pass these to the LXC you need the following lines in the configuration file /etc/pve/lxc/140.conf. Note that you need to replace the major numbers (195, 234, 235, and 238 here) with the ones from your system:
    • lxc.cgroup2.devices.allow: c 195:* rwm
    • lxc.cgroup2.devices.allow: c 234:* rwm
    • lxc.cgroup2.devices.allow: c 235:* rwm
    • lxc.cgroup2.devices.allow: c 238:* rwm
    • lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
    • lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
    • lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
    • lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
    • lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
    • lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
    • lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file
  5. Take a snapshot!! "gpu-mounted" or something like that.
  6. Start the LXC. Install the proprietary drivers as described in the Jellyfin instructions here: https://jellyfin.org/docs/general/administration/hardware-acceleration/nvidia#linux-setups
  7. Once you've noted the correct driver version, roll back to the "gpu-mounted" snapshot.
  8. On the host, uninstall the drivers from step one using the same command you used to install them, but add the --uninstall flag.
  9. Download the correct drivers from the site in step one, and install them. Make sure the GPU registers when you run nvidia-smi.
  10. On the LXC, repeat step 6 again.
  11. Reboot the LXC
  12. You'll be able to run nvidia-smi from the LXC, though you won't see any processes (these only show up on the host).
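Rather than copying major numbers by hand, the cgroup lines in step 4 can be generated from the ls -l output in step 3. A sketch (ls_to_allow is a helper I wrote for illustration; it is pure text processing, so pipe the ls output into it and review before pasting into the config):

```shell
# Sketch: turn "ls -l /dev/nvidia*" output into lxc.cgroup2.devices.allow
# lines, one per unique character-device major number. For char devices,
# ls -l prints: crw-rw-rw- 1 root root MAJOR, MINOR DATE NAME
ls_to_allow() {
    awk '$1 ~ /^c/ {
        major = $5; sub(/,/, "", major)      # field 5 is "MAJOR,"
        if (!(major in seen)) {
            seen[major] = 1
            printf "lxc.cgroup2.devices.allow: c %s:* rwm\n", major
        }
    }'
}
```

Usage on the host: ls -l /dev/nvidia* /dev/nvidia-caps/* | ls_to_allow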

At this point you should be able to transcode something using the GPU. Set up Nvidia transcoding in the Jellyfin settings, start a movie, and change the resolution to something in the 480 range just to force a transcode. Now, on the host, run nvidia-smi and you should see the transcoding process listed; it'll have "jellyfin-ffmpeg" in the process name. Open a second window, start another movie, drop the resolution, and run nvidia-smi on the host again. You should now see two processes, indicating the second transcode is working.
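A quick way to count active transcodes from the host without scanning the whole table by eye (a sketch; it just greps nvidia-smi's process list for the jellyfin-ffmpeg name mentioned above):

```shell
# Count jellyfin-ffmpeg entries in nvidia-smi's process table.
# Run on the Proxmox host; the processes are not visible inside the LXC.
count_transcodes() {
    grep -c 'jellyfin-ffmpeg'
}
# Usage: nvidia-smi | count_transcodes
```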

The final step: Unlimited encoding streams

Nvidia has a cap on their consumer cards. This can be easily removed using this repo: https://github.com/keylase/nvidia-patch

This needs to be done on both the host and the LXC.

  1. Go to /opt/nvidia (create it if it doesn't already exist)
  2. wget https://raw.githubusercontent.com/keylase/nvidia-patch/master/patch.sh
  3. bash ./patch.sh
Now you should be able to open many instances of jellyfin and have them all transcoding at the same time. Running nvidia-smi on the host will show you how many transcoding streams are actually on the GPU. If you can get to 6 you know you applied the patch correctly.
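You can also check the session count without opening six browser windows. This sketch reads the NVENC session count from stdin so it can be fed from nvidia-smi (encoder.stats.sessionCount is a standard nvidia-smi query field, though availability can vary by driver; the threshold of 6 is the post's own success criterion):

```shell
# Sketch: decide whether the keylase patch took effect from the NVENC
# session count (reaching 6 simultaneous streams is the test above).
# Usage: nvidia-smi --query-gpu=encoder.stats.sessionCount --format=csv,noheader | check_sessions
check_sessions() {
    read -r sessions
    if [ "$sessions" -ge 6 ]; then
        echo "patch active: $sessions concurrent sessions"
    else
        echo "$sessions sessions so far; start more streams or re-check the patch"
    fi
}
```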

Good luck!
 
Got it? It sucks, I know, but if you can figure out how to install the two required libraries using the Nvidia drivers of your choice, instead of the repository drivers, please let me know.

This does indeed suck, and as soon as I realized that, I went looking and found this post.

I will be attempting this tomorrow and I'll update with my success or not.
 
I found @LordRatner's post very useful and I now have Jellyfin transcoding properly in an unprivileged Proxmox LXC.
But I think we do not need to complicate things so much.
The key idea is to install the same version of the Nvidia drivers on the host and in the LXC, right?

We can find the right version by running a "simulated" installation of the libnvcuvid1 and libnvidia-encode1 libraries with the following command in the Jellyfin LXC:
Code:
apt install --simulate libnvcuvid1 libnvidia-encode1
But before that we need to update the /etc/apt/sources.list by adding non-free non-free-firmware at the end of each line and then running apt update!

After knowing the correct version (for instance 525.147.05) we can proceed with the normal drivers installation on the host:
Code:
mkdir nvidia && cd nvidia
wget https://international.download.nvidia.com/XFree86/Linux-x86_64/525.147.05/NVIDIA-Linux-x86_64-525.147.05.run
chmod +x ./NVIDIA-Linux-x86_64-525.147.05.run
./NVIDIA-Linux-x86_64-525.147.05.run
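The download URL above follows a fixed per-version pattern, so you can build it from whatever version the simulate step reported. A sketch using the same mirror as above (driver_url is a hypothetical helper; verify the URL actually exists before relying on it, as not every version is published there):

```shell
# Sketch: build the .run installer URL for a given driver version,
# following the URL pattern used above.
driver_url() {
    echo "https://international.download.nvidia.com/XFree86/Linux-x86_64/$1/NVIDIA-Linux-x86_64-$1.run"
}
# e.g. wget "$(driver_url 525.147.05)"
```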

By the way, after that I had to make sure the drivers are loaded on boot by adding the following lines to /etc/modules-load.d/modules.conf:
Code:
nvidia
nvidia_uvm
And running update-initramfs -u -k all after that.

I also had to create the required device files for the Nvidia driver by creating the file /etc/udev/rules.d/70-nvidia.rules with the following:
Code:
KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"
For some reason, without that, the dev files were not created automatically!

After a reboot you can finally check if the drivers are properly installed (and with the right version) by running the nvidia-smi command.
 
To anyone who comes across this: I found a simpler way without having to reinstall the drivers on the host and container.

1. Install the correct driver through the .run executable from Nvidia's website on your host machine (https://www.nvidia.com/download/index.aspx). Run nvidia-smi to confirm installation.

2. Pass through the /dev/ devices, which can now be done by going to a container > Resources > Add > Device Passthrough (I believe this was added in PVE 8).

3. Boot the LXC and run the same installer, but with the --no-kernel-module flag.
Example: ./NVIDIA-Linux-x86_64-550.78.run --no-kernel-module

This should install the tools you need while still using the host's kernel driver.
Running nvidia-smi should now work on the LXC too.

As noted by @Helio Mendonça, adding

nvidia
nvidia_uvm

to /etc/modules-load.d/modules.conf on the host, and

KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"

to /etc/udev/rules.d/70-nvidia.rules, then running update-initramfs -u -k all, fixes the drivers not loading on server reboot.
 
@andrewski4 Thank you so much! This worked for me right away.

My last remaining issue with passthrough to an LXC is that the nvidia driver keeps my GTX1060 in P0 forever.
This power state pulls a constant 23W, which has quite an impact.

Any idea how to deal with this?
 
Have you found a workaround for this ? I have the same issue, my GTX1060 6GB draws ~30W at idle in P0 state
 
The workaround that I ended up with is to just use a Debian VM for Jellyfin with passthrough for the GPU. It idles in P8 with about 5-8 Watts.
I just keep the VM up 24/7 as the power consumption is lower than without it running.
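For anyone still on the LXC route, the power state being discussed is easy to watch. A sketch that reads the nvidia-smi CSV output on stdin (pstate and power.draw are standard nvidia-smi query fields; report_pstate is a helper written for illustration):

```shell
# Sketch: report whether the card has dropped out of P0 at idle.
# Usage: nvidia-smi --query-gpu=pstate,power.draw --format=csv,noheader | report_pstate
# Expected input looks like: "P8, 6.24 W"
report_pstate() {
    read -r line
    state=${line%%,*}          # text before the first comma, e.g. "P8"
    case "$state" in
        P0) echo "stuck in P0 (full clocks): $line" ;;
        *)  echo "idling in $state: $line" ;;
    esac
}
```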
 
