HW transcoding on different hardware in cluster

barndoor101

New Member
Jan 24, 2025
Hi all,

I've just about finished my new cluster and I'm running into a problem.

The hardware in the cluster is 2x N100 mini PCs running most stuff as low-power nodes, a NAS running a 10105T CPU, and a spare compute node with an 8500T and a P400 (the CPU will be upgraded at some stage when I find a reasonably priced 9900T).

The issue is that with a Jellyfin LXC (RIP tteck) created using the script, playback works on the N100 nodes but not on the two older CPUs. Is there a config needed for the LXC to use the different generations of Intel graphics (and perhaps the P400) seamlessly? Or is there always going to be manual config involved?

I went through a ton of guides to set up GPU passthrough (the IOMMU steps, driver blacklisting, adding the vfio modules), but I'm stumped. Is there a way to check definitively whether a GPU will work properly in an LXC? Or should I not be blacklisting drivers if I want to use them in an LXC?
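For context, the kind of check I mean is something along these lines (a rough sketch, assuming a Debian/Ubuntu-based container; vainfo may be packaged as libva-utils depending on the release, and older versions take no arguments):

Code:
# on the Proxmox host: /dev/dri only exists while the i915 driver is
# loaded there, so presumably it should not be blacklisted on LXC nodes
ls -l /dev/dri
lsmod | grep i915

# inside the Jellyfin container: is the render node visible at all?
ls -l /dev/dri/renderD128

# query VA-API through that node (package name may vary by release)
apt install -y vainfo
vainfo --display drm --device /dev/dri/renderD128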

LXC conf:
Code:
arch: amd64
cores: 2
features: mount=nfs;cifs,nesting=1
hostname: jellyfin
memory: 2048
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=BC:24:11:4C:4B:7F,ip=192.168.1.41/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: Ceph-Storage:vm-123-disk-0,size=12G
swap: 512
tags: 192.168.1.41
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id dev/serial/by-id none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1 dev/ttyUSB1 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1 dev/ttyACM1 none bind,optional,create=file
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
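The cgroup lines above reference fixed major:minor numbers, which I assume need to match on every node; they can be listed like this (nothing install-specific assumed):

Code:
# list the DRM devices and their major/minor numbers on each node
# (226 is the DRM major; card0 is typically 226:0 and renderD128 226:128,
# while a second GPU such as the P400 would show up as card1/renderD129)
ls -l /dev/dri
stat -c '%n %t:%T (major:minor, hex)' /dev/dri/*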

Cheers for any pointers!
 
Hello!

I have not worked with transcoding on LXCs but have on the VM side.

I believe you will always have to do some manual configuration here if you wish to use hardware acceleration, since Jellyfin requires you to set the type of transcoding you are using.
If you include the P400, you will need to switch that setting to NVENC, but that will break when the container moves to the nodes without the P400.

If your other nodes do not all support the same type of hardware acceleration, you will have to modify the Jellyfin configuration each time you move the LXC between nodes, or turn acceleration off entirely and let the CPU carry the full load.
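The relevant setting is the one under Dashboard > Playback > Transcoding; on a packaged Linux install it is persisted in encoding.xml, so in principle you could script the switch per node. A rough sketch (the /etc/jellyfin/encoding.xml path and the HardwareAccelerationType element are from memory and may differ with your version or install method):

Code:
# hypothetical per-node switch: set Intel QSV here (use "nvenc" on the P400 node)
sed -i 's|<HardwareAccelerationType>.*</HardwareAccelerationType>|<HardwareAccelerationType>qsv</HardwareAccelerationType>|' /etc/jellyfin/encoding.xml
systemctl restart jellyfin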

You could try setting up remote transcoding, but that still leaves the box doing the transcoding in the same situation.

Thanks!
 
