HW transcoding on different hardware in cluster

barndoor101

New Member
Jan 24, 2025
Hi all,

I've just about finished my new cluster and I'm running into a problem.

The hardware in the cluster is 2x N100 mini PCs running most stuff as low-power nodes, a NAS running a 10105T CPU, and a spare compute node with an 8500T plus a P400 (the CPU will be upgraded at some stage when I find a reasonably priced 9900T).

The problem: with a Jellyfin LXC (RIP tteck) created using the script, playback works on the N100 nodes but not on the two older CPUs. Is there a config that lets the LXC use the different generations of Intel graphics (and perhaps the P400) seamlessly, or is there always going to be manual config involved?

I went through a ton of guides to set up GPU passthrough (the IOMMU steps, driver blacklisting, adding the vfio stuff to modules), but I'm stumped. Is there a way to check definitively whether a GPU will work properly in an LXC? Or should I not be blacklisting drivers if I want to use them in an LXC?
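
For reference, this is the kind of check I've been running to see whether VAAPI is actually usable inside the container (assuming vainfo and intel-gpu-tools are the right tools here - package names may vary by distro):
Code:
# inside the LXC: confirm the render node is visible and accessible
ls -l /dev/dri/

# inside the LXC: list the VAAPI driver and supported profiles (vainfo / libva-utils package)
vainfo --display drm --device /dev/dri/renderD128

# on the host: watch the iGPU engines during playback (intel-gpu-tools package)
intel_gpu_top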

LXC conf:
Code:
arch: amd64
cores: 2
features: mount=nfs;cifs,nesting=1
hostname: jellyfin
memory: 2048
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=BC:24:11:4C:4B:7F,ip=192.168.1.41/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: Ceph-Storage:vm-123-disk-0,size=12G
swap: 512
tags: 192.168.1.41
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id dev/serial/by-id none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1 dev/ttyUSB1 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1 dev/ttyACM1 none bind,optional,create=file
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

Cheers for any pointers!
 
Hello!

I have not worked with transcoding on LXCs but have on the VM side.

I believe you will always have to do some manual configuration here if you wish to use hardware acceleration, as Jellyfin requires you to set the type of transcoding you are using.
If you include the P400, you will need to change it to NVENC, but this will break when you move it to the nodes without the P400.

If your nodes do not all support the same type of hardware acceleration, you will have to modify the Jellyfin configuration each time you move the LXC between nodes, or turn acceleration off entirely and let the CPU carry the full load.

You could try setting up remote transcoding, but that will still leave you with a box in the same situation.

Thanks!
 
Hi,

Many thanks for the response - yeah, it's a bit weird that all the docs are written for VM passthrough rather than containers, which is annoying. I set the acceleration on the Jellyfin instance to QSV, but still no dice. I'll try VAAPI and see if that works.
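
If VAAPI doesn't work either, I'll try a manual transcode inside the container to rule Jellyfin itself out - something along these lines with jellyfin-ffmpeg (the binary path and sample file are just placeholders for whatever is on the system):
Code:
# rough VAAPI smoke test inside the LXC; discards the output via the null muxer
/usr/lib/jellyfin-ffmpeg/ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi \
  -hwaccel_device /dev/dri/renderD128 \
  -i sample.mkv -c:v hevc_vaapi -f null -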

I was curious about the switching of hardware because they all pretty much support the same QSV profiles. I'd go for the lowest common denominator if needed, but they should all do at least H.265 in hardware, even the 8th-gen chip, which is why I was hoping the container would migrate without having to redo the hardware assignments in the conf.

The remote transcode might be an idea, as that compute box will always be on and could act as a dedicated transcode target. I'll have a think.
 
So I managed to sort it. It turns out there is still some CPU usage even when doing a QSV transcode - I got confused and assumed it wasn't working properly.

Also, you can't do any of the driver blacklisting or vfio steps on the host, otherwise the LXC won't be able to use the device, since the container relies on the host's own driver (it's now obvious to me in hindsight, lol). Perhaps this could be clarified in the docs?
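
For anyone else hitting this, these are the sorts of host-side entries I had to remove again (file names depend on which guide you followed, and the PCI ID below is just a placeholder):
Code:
# /etc/modprobe.d/*.conf - lines like these stop the host loading the iGPU driver, so drop them for LXC use
#   blacklist i915
#   options vfio-pci ids=8086:xxxx

# /etc/modules - the vfio modules are only needed for VM passthrough, not LXC
#   vfio
#   vfio_iommu_type1
#   vfio_pci

# then rebuild the initramfs and reboot
update-initramfs -u -k all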

It was also doing audio transcoding and muxing on the CPU, hence the usage. It turns out all the nodes expose the same render node mount, so the container can indeed move between nodes, as long as the settings in Jellyfin are set to the lowest common profile (HEVC in this case).
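
A quick way to sanity-check that before a migration (assuming the iGPU enumerates as renderD128 on every node):
Code:
# on each host: the render node and its owning group should match across nodes
ls -l /dev/dri/renderD128
getent group render video

# inside the container after it moves: the jellyfin user still needs rw access here
ls -l /dev/dri/renderD128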
 