GPU passthrough and transcode in Jellyfin

Tritri
New Member
Jul 7, 2025
Hi,

So I'm baffled. I finally managed to configure GPU passthrough for my Ubuntu (24.04) VM in Proxmox, after a few shenanigans changing the BIOS type from SeaBIOS to OVMF (UEFI).
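
(For anyone who finds this later: on the host side it was just the usual Proxmox passthrough steps, roughly the following, assuming an Intel CPU; AMD boards use amd_iommu instead.)

Code:
# /etc/default/grub (then run update-grub and reboot)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules (then run update-initramfs -u -k all)
vfio
vfio_iommu_type1
vfio_pci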

So, no problem: my main VM, running Jellyfin, now has a working GPU (an NVIDIA Quadro P1000) with the 575 driver. nvidia-smi shows it, and shows it working when Jellyfin is playing something:

[Screenshot: nvidia-smi output showing the P1000 in use during a Jellyfin transcode]
So here's my problem: transcoding works flawlessly for any single file I send it from Jellyfin. Right now a 4K/HDR/7.1 movie is playing on my second screen, fully transcoded to 1080p/SDR/stereo, with no frame drops and no problems. BUT if I try to play another file at the same time (on another screen, for example), the transcode crashes with these lines in the Jellyfin logs:

Code:
ffmpeg version 7.1.1-Jellyfin Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 13 (Ubuntu 13.3.0-6ubuntu2~24.04)
configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-static --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto=auto --enable-gpl --enable-version3 --enable-shared --enable-gmp --enable-gnutls --enable-chromaprint --enable-opencl --enable-libdrm --enable-libxml2 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libharfbuzz --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libopenmpt --enable-libdav1d --enable-libsvtav1 --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-vaapi --enable-amf --enable-libvpl --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
libavutil 59. 39.100 / 59. 39.100
libavcodec 61. 19.101 / 61. 19.101
libavformat 61. 7.100 / 61. 7.100
libavdevice 61. 3.100 / 61. 3.100
libavfilter 10. 4.100 / 10. 4.100
libswscale 8. 3.100 / 8. 3.100
libswresample 5. 3.100 / 5. 3.100
libpostproc 58. 3.100 / 58. 3.100
[AVHWDeviceContext @ 0x5f00132bcec0] cu->cuCtxCreate(&hwctx->cuda_ctx, desired_flags, hwctx->internal->cuda_device) failed -> CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
Device creation failed: -542398533.
Failed to set value 'cuda=cu:0' for option 'init_hw_device': Generic error in an external library
Error parsing global options: Generic error in an external library

I GUESS the important lines are these:

[AVHWDeviceContext @ 0x5f00132bcec0] cu->cuCtxCreate(&hwctx->cuda_ctx, desired_flags, hwctx->internal->cuda_device) failed -> CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
Device creation failed: -542398533.
Failed to set value 'cuda=cu:0' for option 'init_hw_device': Generic error in an external library
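
From what I can tell, the second ffmpeg process dies while parsing its global options, i.e. as soon as it tries to create its CUDA context, before it even opens the file. Stripped down to the part that fails, Jellyfin's command amounts to something like this (heavily trimmed, placeholder path):

Code:
ffmpeg -init_hw_device cuda=cu:0 -hwaccel cuda -i /path/to/movie.mkv -c:v h264_nvenc -f null -

Running two of those by hand, in parallel, should reproduce it outside Jellyfin if the problem is at the driver level.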

Here's my config in Proxmox for this VM :
[Screenshot: Proxmox hardware configuration for the VM]

I'm happy to provide any further information you might need. It's very frustrating to finally get a working GPU in my VM after weeks of trial and error, only to be stopped by a silly bug like this.
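
In text form, for anyone who can't load the screenshot, it's the usual passthrough setup; the relevant lines of the VM config look roughly like this (the PCI ID and memory size are placeholders, not my exact values):

Code:
bios: ovmf
machine: q35
cpu: host
memory: 8192
hostpci0: 0000:01:00,pcie=1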
 
Is there perhaps a limitation in Jellyfin that needs to be set?

The error message CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure suggests that the GPU's resources are exhausted or that there is a conflict during CUDA context creation. The card is not particularly powerful either. I have an RTX A2000 here and it works normally (but with Emby).
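
You could watch the card while the second stream starts, for example with:

Code:
nvidia-smi --query-gpu=utilization.gpu,utilization.memory,memory.used,memory.total --format=csv -l 1

If memory or utilization is already close to the limit with a single transcode, that would point towards resource exhaustion.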
 
Is there perhaps a limitation in Jellyfin that needs to be set?

The error message CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure suggests that the GPU's resources are exhausted or that there is a conflict during CUDA context creation. The card is not particularly powerful either. I have an RTX A2000 here and it works normally (but with Emby).
Well, the thing is: I used this card (not this exact one, but the same model) in my old bare-metal Jellyfin server and it could manage 3 or 4 transcodes at the same time, no sweat. So the only difference is that it's now a virtualized environment with GPU passthrough. It's a first for me, and MAYBE I fumbled something, but for the life of me I can't find it (and I've looked everywhere).

Indeed, it could be a conflict with the CUDA process or something like that (I must confess that the way drivers work on Linux has always confused me).
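
If it helps to check the conflict idea, I can run something like this while the second stream is starting and post what is actually holding the card:

Code:
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv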
 
I don't use Jellyfin, so ignore this if it's plain irrelevant/wrong, but according to NVIDIA's Video Encode and Decode GPU Support Matrix this GPU only has 1 encoder and 1 decoder chip. Might it be related to that?
[Screenshots: NVENC and NVDEC tables from NVIDIA's Video Encode and Decode GPU Support Matrix]
By the way, please don't full-quote messages you're directly replying to.
Edit: Never mind. I wrote this while you were answering above.
 
I don't use Jellyfin, so ignore this if it's plain irrelevant/wrong, but according to NVIDIA's Video Encode and Decode GPU Support Matrix this GPU only has 1 encoder and 1 decoder chip. Might it be related to that?
[Screenshots: NVENC and NVDEC tables from NVIDIA's Video Encode and Decode GPU Support Matrix]
By the way, please don't full-quote messages you're directly replying to.
Well, no: the chip count is a hardware limit, but in software there are 8 slots for encoding (at least that's my understanding; in any case I did multiple simultaneous transcodes with this very same card model in my old bare-metal server).
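
If it helps, I can re-run that kind of test in the VM and report how many sessions the driver actually accepts; a few encodes launched in parallel should show it (placeholder path, decoding on the CPU so only NVENC is exercised):

Code:
for i in 1 2 3 4; do
  ffmpeg -y -i /path/to/test.mkv -c:v h264_nvenc -f null - 2> "nvenc_$i.log" &
done
wait
grep -i error nvenc_*.log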
 
Okay, do you have a resource for that, maybe?

Google it. It's like the first result.

The reason you didn't have this limitation on the previous setup is that NVIDIA only started implementing these limits in the drivers in, like, the last year or two.
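
You can also watch how many NVENC sessions the driver thinks are open while the second stream starts, e.g. with:

Code:
nvidia-smi --query-gpu=encoder.stats.sessionCount,encoder.stats.averageFps --format=csv -l 1

That at least tells you whether you're actually anywhere near a session limit.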
 
Found the patch, applied it, ran the test to check that the patch was applied, and the problem is still there, even after a reboot.

EDIT: Well, I think I'll live with it and just ask my friend to download the file from Jellyfin whenever there's a transcoding issue, and that's it.
 
Google it. It's like the first result.

The reason you didn't have this limitation on the previous setup is that NVIDIA only started implementing these limits in the drivers in, like, the last year or two.
You're wrong. The P1000 is limited to 8 concurrent sessions. They've started to *lift* these session limits: it used to be 2, then 3, then 5, and now 8 on consumer cards. Quadros are generally unrestricted.

https://videocardz.com/newz/nvdia-g...rt-up-to-8-concurrent-nvenc-encoding-sessions
 
You're wrong. The P1000 is limited to 8 concurrent sessions. They've started to *lift* these session limits: it used to be 2, then 3, then 5, and now 8 on consumer cards. Quadros are generally unrestricted.

https://videocardz.com/newz/nvdia-g...rt-up-to-8-concurrent-nvenc-encoding-sessions
That's encoding sessions.

NVDEC is still 1 stream per GPU on the P600.

Coincidentally, NVDEC is likely being used along with NVENC on each video.

Rather than claiming someone is wrong, do your research first.

https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
 
The NVDEC table doesn't say anything about the number of sessions, just the number of "chips". Not the same thing.

You're right. And isn't that odd?

How about you try it with your NVIDIA GPU and let us know how many NVDEC sessions it can run in one go?
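
Something like this would do it: decode only, no encode (placeholder path, assuming a source NVDEC can handle, e.g. H.264 or HEVC):

Code:
for i in 1 2 3; do
  ffmpeg -hwaccel cuda -i /path/to/test.mkv -f null - 2> "nvdec_$i.log" &
done
wait
grep -i error nvdec_*.log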
 
Listen, I do think there's a reasonable limit on concurrent sessions. On my old bare-metal server I think I had the 550 drivers, and multiple simultaneous transcodes were possible with the same card. Here I'm on the 575-server driver. The error is present with every driver and every configuration I've tried, with or without the patch; every time it's the same. I could try an ffmpeg transcode on the host to see whether the card is damaged for some reason, but it doesn't really matter. My user will download their files if transcoding isn't working, and that'll be enough. I'll try anything you can think of to resolve the issue, but considering it works for one file at a time, the task is successfully failed.
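
If someone wants me to, the direct test I have in mind is something like this (placeholder path, H.264 output assumed):

Code:
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i /path/to/test.mkv -c:v h264_nvenc -c:a copy -f null -

I could also check the kernel log for Xid errors while the second Jellyfin stream fails, since (if I understand correctly) those often go along with this kind of CUDA failure:

Code:
sudo dmesg | grep -iE 'NVRM|Xid'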
 
Maybe it's because even one transcode is at 58% GPU utilization (according to your first picture). I tried, and mine (a GTX 1660) is at 24% with an HEVC -> HEVC, 4K 40 Mbps -> 1080p 10 Mbps transcode.
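
If you want to compare the dedicated encoder/decoder load rather than just the overall GPU percentage, something like this shows per-second sm/mem/enc/dec utilization:

Code:
nvidia-smi dmon -s u

The enc and dec columns are the interesting ones for transcodes.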