Since I began my Proxmox journey I have kept coming back to this beautiful place for hints and advice, which I really appreciate, so first of all a big thanks to all the beautiful minds here.
[my proxmox-node]
pve8.4.1
ASRock B550M Pro4
Ryzen 5 PRO 5650G
64GB DDR4 Kingston Server Premier ECC (2x32)
PowerColor Red Devil 7900 XTX 24GB
Just recently added the XTX for AI workloads (and maybe gaming).
[what i want to achieve]
Planning to spin up a first LXC for Jellyfin that uses the Ryzen 5 PRO 5650G iGPU for transcoding, and another LXC for Ollama/Open WebUI that uses the 7900 XTX.
Bonus question #0: Is it possible to build a game-streaming LXC somehow, or do I have to spin up a VM for that (e.g. Windows + Sunshine/Apollo, which I already managed to set up yesterday)? If I understand correctly, passing the dGPU through to a VM would then prevent me from using it in LXCs.
I tried a lot of things from loads of forums/threads and managed to pass the dGPU through to a Windows VM and stream with Apollo to a Moonlight client, which worked great, but that was mainly for testing. BUT I suffered from the very annoying reset bug; I tried a lot, including preloading the correct ROM, but could not get rid of it.
[where i stand now]
/etc/default/grub:
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
GRUB_CMDLINE_LINUX=""
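Side note, not part of my original steps but maybe useful for anyone following along: the GRUB change only takes effect after regenerating the bootloader config and rebooting; on systemd-boot installs (e.g. ZFS root) the kernel cmdline lives in /etc/kernel/cmdline instead. A minimal sketch:
Code:
update-grub                    # regenerates grub.cfg from /etc/default/grub
# on systemd-boot installs: edit /etc/kernel/cmdline, then run
# proxmox-boot-tool refresh
reboot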
Following Brandon Lee's guide, I removed all GPU blacklists from the Proxmox host (which had been needed for passthrough to the VM) and also commented out my /etc/modprobe.d/vfio.conf:
Code:
#options vfio-pci ids=1002:744c,1002:ab30,1002:1638 disable_vga=1 # these are dGPU, dGPUaudio, iGPU
#softdep amdgpu pre: vfio-pci
#softdep snd_hda_intel pre: vfio-pci
Then I ran:
Code:
update-initramfs -u -k all
apt install pve-headers-$(uname -r)
apt install build-essential software-properties-common make -y
update-initramfs -u
Status now:
Code:
root@proxmox-asrock:~# ls -ll /dev/dri
total 0
drwxr-xr-x 2 root root 120 Jun 10 21:13 by-path
crw-rw---- 1 root video 226, 0 Jun 10 21:13 card0
crw-rw---- 1 root video 226, 1 Jun 10 20:51 card1
crw-rw---- 1 root render 226, 128 Jun 10 20:51 renderD128
crw-rw---- 1 root render 226, 129 Jun 10 20:51 renderD129
Code:
root@proxmox-asrock:~# ls -l /sys/class/drm/renderD*/device/driver
lrwxrwxrwx 1 root root 0 Jun 11 08:39 /sys/class/drm/renderD128/device/driver -> ../../../../../../bus/pci/drivers/amdgpu
lrwxrwxrwx 1 root root 0 Jun 11 08:39 /sys/class/drm/renderD129/device/driver -> ../../../../bus/pci/drivers/amdgpu
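In case it helps: a quick way (not shown in my notes above) to double-check which render node belongs to the iGPU and which to the XTX is to look at the PCI addresses behind them:
Code:
ls -l /dev/dri/by-path                    # the node names contain the PCI address
lspci -nnk | grep -iA3 'vga\|display'     # vendor/device IDs plus the driver in use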
I also already set up 3 LXCs with some passthrough config added (see question #5: Jim's video).
Code:
[ollama w/ dGPU]
arch: amd64
cores: 8
cpulimit: 8
features: nesting=1
hostname: ollama
memory: 32768
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:D3:C2:0A,ip=dhcp,type=veth
ostype: debian
rootfs: nvme:201/vm-201-disk-0.raw,size=320G
swap: 4096
unprivileged: 1
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 62
lxc.idmap: g 107 104 1
lxc.idmap: g 108 100108 65428
[gaming w/ dGPU]
arch: amd64
cores: 8
cpulimit: 8
features: nesting=1
hostname: gaming
memory: 16384
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:F6:51:03,ip=dhcp,type=veth
ostype: debian
rootfs: nvme:202/vm-202-disk-0.raw,size=120G
swap: 8192
unprivileged: 1
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 62
lxc.idmap: g 107 104 1
lxc.idmap: g 108 100108 65428
[jellyfin w/ iGPU]
arch: amd64
cores: 2
cpulimit: 2
features: nesting=1
hostname: jellyfin
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:D3:A6:7C,ip=dhcp,type=veth
ostype: debian
rootfs: nvme:203/vm-203-disk-0.raw,size=32G
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 226:1 rwm
lxc.cgroup2.devices.allow: c 226:129 rwm
lxc.mount.entry: /dev/dri/renderD129 dev/dri/renderD129 none bind,optional,create=file
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 62
lxc.idmap: g 107 104 1
lxc.idmap: g 108 100108 65428
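A note on the idmap lines above, in case someone copies them blindly like I did: they keep the container's video group (44) and render group (107 in the template I used) mapped to the host's video (44) and render (104) groups, so the bind-mounted render node stays accessible. Those gids differ between templates, so it is worth checking both sides, e.g.:
Code:
getent group video render                     # on the host
pct exec 203 -- getent group video render     # inside the container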
I would now install the AMD GPU drivers on the host so the LXCs can use them, but found that "firmware-amd-graphics" is only available in the non-free repository, although I already see the amdgpu driver loaded for both cards. Then I thought about the ROCm drivers I had installed on another test machine for a 6700 XT Ollama VM, but quickly found out that I could/should not install them on my Proxmox host because they could mess up my iGPU, since it does not (?) support ROCm.
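If I do end up installing the firmware package, my understanding is that on PVE 8 (Debian bookworm) it sits in the non-free-firmware component, so the install would look roughly like this (sketch only, standard Debian sources assumed):
Code:
# add "non-free-firmware" to the bookworm lines in /etc/apt/sources.list, e.g.
# deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
apt update
apt install firmware-amd-graphics
update-initramfs -u -k all    # so the new firmware blobs end up in the initramfs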
[questions]
#1: Should I update the BIOS to a beta version? I'm running the board's 3.20 BIOS from 2023-10-05. Should I upgrade to a beta BIOS like 3.61 from 2025-04-10, stick with 3.20, or upgrade to the stable 3.40?
#2: Is it possible to pass the iGPU through to an LXC (for Jellyfin) and the dGPU to a VM (for Ollama/gaming) at the same time? That seems like the handiest solution for me.
#3: Can I use the iGPU in one LXC and install the AMD drivers for it only there, and likewise use the dGPU in another LXC and install the ROCm drivers only there, so that they don't interfere with the iGPU used exclusively by the first LXC?
#4: Is my planned use case, iGPU-accelerated Jellyfin + dGPU Ollama (+ optional gaming), possible, and how would I achieve it?
#5: It was fun to see Jim share a single GPU across several LXCs. I wondered whether this makes sense for my planned use case, e.g. splitting my 7900 XTX between Ollama, gaming, transcoding, and other stuff for future projects. What do you think about that?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
[UPDATE some hours later]
Since I had issues with the render node's owner/group (nobody/_ssh?) in the LXC after blindly following Jim's video, I found out that with the current PVE version it can be as easy as just adding the render device via the GUI. I have now tried that on my Jellyfin LXC (203) with "/dev/dri/renderD129" and gid 129, after removing all the "lxc.cgroup.." and "lxc.idmap.." entries I had added manually.
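For reference, the GUI device passthrough just adds a devX line to the container config; if I read the pct docs right, the equivalent looks roughly like this (the gid 129 is simply the number I entered, not necessarily the container's render group):
Code:
# /etc/pve/lxc/203.conf ends up with something like:
dev0: /dev/dri/renderD129,gid=129
# same thing via the CLI:
pct set 203 --dev0 /dev/dri/renderD129,gid=129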
The LXC now shows:
Code:
root@jellyfin:~# ls -ll /dev/dri
total 0
crw-rw---- 1 root video 226, 1 Jun 11 11:21 card1
crw-rw---- 1 root 129 226, 129 Jun 11 11:21 renderD129
Well, I also fed the "card1" device to it, not knowing whether that might be bad; my initial thought was to kind of isolate the iGPU on LXC 203, so that the dGPU and the ROCm drivers I plan to install in the LXC(s) using the dGPU cannot interfere with the iGPU exclusively used here. If that makes sense...
I can also imagine that passing through only the render node, and not the card device itself, might be a prerequisite for sharing a GPU across different LXCs like Jim shows in his video.
So question #6 is: am I right with this assumption and can I keep "card1" added to the Jellyfin LXC, or might that get me into trouble somehow?
With these changes I fired up the Jellyfin container again and verified that it now correctly uses the iGPU for transcoding.
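In case anyone wants to reproduce the check, this is roughly what I looked at (package names assumed from Debian bookworm; Jellyfin ships its own ffmpeg, so vainfo is only a sanity check):
Code:
# inside the jellyfin LXC
apt install vainfo mesa-va-drivers
vainfo --display drm --device /dev/dri/renderD129    # should list VCN decode/encode profiles
# on the host, while a transcode runs
apt install radeontop
radeontop                                             # GPU load should go up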
Next step: trying to get the ROCm drivers to work in the Ollama and gaming LXCs. Looking forward to feedback on my thread <3
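Not tested yet, but my rough plan for the dGPU containers, assuming renderD128/card0 really is the XTX: ROCm compute also needs /dev/kfd passed into the container, and ROCR_VISIBLE_DEVICES / HIP_VISIBLE_DEVICES should let me pin Ollama to the dGPU so it never touches the iGPU. Sketch only:
Code:
# assumption: 201 = ollama LXC, renderD128 belongs to the 7900 XTX
pct set 201 --dev0 /dev/dri/renderD128,gid=<container render gid> --dev1 /dev/kfd,gid=<container render gid>
# inside the container, after installing the ROCm userspace:
rocminfo | grep -i gfx            # the XTX should show up as gfx1100
ROCR_VISIBLE_DEVICES=0 ollama serve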