AMD Ryzen 7 "Renoir" 4750G APU and iGPU pass-thru (to Windows 10 guest)?

pottproll

New Member
Dec 3, 2020
needed it for LXC route anyway
Please let me know if you're making progress! I'm pretty new to Proxmox and Linux.
I tried to pass the iGPU through to a Debian Jellyfin VM but didn't get hardware acceleration (VAAPI in this case) working. Now I'm on kernel 5.11 and trying to run it in an LXC, but even that doesn't work so far. The firmware-amd-graphics package conflicts with PVE, so I tried without it, using Mesa 20.3 from the testing repo, but I can't get HW transcoding working.
jellyfin log:
Code:
[AVHWDeviceContext @ 0x5613f51e8540] libva: /usr/lib/x86_64-linux-gnu/dri/radeonsi_drv_video.so init failed
[AVHWDeviceContext @ 0x5613f51e8540] Failed to initialise VAAPI connection: 2 (resource allocation failed).
Device creation failed: -5.
Failed to set value '/dev/dri/renderD128' for option 'vaapi_device': Input/output error
Error parsing global options: Input/output error
 

thex

New Member
Mar 25, 2021
I did get it running and it works great now. However, it was quite a journey ;)

Code:
#important stuff for jellyfin in an unprivileged lxc
#we need the render device passed through
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.cgroup.devices.allow: c 226:128 rwm
#we need to map some user/group IDs so that they are identical inside the guest:
#uid/gid 1000 stays 1000, as my local media shares are mounted for this user,
#and the host render group (gid 108) is mapped through as well.
#note: the ranges are contiguous so that every id from 0 to 65535 is
#covered exactly once per type
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 108
lxc.idmap: g 108 108 1
lxc.idmap: g 109 100109 891
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

There are a few more steps to the mapping described here (subgid/subuid files etc.): https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
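For reference, the host-side part of those extra steps amounts to allowing root to map the passed-through IDs. A sketch, assuming the uid/gid 1000 and gid 108 mapping above (adjust the IDs to your own mapping):

```shell
# Sketch: host-side /etc/subuid and /etc/subgid entries that permit
# root to map uid/gid 1000 and gid 108 through 1:1. The IDs here are
# assumptions taken from the idmap above, not universal values.
cat >> /etc/subuid <<'EOF'
root:1000:1
EOF
cat >> /etc/subgid <<'EOF'
root:108:1
root:1000:1
EOF
```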

I used Ubuntu as a base for the installation, and there the render gid was 109, so I had to change the gid to match the host (might be resolvable with a different mapping).
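A quick way to compare the gid on both sides before writing the idmap (run it on the host and inside the container):

```shell
# Look up a group's numeric id - the render gid differs between
# distros (108 on the Debian host here, 109 on Ubuntu guests).
render_gid() { getent group "$1" | cut -d: -f3; }
render_gid render   # compare the host's output with the container's
```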

I also installed the AMD drivers with some hacks on both the host and the guest, and I still have to investigate where they are actually needed.
Hacks: install the packages directly, not via the AMD script; add the i386 architecture to the repos; disguise Proxmox's Debian as Ubuntu; use the drivers meant for Ubuntu 18.04; the installation of two packages will fail, and you might need to remove the amdgpu blacklist file manually...

Quite a few hacks. I would like to do a write-up, but no idea when I will find some time...
 
  • Like
Reactions: James Crook

pottproll

New Member
Dec 3, 2020
I also installed the AMD drivers with some hacks on both the host and the guest, and I still have to investigate where they are actually needed.
I guess it's the host where they're needed. I tried with a privileged LXC and changed the render group to match the host's render group. I'll try the unprivileged LXC with the id mapping described in the wiki now. If that doesn't work, I'll go for the AMD microcode on the host...

But it's nice to hear that it's actually working. Did you try HEVC encoding?
 

ShotgunPayDay

New Member
May 12, 2021
EDIT: The 5.11 kernel isn't required, so skip that install under the Proxmox section if you want to stay a little more stable.

I decided to post my notes, not necessarily a guide, on getting GPU passthrough working for a Debian LXC container with Renoir (I lazily copy-pasted from my editor). This was shamelessly taken from thex and countless others across many forums. My strategy was plain laziness: mirroring the outside environment (Proxmox) into the container (Proxmox AMD drivers). I hope this helps, and sorry if I violated any forum rules (I just made an account, as I finally have a separate computer to act as a server).

It was tested with LXQt + x2go + the x2go client and Neverball, which was fun (still a little choppy over the Wi-Fi network, but it worked as a proof of concept; before, it was unplayable on CPU only). Another note: I don't know if [libgl1-mesa-dri libglx-mesa0 mesa-vulkan-drivers xserver-xorg-video-all] needed to be installed in both Proxmox and the container, but my logic was symmetry between the container and the host. The original goal was to get a DE with accelerated graphics and sound. I didn't bother in the end, as I'm too spoiled by the ease of use of Pop!_OS and Wayland, even without GPU acceleration.

Sorry if there are errors in my notes. I didn't do this as a one-to-one process, but I tried to keep everything in a logical order.

Bash:
#For DeskMini X300 AMD Renoir 4650G Install
#PROXMOX INSTALL===== (this is for a weird tty3 bug if it won't install)
#Installed with ZFS raid 0 single disks to get systemd
chmod 1777 /tmp
apt update
apt upgrade
Xorg -configure
mv /xorg.conf.new /etc/X11/xorg.conf

nano /etc/X11/xorg.conf
#Under Section "Device"
#Under Identifier "Card1"
#Only update the Driver line ""
Driver "fbdev"

startx
#Finish install as normal

#PROXMOX=============
nano /etc/apt/sources.list
#Add no subscription repo
#Not for production use
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

nano /etc/apt/sources.list.d/pve-enterprise.list
#Comment out enterprise repo here

apt update
apt dist-upgrade
apt install pve-kernel-5.11 pve-headers libgl1-mesa-dri libglx-mesa0 mesa-vulkan-drivers xserver-xorg-video-all

nano /etc/kernel/cmdline
#Append for Renoir GPU support
amdgpu.exp_hw_support=1

pve-efiboot-tool refresh
reboot

#DEBIAN UNPRIVILEGED CONTAINER====
nano /etc/apt/sources.list
#I just replaced the contents of this file with the one from the Proxmox host

wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update
apt dist-upgrade
apt install pve-firmware libgl1-mesa-dri libglx-mesa0 mesa-vulkan-drivers xserver-xorg-video-all
shutdown

#PROXMOX=============
#Add to the end of your id.conf
#cgroup device was the same as thex's
nano /etc/pve/lxc/<your container id>.conf
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.cgroup.devices.allow: c 226:128 rwm

#start your container and you're done!
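
In case anyone wonders where the `c 226:128` in the allow rule comes from: it is the major:minor pair of the device node, which you can read with stat. A sketch (uses the hex `%t`/`%T` format specifiers, which both GNU and busybox stat support):

```shell
# Derive the cgroup allow line for any device node from its
# major:minor numbers (stat %t = major in hex, %T = minor in hex).
allow_line() {
    maj=$((0x$(stat -c '%t' "$1")))
    min=$((0x$(stat -c '%T' "$1")))
    printf 'lxc.cgroup.devices.allow: c %s:%s rwm\n' "$maj" "$min"
}
allow_line /dev/null   # /dev/null is char device 1:3
# on the host: allow_line /dev/dri/renderD128  -> c 226:128
```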

The problem now is that I don't need a DE in a container, and I don't need Jellyfin, as I don't really transcode anything.
I guess I'll be teaching myself machine learning? If anyone has any cool ideas for a headless GPU container, let me know.

I hope this helps.
 

paulmorabi

Member
Mar 30, 2019
(ShotgunPayDay's full notes quoted above)

Thanks for this. I have exactly the same setup as you (ASRock X300 & 4650G) and am trying to get VAAPI working with Frigate. I've upgraded to kernel 5.11 and installed the packages above. I can see amdgpu is loading the iGPU and /dev/dri is populated, but VAAPI always fails ("Failed to initialise VAAPI connection: -1 (unknown libva error)"). Any ideas? The Jellyfin docs and other googling seem to point to needing Mesa 20.3+ or the AMD GPU drivers (which only build on kernel 5.4 and below). What Mesa version do you have on your host and in the container? I have 20.3 for mesa-va-drivers in the container and 18.3 on the Proxmox host (not sure why there is a difference either).
 

ShotgunPayDay

New Member
May 12, 2021
Thanks for this. I have exactly the same setup as you (ASRock X300 & 4650G) and am trying to get VAAPI working with Frigate. I've upgraded to kernel 5.11 and installed the packages above. I can see amdgpu is loading the IGPU and /dev/dri is populated but VAAPI always fails ("Failed to initialise VAAPI connection: -1 (unknown libva error)"). Any ideas? The Jellyfin docs and other googling seems to point to needing mesa library 20.3+ or the AMD GPU drivers (which only build on kernel 5.4 and below). What version mesa library do you have in your host and container? I have 20.3 for mesa-va-drivers in the container and 18.3 on the proxmox host (not sure why there is a difference either).
Actually, I did a reinstall and the 5.11 kernel isn't required. I'd try it with kernel 5.4 first. I never tried to install Frigate or Jellyfin, so I'm unfortunately not well versed in that part, as my goal was a desktop in a container.
 

paulmorabi

Member
Mar 30, 2019
Actually, I did a reinstall and the 5.11 kernel isn't required. I'd try it with kernel 5.4 first. I never tried to install Frigate or Jellyfin so I'm unfortunately not well versed on that part as my goal was desktop in a container.

Thanks. Yes, 5.4 works as well if you enable the additional flag for AMD GPUs. I've got Jellyfin working and am working on Frigate. It seems very closely linked with the Mesa and libva drivers.
 

pottproll

New Member
Dec 3, 2020
20.3 for mesa-va-drivers in the container and 18.3 on the proxmox host
I have not had much time since the beginning of April, but after checking: I installed Mesa 20.3 and firmware-amd-graphics (not sure if both are necessary) in the container (I already had Mesa 20.3 on the host and an old Mesa in the container). Now VAAPI and HEVC encoding with Jellyfin work great!

Do you know which drivers are needed where, exactly?
 

paulmorabi

Member
Mar 30, 2019
Not 100% sure, but definitely kernel 5.4+, as that includes amdgpu. Then you need updated VAAPI-related libraries like Mesa and libva. I've got an upgraded Mesa in my Frigate container, but unlike Jellyfin I don't have an updated libva, and I can't seem to find the same version that Jellyfin is using (at least from a public PPA).
 

pottproll

New Member
Dec 3, 2020
Yes, you need kernel 5.4+ or experimental HW support to get the host to recognize the amdgpu at all. In the LXC you need the right id mapping to get access to the render/video group, like thex mentioned. On my standard Debian Buster container I installed libgl1-mesa-dri, libglx-mesa0 and mesa-va-drivers from Debian testing, and firmware-amd-graphics from stable non-free. For VAAPI HEVC encoding you need Mesa 20.1+. I don't exactly know where my problem was, since I'm pretty new to Linux and Proxmox. Maybe it was the older Mesa (though VAAPI with HEVC decode and x264 encode should have worked before then), maybe it was the AMD firmware that made it work.
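Since the 20.1 threshold keeps coming up: a small check inside the container saves guessing. A sketch; the package name mesa-va-drivers is an assumption (it may differ on your distro), and the comparison uses the plain `sort -V` trick:

```shell
# ver_ge A B -> succeeds if dotted version A >= version B
ver_ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

# Typical usage inside the container (package name assumed):
mesa_ver="$(dpkg-query -W -f='${Version}' mesa-va-drivers 2>/dev/null || echo 0)"
if ver_ge "${mesa_ver%%[-+~]*}" 20.1; then
    echo "Mesa $mesa_ver: new enough for VAAPI HEVC encode"
else
    echo "Mesa $mesa_ver: too old, need 20.1+"
fi
```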
 

paulmorabi

Member
Mar 30, 2019
You definitely need updated Mesa and libva alongside the kernel. Not sure about the AMD firmware either.

I've got Jellyfin working fine in Docker in LXC. I tried installing the updated libraries into the Frigate container, but while vainfo shows transcoding is supported, Frigate is not working. Hopefully the developer can assist with this.
 

micky1067

New Member
Aug 24, 2021
Hi, very interesting.
Could someone write a step-by-step wiki? I would also like passthrough on my DeskMini A300 with a Ryzen 4750G, for Win10 and Linux VMs.
Hope it will run.
 

RtcBoy

New Member
Sep 20, 2021
@NetworkingMicrobe
I used the GOPupd tool to make a UEFI iGPU vBIOS that works with OVMF.
I can see the desktop before the driver install, but I get the same problem after the driver is installed.
I think the GPU generates multiple display layers, and at the final step all layers are mixed together into a single screen; the mouse layer is fine, but the others are corrupted. Maybe without the driver installed, everything uses the same single display layer as the mouse.
Some other info said an old driver without the AMD Link remote display function can work on some Radeon add-in cards; I will try an old 3400G APU.
Maybe the new driver puts the image somewhere new, where the remote display function can read it from.
The attached file is my vBIOS.
 

Attachments

  • Renoir_Generic_VBIOS_updGOP.zip (76.7 KB)

Miniterror

New Member
Oct 25, 2021
I'm reading and trying, but I'm unable to get this working.
I have an ASRock DeskMini X300 with a 5700G, and I would really like to pass the GPU through to an Ubuntu 20.04 LXC that has Plex on it.
I have tried about 20 times now and none were successful.
Could someone help me get it to work? I have to admit I'm not very good with Linux, and all the IDs of the different parts are possibly the reason I'm not getting this to work.
 

paulmorabi

Member
Mar 30, 2019
(Miniterror's post quoted above)

It might be worth sharing your configuration for the LXC container, and also the version of Proxmox you are running, including the kernel. Also, any error messages you are getting, or where it is failing.

As an example, for LXC you need special entries in the config for passing through the GPU. This is located in /etc/pve/lxc/. Mine contains:

Code:
arch: amd64
cores: 6
features: nesting=1
hostname: xxxx
memory: 4096
nameserver: xxxx
net0: name=xxxx
ostype: debian
rootfs: local-lvm:vm-100-disk-0,size=64G
swap: 4096
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.mount.entry: /dev/dri/ dev/dri/ none bind,optional,create=dir 0, 0

I've added xxxx in a few places, but generally you need "nesting=1" and the lxc entries.

Within the LXC, you may need to install updated Mesa drivers (20.3+). With Ubuntu this should not be an issue (I believe there is a repo for this called kisak). I can't help much with Plex, but my understanding is that transcoding is enabled via the GUI.
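For completeness, the repo mentioned is the kisak-mesa PPA; inside an Ubuntu container that would look roughly like this (a sketch; it is a third-party repo, so verify the PPA name and contents before trusting it):

```shell
# Inside the Ubuntu LXC: pull newer Mesa from the kisak PPA
# (ppa:kisak/kisak-mesa - third-party repo, use at your own risk)
apt install -y software-properties-common
add-apt-repository -y ppa:kisak/kisak-mesa
apt update && apt full-upgrade -y
```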
 

Miniterror

New Member
Oct 25, 2021
(paulmorabi's reply with the example config quoted above)
Thank you, this gives me an idea of why it isn't working: mine is a privileged container, as I had an NFS share mounted in it, and I couldn't enable nesting.
Proxmox is version 7.1-7; the kernel is the default from that version, I assume, as I have never manually updated that part.

In the privileged container I could get /dev/dri showing, but I couldn't actually use it.
Probably because nesting was disabled.
I will rebuild the LXC, as the NFS share isn't needed anymore within it, and try again with nesting enabled and your lxc.conf, adjusted to my own needs for the container itself.
Thanks for sharing, appreciate it.
 

Miniterror

New Member
Oct 25, 2021
Just gave it another shot, but I'm not able to get it working, or at least I do not see it using HW acceleration; it just uses pure CPU power.
I set up a new LXC with Ubuntu Server 20.04.3, nesting is enabled as recommended, and I added the Mesa driver by adding the kisak repo.
See below the lines I have in /etc/pve/lxc/105.conf.

pveversion: proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve), according to pveversion -v

Code:
arch: amd64
cores: 4
features: nesting=1
hostname: Plex-test
memory: 2048
mp0: /mnt/pve/Samsung,mp=/mnt
nameserver: X.X.X.X
net0: name=eth0,bridge=vmbr0,firewall=1,gw=X.X.X.X,hwaddr=AA:BB:CC:12:34:56,ip=X.X.X.X/24,type=veth
ostype: ubuntu
rootfs: local:105/vm-105-disk-0.raw,size=50G
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.mount.entry: /dev/dri/ dev/dri/ none bind,optional,create=dir 0, 0

What I see in the LXC: radeontop can't load; it gives the error "Cannot access GPU registers, are you root?", even with sudo.
Trying glxinfo, I get the error "Error: unable to open display".
Running radeontop on the PVE host itself shows an unknown GPU.

ls /dev/dri/ shows "by-path card0 renderD128"
 

paulmorabi

Member
Mar 30, 2019
(Miniterror's post with the config and errors quoted above)

Hmm, if you can see card0 etc., then the GPU is being passed through. What happens if you run vainfo? Also, can you make the container privileged?
 

Miniterror

New Member
Oct 25, 2021
Hmm, if you can see card0 etc., then the GPU is being passed through. What happens if you run vainfo? Also, can you make the container privileged?
If nesting is needed, then a privileged container won't work, as that won't allow nesting.

Vainfo shows:
Error: XDG_RUNTIME_DIR not set in the environment.
Error: can't connect to x server!
Error: failed to initialize display.

At the moment I can't test; this afternoon I decided I had made so many changes, also at the PVE level, that I wanted to reinstall the entire server.
As it's only for home usage, I'm not too bothered about nuking the entire system.
At the moment I'm almost done reinstalling everything.

The next attempt will be later on.
So, to be clear:
1. Add the entries for the LXC in the conf file at the PVE level.
2. Install the kisak repo and install Mesa.
3. Should be a win, and nothing else is needed at the PVE level?

My setup is a 5700G, not the 4750G that is being discussed; could the iGPU implementation be different for the 5xxx series?
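
As a side note on those vainfo errors: they usually just mean vainfo is trying to open an X11 display. On a headless container you can point it at the DRM render node instead. A hedged sketch (the --display/--device flags are from libva-utils 2.x; check vainfo --help on older versions):

```shell
# Run vainfo against the DRM render node so no X server / X11
# display is needed; falls back to a hint when vainfo is missing.
check_vaapi() {
    if command -v vainfo >/dev/null 2>&1; then
        vainfo --display drm --device "${1:-/dev/dri/renderD128}" 2>&1
    else
        echo "vainfo not installed (apt install vainfo)"
    fi
}
check_vaapi
```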
 

paulmorabi

Member
Mar 30, 2019
(Miniterror's post quoted above)

Yes, it could be different, because you have a 5000-series chip with an integrated GPU. I'm not sure what the kernel and library compatibility requirements are for it. A quick Google search suggests others have passed the iGPU through successfully.
 
