AMD Ryzen 7 "Renoir" 4750G APU and iGPU pass-thru (to Windows 10 guest)?

NetworkingMicrobe

New Member
Feb 25, 2021
Hello!

I recently built an SFF system with the newer Ryzen 7 "Renoir" 4750G APU and the ASRock X300 case/motherboard combo. I'm hoping to set it up with PVE (v6.3-1) with several guests, and would like to pass through its integrated GPU (Radeon RX Vega-based) to a Windows 10 VM.

According to this forum post, I will likely have to first enable the experimental Renoir APU support in the amdgpu driver at boot, since it only became standard in kernel 5.5, using the kernel parameter: amdgpu.exp_hw_support=1
Then I'm thinking of finding my iGPU's bus and function numbers via lspci, running lspci -nks <bus:device.function> to get the kernel module name, and adding that module to the /etc/modprobe.d/pve-blacklist.conf file. I guess I could even add the module name to the modprobe.blacklist=... kernel parameter in GRUB as well.
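To make that concrete, I imagine the commands would look roughly like this (the bus address below is just a placeholder until the unit arrives):
Code:
# list VGA-class devices to find the iGPU's bus/device.function address
lspci -nn | grep -i vga

# show the numeric IDs plus the kernel driver/module in use for that address
lspci -nnk -s 03:00.0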
This is where I'm at a loss as to what I'd need to do next to actually assign the iGPU to the VM.

Any tips on what the process might look like once the unit arrives, or is this a lost cause? Has anyone been successful in assigning an AMD APU's integrated GPU to a (ideally Windows 10) guest VM?

Cheers.
 
According to this forum post, I will likely have to first enable the experimental Renoir APU support in the amdgpu driver at boot, since it only became standard in kernel 5.5, using the kernel parameter: amdgpu.exp_hw_support=1
Then I'm thinking of finding my iGPU's bus and function numbers via lspci, running lspci -nks <bus:device.function> to get the kernel module name, and adding that module to the /etc/modprobe.d/pve-blacklist.conf file. I guess I could even add the module name to the modprobe.blacklist=... kernel parameter in GRUB as well.
If you don't need access to the graphical display of the hypervisor (i.e. the PVE shell), you can skip the exp_hw_support stuff and just blacklist the module directly. The module will be amdgpu, no need for lspci.
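In other words, something along these lines should be enough (just a sketch; rebuild the initramfs afterwards so it takes effect on the next boot):
Code:
# stop the host from loading the iGPU driver so vfio-pci can claim it instead
echo "blacklist amdgpu" >> /etc/modprobe.d/pve-blacklist.conf
# rebuild the initramfs so the blacklist is applied at boot
update-initramfs -u -k all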

Then, since it's technically just a PCIe device, you should be able to simply select the iGPU in the hardware tab of your VM upon adding a "PCI Device".

Whether it will actually work... well, that's hard to say. The only way to truly find out is to try; *technically* it should, but my experience with iGPUs is that it's... complicated.
 
@Stefan_R thanks for the tip.

I got passthrough of my 4750G APU's iGPU to a W10 guest working, more or less, and it runs pretty well (albeit limited to ~24-30 Hz refresh rates and no hardware acceleration) until I install the AMD GPU drivers, which messes everything up.

Below is an image of what happens to display output after installing AMD drivers. The mouse cursor is still fully responsive and moves around, just everything else is a pixelated mess.

If I disable the AMD Radeon Graphics device in Windows Device Manager (via RDP), everything is back to normal. Passing in a VBIOS makes no difference, so I am currently running without it. Sometimes the driver will just crash completely, so the screen will look normal again, and I'll get a Code 43 in Device Manager.

To get this to work, I had to turn on the IOMMU, break up the IOMMU groups (ACS override), and disable the EFI framebuffer at boot:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on pcie_acs_override=downstream,multifunction video=efifb:off"
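(For anyone following along: after editing /etc/default/grub the change still has to be applied; on a standard GRUB install that should just be the following.)
Code:
# regenerate the GRUB config with the new kernel cmdline, then reboot
update-grub
reboot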

I also added the amdgpu kernel module to the pve-blacklist.conf file, and bound the iGPU (and its associated integrated audio device) to VFIO at boot:
Code:
# cat /etc/modprobe.d/vfio.conf 
options vfio-pci ids=1002:1636,1002:1637
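(If you're reproducing this, the remaining pieces are the usual Proxmox PCI(e) passthrough prep; roughly:)
Code:
# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# after editing the modprobe/modules files, rebuild the initramfs
update-initramfs -u -k all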

Finally, I had to use a Q35 machine with SeaBIOS, and add the following line to the VM configuration file to pass through the iGPU and its audio device on the same multifunction device:
Code:
hostpci0: 03:00.0;03:00.1,pcie=1,x-vga=1
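A quick sanity check after rebooting (before even starting the VM) is to confirm both functions really got bound to vfio-pci:
Code:
# both functions should report 'Kernel driver in use: vfio-pci'
lspci -nnk -s 03:00.0
lspci -nnk -s 03:00.1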

Some users on Reddit may have had similar issues with passing through AMD GPUs in general:
However, the suggested fix of changing the VM vendor_id to something else (1234567890ab, KVMKVMKVM, or something random) didn't make a difference.

I'm at a loss as to what to try next. I'd like to get the drivers working, since that would really help with performance. I'm not using the VM for gaming or anything intense like that, but right now even watching a YouTube video or moving a window around quickly leaves room for improvement.


[Attached image: garbled, pixelated display output after installing the AMD drivers]
 
That's interesting! What are your IOMMU groups? i.e.:
Code:
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU Group %s ' "$n"; lspci -nns "${d##*/}"; done;
What happens without acs_override?
I would like to pass through the iGPU (4350G and ASRock A520M-ITX) to a macOS/Win10 VM. I'll try it when I get the time in the next few days.
 
That's interesting! What are your IOMMU groups? i.e.:
Code:
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU Group %s ' "$n"; lspci -nns "${d##*/}"; done;
What happens without acs_override?
I would like to pass through the iGPU (4350G and ASRock A520M-ITX) to a macOS/Win10 VM. I'll try it when I get the time in the next few days.

Hi @pottproll,

WITHOUT the ACS override patch, my IOMMU groups look like this (notice how the iGPU 03:00.0 and its audio device 03:00.1 are in the same group as a bunch of other stuff, preventing me from passing them through without causing the system to hang):
Code:
IOMMU Group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge [1022:1632]
IOMMU Group 1 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge [1022:1632]
IOMMU Group 1 00:02.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge [1022:1634]
IOMMU Group 1 00:02.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge [1022:1634]
IOMMU Group 1 01:00.0 Network controller [0280]: Intel Corporation Dual Band Wireless-AC 3168NGW [Stone Peak] [8086:24fb] (rev 10)
IOMMU Group 1 02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
IOMMU Group 2 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge [1022:1632]
IOMMU Group 2 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Renoir Internal PCIe GPP Bridge to Bus [1022:1635]
IOMMU Group 2 00:08.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Renoir Internal PCIe GPP Bridge to Bus [1022:1635]
IOMMU Group 2 03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Renoir [1002:1636] (rev d8)
IOMMU Group 2 03:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:1637]
IOMMU Group 2 03:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor [1022:15df]
IOMMU Group 2 03:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Renoir USB 3.1 [1022:1639]
IOMMU Group 2 03:00.4 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Renoir USB 3.1 [1022:1639]
IOMMU Group 2 04:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 81)
IOMMU Group 3 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 51)
IOMMU Group 3 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 4 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 0 [1022:1448]
IOMMU Group 4 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 1 [1022:1449]
IOMMU Group 4 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 2 [1022:144a]
IOMMU Group 4 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 3 [1022:144b]
IOMMU Group 4 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 4 [1022:144c]
IOMMU Group 4 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 5 [1022:144d]
IOMMU Group 4 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 6 [1022:144e]
IOMMU Group 4 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 7 [1022:144f]


WITH the ACS override (downstream and multifunction), my IOMMU groups look like this (iGPU is in its own group):
Code:
IOMMU Group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge [1022:1632]
IOMMU Group 10 02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
IOMMU Group 11 03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Renoir [1002:1636] (rev d8)
IOMMU Group 12 03:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:1637]
IOMMU Group 13 03:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor [1022:15df]
IOMMU Group 14 03:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Renoir USB 3.1 [1022:1639]
IOMMU Group 15 03:00.4 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Renoir USB 3.1 [1022:1639]
IOMMU Group 16 04:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 81)
IOMMU Group 1 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge [1022:1632]
IOMMU Group 2 00:02.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge [1022:1634]
IOMMU Group 3 00:02.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe GPP Bridge [1022:1634]
IOMMU Group 4 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge [1022:1632]
IOMMU Group 5 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Renoir Internal PCIe GPP Bridge to Bus [1022:1635]
IOMMU Group 6 00:08.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Renoir Internal PCIe GPP Bridge to Bus [1022:1635]
IOMMU Group 7 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 51)
IOMMU Group 7 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 8 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 0 [1022:1448]
IOMMU Group 8 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 1 [1022:1449]
IOMMU Group 8 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 2 [1022:144a]
IOMMU Group 8 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 3 [1022:144b]
IOMMU Group 8 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 4 [1022:144c]
IOMMU Group 8 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 5 [1022:144d]
IOMMU Group 8 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 6 [1022:144e]
IOMMU Group 8 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Renoir Device 24: Function 7 [1022:144f]
IOMMU Group 9 01:00.0 Network controller [0280]: Intel Corporation Dual Band Wireless-AC 3168NGW [Stone Peak] [8086:24fb] (rev 10)
 
@NetworkingMicrobe
Can you try the official pve 5.11.7 kernel?

There is a change in 5.11.7 to the IOMMU group initialization.
Could you report back whether your groups changed?
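(Installing the opt-in kernel should be roughly this, assuming the pve-kernel-5.11 meta-package is available in your configured repositories:)
Code:
apt update
apt install pve-kernel-5.11
# reboot into the new kernel, then confirm the running version
uname -r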
 
@NetworkingMicrobe
Can you try the official pve 5.11.7 kernel?

There is a change in 5.11.7 to the IOMMU group initialization.
Could you report back whether your groups changed?

Thanks for the reply.
I don't think IOMMU groups are the issue per se, since the problem really seems to be with the AMD driver itself that Windows installs (everything works fine, although performance isn't ideal, until I install the driver). In any case, I've installed kernel 5.11.7-1 and my IOMMU groups look exactly the same as on the current stable kernel release in PVE (see my comment above for how they look with and without ACS override enabled).
 
... I've installed kernel 5.11.7-1 and my IOMMU groups look exactly the same both with and without ACS override enabled.
No offense, but that's really weird. pcie_acs_override=downstream(,multifunction) should break all devices (and functions) into separate groups and no sane motherboard/BIOS should do that by default. I would consider it a bug if the kernel always applied the override. Maybe I'm wrong, or maybe something went wrong?
 
No offense, but that's really weird. pcie_acs_override=downstream(,multifunction) should break all devices (and functions) into separate groups and no sane motherboard/BIOS should do that by default. I would consider it a bug if the kernel always applied the override. Maybe I'm wrong, or maybe something went wrong?
My bad, I mistyped (clearly haven't had my morning coffee yet). I meant to say my IOMMU groups look the same in 5.11.7-1 as they do in the current stable kernel in PVE. Of course, if I turn ACS override ON then I see many more groups (17 in total), since nearly every device gets its own, as per my post above. If it's OFF, then I get only 5 groups.
 
Thanks for the update.
You are in fact the second one to confirm that the groups are shitty with a Zen 2 iGPU CPU.
Dunno how much the board plays a role (an A520 is for sure not ideal IOMMU-group-wise), but so far it looks to me like the CPU is the bigger factor for the groups.

Anyway, thanks for trying & cheers.
 
Thanks for the update.
You are in fact the second one to confirm that the groups are shitty with a Zen 2 iGPU CPU.
Dunno how much the board plays a role (an A520 is for sure not ideal IOMMU-group-wise), but so far it looks to me like the CPU is the bigger factor for the groups.

Anyway, thanks for trying & cheers.
I did get passthrough working, though; the problem arises when I install the AMD drivers in the guest OS, after which I see garbage output over VGA/HDMI/DP. And without drivers, performance is very poor when watching videos, moving windows around, etc.
 
I did get passthrough working, though; the problem arises when I install the AMD drivers in the guest OS, after which I see garbage output over VGA/HDMI/DP. And without drivers, performance is very poor when watching videos, moving windows around, etc.
This is further than anyone has gotten before, I think. I also think that the driver (just as was, or is, the case with Intel?) does not expect an integrated GPU in such a configuration. It shares its memory with the host CPU, which is a complication with passthrough. I expect that the driver wants to negotiate memory address ranges and/or expects certain channels between the GPU and CPU, which are not available in the VM. Either the drivers need to support this configuration or QEMU needs to do something special to get this working.
 
Finally, I had to use a Q35 machine with SeaBIOS
What CPU did you use? Host or the default KVM?
Maybe that helps with that:
I expect that the driver wants to negotiate memory address ranges and/or expects certain channels between the GPU and CPU
?
but so far it looks to me like the CPU is the bigger factor for the groups.
I don't think it's the CPU. The other guy had much better groups with his X570 board after a BIOS downgrade:



I'm currently changing some hardware so I won't be able to test it in the next few days.
 
This is further than anyone has gotten before, I think. I also think that the driver (just as was, or is, the case with Intel?) does not expect an integrated GPU in such a configuration. It shares its memory with the host CPU, which is a complication with passthrough. I expect that the driver wants to negotiate memory address ranges and/or expects certain channels between the GPU and CPU, which are not available in the VM. Either the drivers need to support this configuration or QEMU needs to do something special to get this working.
What CPU did you use? Host or the default KVM?
Maybe that helps with that:

?

I'm currently changing some hardware so I won't be able to test it in the next few days.

Something else I noticed: although the screen looks very bad (again, see my previous comment above for a photo example), the mouse is still fully responsive and I can see the cursor moving. Sometimes, if I click around randomly enough, it will "crash" the driver and Code 43 will appear in Device Manager for the iGPU, meaning the basic Windows display driver takes over and I can use the system again, but with quite poor graphics performance.

@pottproll I used the "default" option for CPU (kvm64). When I tried the "host" option, I got the same issue: the mouse is responsive but everything else is pixelated random garbage. Windows 10 does report "AMD Ryzen 7 PRO 4750G with Radeon Graphics" in Task Manager, though.

Some users on Reddit with other (dedicated) AMD graphics cards report similar issues with recent AMD drivers (although there are no photos to confirm); perhaps the driver can somehow detect that we are in a VM... I tried changing the vendor ID in the QEMU CPU flags to some random values, with no success. See references below:
Unfortunately, I tried the oldest available driver which supports APU iGPUs in general and still had the same issue. The "AMD Pro" drivers that some users use for their dedicated graphics cards don't support APUs either.

EDIT: I also tried changing the vendor_ID by adding the following line to my VM conf file, without any success:
Code:
args: -cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vendor_id=1234567890ab,hv_vpindex,kvm=off,+kvm_pv_eoi,+kvm_pv_unhalt'
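For what it's worth, one way to confirm the custom args actually end up on the generated QEMU command line should be qm showcmd (VM ID 100 below is just an example):
Code:
# print the full KVM command Proxmox generates for the VM and look for the vendor id
qm showcmd 100 | tr ' ' '\n' | grep -i vendor_id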
 
Also, I think the motherboard is the deciding factor for IOMMU groups. In this case, I'm using the ASRock X300W DeskMini case/motherboard kit; there are no other PCIe slots, just a socket for the CPU and onboard audio. However, the AMD APUs should all look the same in terms of IOMMU groupings for the onboard graphics, associated audio device, USB controllers, and encryption controller.
 
What CPU did you use? Host or the default KVM?
Maybe that helps with that:

?

I don't think it's the CPU. The other guy had much better groups with his X570 board after a BIOS downgrade:



I'm currently changing some hardware so I won't be able to test it in the next few days.
Correct, but the other guy still has an almost identical mobo to mine.
And with the 5800X I have perfect IOMMU groups on every BIOS. (There are 3 BIOS versions: the initial one, the first beta, and the final that followed shortly after, which is 1:1 the last beta with just a version number bump... I even took it up with ASRock and they confirmed that they only increased the version number and changed nothing else.) (I also argued with them about AGESA 1.2.0.0; the X570 ASRock Rack boards won't get AGESA 1.2.0.0 anytime in the near future.)

However, in my opinion the real difference between him and me is the CPU. I don't exclude the mobo as a factor for the groups, as it's mentioned everywhere that both play a role; I just think that the G series in particular has a bigger impact on this than the mobo.
Or to put it differently: sure, the downgrade helped him, but a CPU change would probably have helped too, who knows.
Or maybe it's just the mobo, which has some sort of tables for individual CPUs...

Whatever, it's an endless discussion xD


About the whole passthrough topic: it would be really cool if someone made a passthrough driver that just emulates the GPU and sends all the commands to the host. Imagine it, like LXC device mounting.
Then we would not need to fight with passthrough or IOMMU groups. And in theory, this shouldn't be impossible. Probably not super safe, though.
 
About the whole passthrough topic: it would be really cool if someone made a passthrough driver that just emulates the GPU and sends all the commands to the host. Imagine it, like LXC device mounting.
Then we would not need to fight with passthrough or IOMMU groups. And in theory, this shouldn't be impossible. Probably not super safe, though.
Something like VirGL? It's been working for a while apparently, but it does not suit a hypervisor like Proxmox easily.
 
Something like VirGL? It's been working for a while apparently, but it does not suit a hypervisor like Proxmox easily.
Exactly, I guess virtio-gpu is VirGL.
But somehow no one is working on that.

I could even imagine this as a replacement for MxGPU or NVIDIA's overpriced vGPU alternative. But yeah, that's something we can dream of and that will never be finished xD
 
I've been doing a bit more digging and decided to run the GPU-Z tool to get some detailed GPU info, on both native W10 and the W10 guest VM on PVE. Results below; I highlighted the differences... (this was while connected to the VM via RDP, not sure if that has an effect).

Not sure why it says the bus is PCI when my VM config for passthrough looks like this, clearly PCIe, along with the Q35 machine type:
Code:
hostpci0: 03:00.0;03:00.1,pcie=1,x-vga=1
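For what it's worth, the host side can also be asked what the device reports for its PCI Express capability and link, though an iGPU may not expose a conventional link at all, so take this with a grain of salt:
Code:
# dump verbose info for the iGPU and filter for PCIe capability / link lines
lspci -vv -s 03:00.0 | grep -iE 'express|lnkcap|lnksta'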

EDIT: And I forgot to highlight the "Revision", which is also different (00 vs D8).


[Attached screenshots: GPU-Z readout in the W10 VM (VM.png) vs. native W10 (native.png)]
 
