AMD Ryzen 5600G iGPU code 43 error

thermosiphonas

Hi, I am new to Proxmox and trying to pass through the integrated GPU of an AMD Ryzen 5600G, following this guide and this guide.
I can see the GPU in the VM (a Windows 11 machine), but I get a Code 43 error and therefore cannot use it.

Can someone help me solve this?
 
Look at the end of the second guide you mentioned.

  • In random situations, I still get "error 43" when trying to initialize the GPU in the VM​

    Probably related to the "amd reset issue", that prevents the GPU from binding to a VM after it was used once. The only "real" solution for this is to restart the proxmox host after stopping a VM that used the GPU. :sad:

From what I know, you need to restart your hypervisor, i.e. the Proxmox host.
 
Try an Ubuntu Live installer ISO (without installing Ubuntu) to see whether you get proper output on the physical display (to rule out Windows AMD driver issues). Maybe also search the forum, as this has been asked before.
 
Tried it but still no luck. I get a black screen on my display and no GPU detected.

I got a "pci_hp_register failed with error -16" message with the VM refusing to boot. Power cycled the whole server and then the Ubuntu Live Installer VM booted but with no GPU detected.
Could it possibly be the famous AMD reset bug? If so, maybe I am on the right path but something more needs to be done?
 
I just recently managed to get iGPU passthrough working on a 5600G to a Windows 10 VM.
I had also been stuck at Code 43 for a long time, but adding a ROM file fixed the issue for me (rough Proxmox example below).
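In case it helps, the ROM file part of my Proxmox VM config looks roughly like this (VM ID, PCI address and file name are placeholders for your own setup; the ROM file itself goes into /usr/share/kvm/):

# /etc/pve/qemu-server/<vmid>.conf (excerpt, adjust to your hardware)
bios: ovmf
machine: q35
hostpci0: 0000:0e:00.0,pcie=1,x-vga=1,romfile=vgabios-cezanne-uefi.bin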
Now the GPU works fine, but I can't reboot only the VM; I have to reboot the whole host, because of the AMD reset bug, I guess.
vendor-reset has not helped so far, but I don't mind rebooting the whole host when necessary.

Good luck! It took me a while!
 
@LANnerd
Oh, that would be really good news. What exactly solved the Code 43 error for you? Which guide did you follow?
Best regards, Lausbub
 
Thank you very much. I will test it tomorrow. I have already tested this with several ROM files, but still got error code 43.
Can you tell me which GRUB command-line settings you are using?
UEFI or BIOS in the guest system?

Sorry for all the questions, but it feels like I've already wasted 40 hours on this. It would really help me.
 
This is what the edited line in GRUB looks like in my setup:

GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_acs_override=downstream,multifunction"
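And just to be complete: after changing that line you still have to regenerate the GRUB config and reboot (standard Debian/Proxmox commands; on a systemd-boot/ZFS install you would edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

nano /etc/default/grub      # edit GRUB_CMDLINE_LINUX_DEFAULT
update-grub                 # regenerate the GRUB config
reboot
cat /proc/cmdline           # after the reboot, check that the options are really active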

I believe I only used the ACS override, and I did that before all this, to pass through a SAS controller in the second x16 slot to a TrueNAS VM.

This ROM file is meant to be used with a UEFI VM. Where I found it there was also a BIOS version, but I have not tried that. This one should work with UEFI.

What could also be important is that my primary GPU is a 3060 Ti in x16 slot 1, passed through to a desktop VM. Because it's the primary GPU, the video output of Proxmox uses the 3060 Ti, which makes sure Proxmox does not try to use the iGPU at boot.
Now I can pass through the iGPU to a Jellyfin server VM. I have not tried to pass through any audio.
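If you don't have a second GPU to keep Proxmox off the iGPU, I guess you could instead blacklist the host drivers so the host never initializes it. I haven't needed to try this myself, so treat it as a rough sketch (the file name is arbitrary):

echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist-igpu.conf
echo "blacklist snd_hda_intel" >> /etc/modprobe.d/blacklist-igpu.conf
update-initramfs -u -k all   # rebuild the initramfs so the blacklist applies at boot
reboot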
 
Also, I just remembered I set the iGPU in the host BIOS to "UMA Specified" with 2 GB of memory allocated.
 
For me the situation is as follows:

I added the device ID (0x1638 for the Ryzen 5 5600G iGPU) to device-db.h and the udev rule in vendor-reset, so it resets like AMD_NAVI10, and this made the card show up whenever I start Qemu!

Without vendor-reset, Qemu refuses to start because of "pci_hp_register failed with error -16". And the first time you start it after a host reboot, the GPU also doesn't work inside the VM no matter what, even though other people report that this first start is exactly when it should work at all. For me, in that case all VFIO PCIe cards are simply absent/missing, and the VM is bugged out and reacts something like 100x slower.

It would seem that for me (or maybe due to kernel updates in general), the card is already reset when Qemu starts. Without vendor-reset, this vanilla Linux reset causes it to malfunction in the guest, and subsequently on the host, resulting in the loathed "pci_hp_register" error on the second invocation of Qemu after a reboot.

So thanks to vendor-reset now apparently also working with the Ryzen 5 5600G iGPU, the card showed up for the first time ever, but sadly with Code 43. I then switched my Qemu command to UEFI boot, which requires reinstalling Windows. It seemed an odd thing to try, despite being mentioned here and there by people such as LANnerd in this thread, because there are no real instructions for it anywhere in the tutorials, and none of the examples use UEFI. Anyway, I then also had to install Adrenalin with the driver again, and the card showed up in Device Manager without Code 43.

However, I naturally did all this while still using qxl-vga, so when I started Looking Glass it unfortunately seemed to use that virtual card. That is why I rebooted the VM (for the tenth time without rebooting the host), only to remove the qxl-vga device. Sadly, the card then showed up with Code 43 again, and I had to use Remote Desktop to work with the machine. When using qxl-vga again out of desperation, I was able to make the driver load in Windows a second time without errors, only to be faced with the same issue.

After trying various things a dozen times in a row (repeating exactly what I did before, adding a dedicated pcie-root-port, using romfile=vgabios-cezanne-uefi.bin, which I hadn't done before, plus rebooting the host and hoping the first Qemu start would be different, which it never was), I came to the conclusion that it is too random and unlikely for the Adrenalin driver alone to fix the Code 43.

I am not sure what is going on, but I think the Adrenalin installer performs some sort of magic reset procedure that fixes Code 43; often it doesn't, though, and then it demands that you restart the PC instead.

Before using UEFI, I was getting these weird warnings from Qemu that came over STDOUT (not STDERR), so you could not see them in the console with -daemonize:
qemu-system-x86_64: VFIO_MAP_DMA failed: Bad address
qemu-system-x86_64: vfio_container_dma_map(0x56141ecabab0, 0x380000000000, 0x10000000, 0x7ef58c000000) = -14 (Bad address)
qemu-system-x86_64: VFIO_MAP_DMA failed: Bad address
qemu-system-x86_64: vfio_container_dma_map(0x56141ecabab0, 0x380010000000, 0x4000, 0x7ef5b4400000) = -14 (Bad address)
qemu-system-x86_64: VFIO_MAP_DMA failed: Bad address
qemu-system-x86_64: vfio_container_dma_map(0x56141ecabab0, 0x380010005000, 0x1fb000, 0x7ef5b4405000) = -14 (Bad address)
qemu-system-x86_64: VFIO_MAP_DMA failed: Bad address
qemu-system-x86_64: vfio_container_dma_map(0x56141ecabab0, 0xfe900000, 0x42000, 0x7ef6b6a99000) = -14 (Bad address)
qemu-system-x86_64: VFIO_MAP_DMA failed: Bad address
qemu-system-x86_64: vfio_container_dma_map(0x56141ecabab0, 0xfe943000, 0x3d000, 0x7ef6b6adc000) = -14 (Bad address)
qemu-system-x86_64: VFIO_MAP_DMA failed: Bad address
qemu-system-x86_64: vfio_container_dma_map(0x56141ecabab0, 0xfe988000, 0x4000, 0x7ef6cc007000) = -14 (Bad address)
Those errors however disappear 100% of the time if I use UEFI with Qemu... I hope that's not because UEFI just breaks the logging somehow.


Here are all the detailed steps I did to make it work; use the normal tutorials as a reference to make sense of them and understand what each step is actually for. I put the things I believe to be essential in bold, the probably redundant stuff in italics, and what I am unsure about in normal formatting.
Grub CMD: amd_iommu=on iommu=pt rd.driver.pre=vfio-pci vfio-pci.ids=1002:1638,1002:1637,1022:15df,1022:1639,1022:15e3,1022:1635,1022:1632 kvm.ignore_msrs=1 video=efifb:off video=vesafb:off pcie_acs_override=downstream,multifunction
Note: the ACS override lets you "cherry pick" the GPU out of its IOMMU group, but it can allow the VM to read host memory! The video= options are not needed if the GPU is not used during boot.
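If you want to see what is actually in each group before resorting to the ACS override, a quick loop over sysfs is enough (nothing fancy, should work on any distro):

for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    echo "group ${g}: $(lspci -nns ${d##*/})"
done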

After boot, load vendor_reset and remove the host drivers that would otherwise claim the iGPU:
modprobe vendor_reset
rmmod amdgpu
rmmod snd_hda_intel
rmmod snd_hda_codec_hdmi

And rebind the devices to vfio-pci (so you don't have to blacklist the entire driver), plus fix the power states:

# My GPU is on iommu group 5 along with TPM module, sound card, mystery USB controller that has no outlets and dummy/empty PCIe root controller = don't need (not the same for everyone)
IOMMU_GROUP=5
for i in $(command ls /sys/kernel/iommu_groups/${IOMMU_GROUP}/devices); do
    echo "$i" > /sys/bus/pci/devices/${i}/driver/unbind
    echo "vfio-pci" > /sys/bus/pci/devices/${i}/driver_override
    echo "$i" > /sys/bus/pci/drivers/vfio-pci/bind

    # TODO: These two hurt power draw; they should be moved into the VM script later and toggled only while the VM is on
    echo "0" > /sys/bus/pci/devices/${i}/d3cold_allowed
    echo on > /sys/bus/pci/devices/${i}/power/control
done
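Afterwards you can verify that every function in the group really landed on vfio-pci:

lspci -nnk -s 0e:00.0    # repeat for 0e:00.1, 0e:00.2, ... and expect "Kernel driver in use: vfio-pci"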

In vendor-reset, add the following line to src/device-db.h (use lspci -nnk to get this number for your GPU, e.g. 1002:1638) after #define _AMD_NAVI10(op) \:
{PCI_VENDOR_ID_ATI, 0x1638, op, DEVICE_INFO(AMD_NAVI10)}, \

Then you can also add your device ID to 99-vendor-reset.rules, which you have to copy to /etc/udev/rules.d/ by hand... or you can do the same thing manually in the VM script:
modprobe vendor_reset
mygpuid=0000:0e:00.0; echo "device_specific" > /sys/bus/pci/devices/${mygpuid}/reset_method
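In case it isn't obvious: after editing src/device-db.h you need to rebuild and reinstall the module. Roughly what I mean (assuming the usual gnif/vendor-reset source tree with DKMS; you need dkms and your kernel headers installed first):

git clone https://github.com/gnif/vendor-reset.git
cd vendor-reset
# add the 0x1638 line to src/device-db.h as described above, then:
dkms install .
modprobe vendor_reset
dmesg | grep -i vendor_reset    # check that the module loaded and picked up the device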


The last thing is my qemu command:
qemu-system-x86_64 \
-enable-kvm \
-cpu host,kvm=on,l3-cache=on,hv_relaxed,hv_vapic,hv_time,hv_spinlocks=0x1fff,hv_vendor_id=hv_dummy \
-smp 3 \
-m 4G \
-machine q35,accel=kvm,kernel_irqchip=on \
-net tap,script=no,ifname=vm5,vnet_hdr=on -net nic,macaddr=52:13:37:2A:F1:75,model=e1000 \
-device ivshmem-plain,memdev=ivshmem,bus=pcie.0 \
-object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=64M \
-monitor telnet:127.0.0.1:4448,server,nowait \
-device virtio-mouse-pci \
-device virtio-keyboard-pci \
-device ich9-intel-hda \
-device hda-output \
-drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/x64/OVMF_CODE.4m.fd \
-drive if=pflash,format=raw,file=/home/myuser/STORE/VM/win10.OVMF_VARS.4m.fd \
-spice port=5900,addr=127.0.0.1,disable-ticketing=on,ipv4=on \
-device virtio-serial-pci \
-chardev spicevmc,id=vdagent,name=vdagent \
-device virtserialport,chardev=vdagent,name=com.redhat.spice.0 \
-drive index=0,file=/home/myuser/STORE/VM/win10_uefi.img,if=ide,cache=writeback,format=raw \
-vga none \
-serial file:/tmp/win10kvm.log \
-D /tmp/win10kvm.log2 \
-device pcie-root-port,id=root_port1,chassis=0,slot=0,bus=pcie.0,hotplug=on,multifunction=on \
-device vfio-pci,host=0e:00.0,bus=root_port1,addr=00.0,multifunction=on,x-vga=on,romfile=vgabios-cezanne-uefi.bin \
-device vfio-pci,host=0e:00.1,bus=root_port1,addr=00.1 \
-device vfio-pci,host=0e:00.2,bus=root_port1,addr=00.2 \
-device vfio-pci,host=0e:00.3,bus=root_port1,addr=00.3 \
-device vfio-pci,host=0e:00.4,bus=root_port1,addr=00.4 \
-device vfio-pci,host=0e:00.6,bus=root_port1,addr=00.6 \
-daemonize

The two lines with pflash are what enable UEFI (don't use the -bios switch). The file in the second line "should" be writable, which is why I copied it over to a user folder... however, I think that's probably overcomplicating it. When it first "worked", I was not using a root port nor adding the entire IOMMU group; I only did this out of desperation, basically. The normal switches without the root port are:

-device vfio-pci,host=0e:00.0,multifunction=on,x-vga=on,romfile=vgabios-cezanne-uefi.bin \
-device vfio-pci,host=0e:00.1 \

Also, what I did was add vfio_pci_core vfio_iommu_type1 vfio_pci to MODULES in /etc/mkinitcpio.conf and run mkinitcpio -P (concrete example below)... however, I don't think you really need to do this. Like listing vfio-pci.ids= on the kernel command line, I think it is only relevant if you want to forward the same GPU that you are using during the boot process. I think the only option you actually need in Grub is amd_iommu=on iommu=pt, because it enables the IOMMU in case you cannot enable it in the BIOS by hand (as in my case; I have a BIOS with practically zero options). But if your BIOS lets you enable it there, you might not even need to add anything to the kernel command line!
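For reference, the mkinitcpio part above looks like this on my Arch-style host (Proxmox/Debian users would list the modules in /etc/modules and run update-initramfs -u instead):

# in /etc/mkinitcpio.conf:
MODULES=(vfio_pci_core vfio_iommu_type1 vfio_pci)
# then rebuild all initramfs presets:
mkinitcpio -P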

I think it is possible that lots of unsupported GPUs might work with vendor-reset if you simply add their device IDs to either NAVI10, POLARIS10, VEGA10 or VEGA20. I had a brief look at the Linux AMD drivers, and there is an explanation in there that the driver itself only uses a handful of reset strategies. So my guess is that most of these strategies are covered by those four options in vendor-reset... according to that explanation, "BACO" reset is only used for dedicated GPUs (so not iGPUs), and NAVI10 is the only one of the four without "BACO".

There are a couple of people who have reported the 5600G or similar iGPUs working with VFIO, and not only in this thread. But none of them explicitly mentioned whether it still worked after restarting Qemu, which is vital for actually using it, and which I have now presented a solution for. So perhaps at this point only a small detail is missing for VFIO to work properly with the 5600G, and possibly tons of other iGPUs or GPUs as well.
 
By switching to the Niemez driver, in combination with the steps from my previous post (romfile=vgabios-cezanne-uefi.bin, UEFI, vendor-reset, etc.), I entirely got rid of Code 43 and the device now loads error-free!

EVERYTHING WORKS NOW! NO RESET BUG, ETC.

Make sure to use Virtual-Display-Driver and NOT to plug in a real monitor, to avoid another severe bug. VDD is much better anyway, since it also allows 165 Hz and such.

Here is how I installed the Niemez driver:

WHQL-R-ID-Software-Hybrid-24.3.1-Win10-Win11-PolarisVegaNavi-Nebula.exe
Options:
Standard Driver
RDNA 24.3.1
SDI
<do nothing>
AMD Default
Long
Enterprise
Lite
Stock (?)
Yes interface
1. Classic UI V5.5/23.40
place ccc2_install_v5.5_23.40.exe in the AMD-ID folder

One more thing: during testing I was not 100% sure whether it was abnormal that I only ever got "serial0" via the VNC output of Qemu and never any actual output. Now that it is all working, I can tell you that it is not abnormal at all. During testing, the only other way to connect to the VM is via Remote Desktop, because you have to use -vga none and you can't have other display devices if Looking Glass is to work (and it only loads late, as a Windows service). You should remove -vnc too, because it can't work. You can also plug in a real monitor to see the video output, but this can produce more bugs and crashes, so I would avoid it.
 
I have already edited my post above to reflect this additional bug I discovered, plus the solution to it. If you want to understand and avoid the bug, read this post. Otherwise, just use VDD as suggested.

The bug is this: when I powered the VM on for the second time AND had a video cable connected, Qemu would not really boot and would almost crash my PC. Sometimes the PC even powered off instantly (not caused by overheating; more as if the RAM were corrupt or the BIOS defective or something).

However, when I don't have a display cable connected to my motherboard, this never happens. I have rebooted and tested this dozens of times over Remote Desktop in the last few days.

So before I power on the VM, I have to physically unplug the cable, wait a minute for Windows to boot, and then I can plug it in again and Looking Glass and everything works normally.

In dmesg I see lots of these messages (sometimes only a few, or none) when I start Qemu, and the mouse hangs for a moment:
[ 516.627501] AMD-Vi: Completion-Wait loop timed out
[ 516.753167] AMD-Vi: Completion-Wait loop timed out
[ 516.927165] AMD-Vi: Command buffer timeout
[ 517.100923] AMD-Vi: Command buffer timeout

But when no cable is plugged in, it just stops, and it works normally when I plug the cable in again.

I then discovered a project named RadeonResetBugFix, which seemed promising at first, but it gave me the impression that it doesn't really concern the same bug, and that it probably doesn't do anything major on the hardware level like a driver would, and thus wouldn't really help any more than the stuff I had already tried manually... not sure how true this actually is; I tried it a few times, it didn't work, and I found a much better final solution in the process:

In this Reddit thread I learned that you can install Virtual-Display-Driver, which removes the need to plug a physical cable into your card and (at least in my case) also gets rid of the above-mentioned bug, so you don't need the unplugging workaround anymore. It can even run at 165 Hz and works just the same with Looking Glass. In fact, they want to ship the next version of Looking Glass with this driver integrated... I hope they do it soon, because it takes a lot of groping in the dark and fighting ugly bugs to figure out that this thing exists and solves most of your problems.

TL;DR: Use Virtual-Display-Driver and don't try to plug a monitor into your forwarded GPU, because it could bug out Qemu.
 
If this really is a working solution, it would significantly enhance the 5600G. Thank you for your work so far. I've already spent a lot of time unsuccessfully trying to find a solution myself. I still have a few questions: how did you extract the vBIOS? And what was really the solution for error 43 in the end? I have a Windows machine very similar to yours, but I still get error code 43 and was never able to fix it. Was it the virtual monitor or the Niemez driver?
 
Thanks. You can find the UEFI vBIOS, and all the info you asked for, in this thread. I did not extract it myself; I think you need to do that from a Windows installation somehow.

To fix Code 43 you need the mentioned Niemez driver plus the mentioned installer options (which are fairly standard; I picked them more or less at random and it worked on the first try), AND you need to load romfile=vgabios-cezanne-uefi.bin, AND, as mentioned, without the modified vendor-reset the card never showed up for me in the first place... so you probably also need vendor-reset. Don't forget that you need to start Qemu with the UEFI code as shown, which also requires you to reinstall Windows.

Then use Virtual-Display-Driver, never plug in a physical output, and it all works flawlessly.

There are a few more details like -machine q35; I am not sure whether that is really needed.
 
Are you really talking about the same issue that vendor-reset is supposed to fix? It's not about Error 43; it's that you can't start a VM with a passed-through iGPU multiple times. I don't see anything here about that being fixed!
 
