Issues with AMD RX 6600 XT Passthrough

Thread starter: Deleted member 171039 (Guest)
Hello everyone!

I followed the instructions in the wiki for GPU passthrough. I can successfully add the PCI device, which is recognized by the guest VM (Win 10) without the infamous Code 43, and I installed the Adrenalin drivers smoothly. However, I'm experiencing some problems:

- Poor desktop performance
- Can't connect via Parsec due to error 15000, which according to Parsec support is a driver error
- Can't launch Godot, which throws an error about an outdated OpenGL driver

I read something about dumping the GPU BIOS, but it's unclear to me whether it's related to my problem.
I also read about a reset bug that supposedly affects my GPU.

My VM config is:

12 GB RAM (balloon=0)
6 cores (numa=1)
OVMF BIOS
VirtIO GPU display
Q35 machine
VirtIO SCSI single controller
PCI device (x-vga=1)

Am I doing something wrong?
 
Set Display to none (vga=none) and don't use Primary GPU (x-vga=0). Primary GPU is mostly useful for NVidia GPUs but not AMD.
Share your VM configuration file and tell us your Proxmox host hardware if the above does not help.
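On the CLI, the two suggested changes could look roughly like this, assuming VM ID 106 (taken from the disk names later in this thread) and the PCI address from the posted config:

```shell
# Set the display to none so the passed-through GPU is the only output
qm set 106 --vga none

# Re-add the PCI device without the Primary GPU flag (x-vga defaults to 0)
qm set 106 --hostpci0 0000:0c:00
```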
 
Unfortunately I got the same result.

My VM Config:
agent: 1
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0;ide0
cores: 6
cpu: host,hidden=1,flags=+pcid
efidisk0: VMs:vm-106-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:0c:00
ide0: local:iso/virtio-win-0.1.221.iso,media=cdrom,size=519030K
ide2: local:iso/W10X64.22H2.PRO.ENU.AUG2022.ISO,media=cdrom,size=4578496K
machine: pc-q35-7.0
memory: 12288
meta: creation-qemu=7.0.0,ctime=1665656894
name: Gaming
net0: virtio=76:6E:3D:1A:98:9D,bridge=vmbr0,firewall=1
numa: 1
ostype: win10
scsi0: VMs:vm-106-disk-1,cache=writeback,discard=on,iothread=1,size=256G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=68f64887-af4c-4fc2-857e-263e8f84ce5c
sockets: 1
tpmstate0: VMs:vm-106-disk-2,size=4M,version=v2.0
vga: none
vmgenid: 79344161-9260-4d53-b17b-5c37b8730745

My Host:
MOBO: GIGABYTE AB350N-Gaming WIFI

CPU: AMD Ryzen 5 3600 6-Core Processor

GPU: MSI Radeon RX 6600 XT Gaming X 8 GB GDDR6 Graphics Card

RAM: Ripjaws V DDR4-3200 CL16-18-18-38 1.35V 32GB (2x16GB)
 
My VM Config:
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'
Don't use that. It is a work-around for NVidia Windows drivers.

Are you seeing this on a physical display connected to the GPU? Remote desktop might not be accelerated by the GPU. Sometimes a GPU is not activated because there are no displays (or "fake HDMI plugs") connected. Maybe the AMD Windows drivers did not install properly, if you are getting the old Microsoft software OpenGL renderer?
 
Sorry for the very late reply. The problem persists.
However, I noticed that it's not consistent: one time out of ten, the GPU works flawlessly.
The HDMI cable is plugged in.
Could this be related to the reset bug I read about for my GPU?
Or maybe to the vBIOS that I should dump? If that's the case, how should I do it?
 
Could this be related to the reset bug I read about for my GPU?
It should not have a reset bug, but you never know.
Or maybe to the vBIOS that I should dump? If that's the case, how should I do it?
It won't hurt. If you can boot the system with another GPU then you can dump the ROM of the 6600XT. Don't dump the ROM when the 6600XT is used during boot.
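A rough sketch of dumping the ROM via sysfs while booted from another GPU, assuming the 6600 XT sits at 0000:0c:00.0 (the address from the posted config) and a made-up output filename:

```shell
# Enable reading the ROM of the GPU at 0000:0c:00.0
echo 1 > /sys/bus/pci/devices/0000:0c:00.0/rom

# Copy the ROM to the directory where Proxmox looks for romfiles
cat /sys/bus/pci/devices/0000:0c:00.0/rom > /usr/share/kvm/rx6600xt.rom

# Disable ROM reading again
echo 0 > /sys/bus/pci/devices/0000:0c:00.0/rom
```

The dumped file could then be referenced in the VM config with `hostpci0: 0000:0c:00,romfile=rx6600xt.rom`.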

Try using pc-q35-6.2. I recently experienced AMD GPU drivers on Windows 10 22H2 having issues (code 43) with machine version 7.1, while version 6.2 worked much better.
And enable the PCI-Express option in the GUI as well because the drivers might expect that.
If the 6600XT is used during boot, you might need this work-around with Proxmox kernel version 5.15.
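Assuming VM ID 106 again, pinning the machine version and enabling the PCI-Express flag from the CLI could look like this:

```shell
# Pin the virtual machine type to Q35 version 6.2
qm set 106 --machine pc-q35-6.2

# Pass the GPU as a PCI-Express device (pcie=1 requires a Q35 machine)
qm set 106 --hostpci0 0000:0c:00,pcie=1
```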
 
Right now my grub file shows this:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"

Should I just replace
Code:
video=vesafb:off
?
 
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"
amd_iommu=on is useless as it is on by default. I have never seen anyone needing iommu=pt.
If you don't really need pcie_acs_override=downstream,multifunction, don't use it as it breaks security isolation between VMs and/or the Proxmox host.
video=vesafb:off,efifb:off is very old and no longer valid. nofb nomodeset video=vesafb:off video=efifb:off no longer works since Proxmox kernel version 5.15. If you want to pass through a GPU used during boot of the system, use initcall_blacklist=sysfb_init.
Should I just replace video=vesafb:off?
GRUB_CMDLINE_LINUX_DEFAULT="quiet initcall_blacklist=sysfb_init" will probably work for you. Check with cat /proc/cmdline afterwards to see if your changes are active.
Maybe you need the pcie_acs_override=... because the old B350 chipset and motherboard BIOS have poor IOMMU groups? Or maybe you need a BIOS update, but I don't know which BIOS version for your motherboard works well with passthrough (and some won't work at all). Let's not take the chance of breaking something until we need to.
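For a GRUB-booted host, the full round trip would look roughly like this (only the kernel parameter suggested above is kept):

```shell
# /etc/default/grub should contain the single suggested default line:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet initcall_blacklist=sysfb_init"

update-grub        # regenerate the GRUB configuration
reboot

cat /proc/cmdline  # verify the new parameters are active after reboot
```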
 
Edited grub to look like this
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet initcall_blacklist=sysfb_init"

I ran update-initramfs -u, then rebooted.
Still getting error 43.

cat /proc/cmdline outputs this
Code:
initrd=\EFI\proxmox\5.15.85-1-pve\initrd.img-5.15.85-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
 
cat /proc/cmdline outputs this
Code:
initrd=\EFI\proxmox\5.15.85-1-pve\initrd.img-5.15.85-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
This shows that none of your changes (ever) to /etc/default/grub have any effect. Your Proxmox is using systemd-boot instead of GRUB. You need to add it to the single (and only) line of /etc/kernel/cmdline. Don't add any other lines to the file and don't put empty lines before the single line or your system might not boot! Check again with cat /proc/cmdline after a reboot.
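For a systemd-boot host, the edit goes into /etc/kernel/cmdline instead; a sketch, reusing the root options from the cat /proc/cmdline output above:

```shell
# /etc/kernel/cmdline must stay a single line, e.g.:
# root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet initcall_blacklist=sysfb_init

proxmox-boot-tool refresh   # copy the kernel command line to the ESP(s)
reboot

cat /proc/cmdline           # confirm the change took effect after reboot
```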
 
This shows that none of your changes (ever) to /etc/default/grub have any effect. Your Proxmox is using systemd-boot instead of GRUB. You need to add it to the single (and only) line of /etc/kernel/cmdline. Don't add any other lines to the file and don't put empty lines before the single line or your system might not boot! Check again with cat /proc/cmdline after a reboot.
That worked! Thank you so much!

While we're at it, is there anything else I should check to make sure everything is good? Or maybe some optimizations I could do?
 