[WORKING] AMD RX 9070 XT support

Hi, I know this is kind of a long shot, but does anyone know when support for the RX 9070 XT will come? I did compile the 6.13 kernel and got the linux-firmware files, but the card still can't be passed through.

This is pretty low priority, I would imagine, but a heads-up would be appreciated.

Thank you!
 
I actually followed those steps: I compiled the latest kernel manually, and I got the linux-firmware files from git. There is no way to get Mesa 25 working on Proxmox right now; I tried to compile it from source, but Proxmox's versions of the required tools are too old and you get stuck in dependency hell.
I still couldn't get the GPU to get recognized.
 
I hadn't thought about installing the requirements on Proxmox itself; I went ahead and did all the firmware/Mesa/etc. on the VM that I wanted to pass the card to. Proxmox does see the card, so I assumed I didn't need to do anything there, but maybe I'm wrong?

I'm on kernel 6.11 on Proxmox, for what it's worth, with a regular 9070.
 
Do you mean it worked when passing through to a Linux VM with all the firmware preinstalled?
 
No, I haven't gotten it to pass through yet. I was able to get the VM to boot with the GPU "passed through", but I got no video; I could still SSH in, etc.

I've been trying to find a ROM file for my version of the 9070, but no luck yet. I might try doing the firmware/Mesa stuff on Proxmox. Again, I just assumed those steps needed to be done on the endpoint VM rather than the host.
 
I think Proxmox can't pass it through correctly since it can't detect it properly, even though I assumed Proxmox would just pass the device through as-is.

People did get it to work in VMs using libvirt, so I tend to think the problem is with Proxmox here.
 
I found a way to use the GPU! It's not perfect, but it survives reboots and shutdowns of the VMs.

Basically, I used this guide to unbind the GPU: https://forum.level1techs.com/t/vfio-pass-through-working-on-9070xt/227194.
NOTE: For this to work, you need to remove amdgpu from /etc/modprobe.d/blacklist.conf and also remove it from /etc/modprobe.d/vfio.conf.
This will make Proxmox use the amdgpu driver, which you can then correctly unbind and pass through to a VM.
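
If your setup followed the usual passthrough guides, the entries to remove probably look something like this (the exact contents are just an example; check what is actually in your files):
Bash:
# /etc/modprobe.d/blacklist.conf -- remove or comment out the amdgpu line:
#blacklist amdgpu

# /etc/modprobe.d/vfio.conf -- remove amdgpu from any softdep line, e.g.:
#softdep amdgpu pre: vfio-pci

# Rebuild the initramfs afterwards so the change applies at boot:
update-initramfs -u -k all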

This works well if you just need to reboot the VM. If you need it to work for another VM, or after a VM shutdown, use the following commands:

Bash:
# Unbind the vfio-pci driver from the GPU
echo "0000:2f:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
# Bind the amdgpu driver so the host picks the card up again
echo "0000:2f:00.0" > /sys/bus/pci/drivers/amdgpu/bind
# Then run the unbind script from the link above

Don't forget to change the PCI IDs to match your GPU!
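
If you're not sure what your GPU's PCI address is, lspci will show it (the grep pattern is just one way to filter):
Bash:
# List display devices with their PCI addresses and [vendor:device] IDs
lspci -nn | grep -Ei "vga|display"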

It's a bit janky, but it can be put into a script as a temporary workaround. I will update this post when I've made it.
 
Holy shit, that's progress! Great find!

The video is coming through the DisplayPort on the GPU, but the OS isn't recognizing the GPU? I haven't tested a game yet; trying a couple of reboots first.

/edit
I've tested three reboots of the VM, running the `unbind` script each time, and the VM boots normally! I'll probably look into adding that script as a hookscript that runs before the VM starts, and continue testing.

POE2 ran flawlessly at 90-100fps on a crowded screen. This is fantastic!
 
I have made the following script so the GPU works every time I want to start my VMs. It will give an error because it can't reset the GPU, but you can ignore that, since we are unbinding and binding it manually.
Bash:
#!/bin/bash

# Unbind the GPU from the vfio-pci driver (used for passthrough)
echo "0000:2f:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind 2>/dev/null
sleep 2  # Wait to ensure the operation completes

# Bind the GPU back to the amdgpu driver for use by the host
echo "0000:2f:00.0" > /sys/bus/pci/drivers/amdgpu/bind 2>/dev/null
sleep 2  # Allow time for the driver to initialize

# Stop the display manager (GDM) to free up the GPU
systemctl stop gdm
sleep 2  # Pause to ensure GDM is fully stopped

# Unbind the GPU from the amdgpu driver before resizing resources
echo "0000:2f:00.0" > /sys/bus/pci/drivers/amdgpu/unbind
sleep 2  # Short wait to ensure the unbind operation completes

# Resize the GPU's BAR2 memory region (useful for PCI passthrough)
echo 3 > /sys/bus/pci/devices/0000:2f:00.0/resource2_resize
sleep 2  # Give the system time to apply the change

# Start the Proxmox virtual machine with ID 106
/usr/sbin/qm start 106
echo "VM Started!"  # Print a message confirming that the VM has started

All you need to do is change the VMID; this is for one of my VMs.
Since I start my VMs and PC remotely from my phone, this integrates pretty well with my setup. You might want to look into running the script at VM startup if it works better for you.
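
For context on the resource2_resize line in the script: as far as I understand the sysfs interface, writing a value n requests a BAR size of 2^n MB (so echo 3 asks for 8 MB), and reading the file returns a bitmask of the sizes the card supports. The resize only works while no driver is bound to the device, which is why the script unbinds amdgpu first. You can check what your card supports like this (the PCI address is mine; adjust it):
Bash:
# Bitmask of supported BAR2 sizes; each set bit n means 2^n MB is supported
cat /sys/bus/pci/devices/0000:2f:00.0/resource2_resize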

I will update the title to [WORKING], but I won't mark this as Solved, since it's basically a temporary workaround until the Proxmox team implements this natively.
 
I went a slightly different route, as I haven't messed with the BAR2 stuff yet.

This is all I have in my script to run before starting the VM.
Code:
#!/bin/bash

echo "0000:03:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
sleep 2
echo "0000:03:00.0" > /sys/bus/pci/drivers/amdgpu/bind
sleep 2

Appreciate the help on this, I'm stoked to have my 9070 (non-XT :( ) working!
 
I had some trouble getting it to work with my configuration. The solution I found to be reliable (after a lot of restarts and trial and error) is to unbind the GPU from vfio-pci and bind it to amdgpu after the VM (with the GPU passed through) has stopped.

If I run that before the VM starts, the GPU gets stuck and only a system reboot helps.

What I also noticed is that I don't have problems using 256 MB for BAR2. Previously it seemed to work with Windows only if BAR2 was 8 MB, but that is not the case anymore (at least for me).

I enabled ReBAR in the BIOS, and GPU-Z in my Windows 11 VM shows that ReBAR is enabled, using 16 GB for BAR0 and 256 MB for BAR2.
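
If you want to double-check the BAR sizes from the Proxmox host instead of GPU-Z, lspci should show the Resizable BAR capability (the address is mine; adjust it):
Bash:
# Show the Resizable BAR capability with current and supported BAR sizes
lspci -vv -s 03:00.0 | grep -iA4 "resizable bar"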

Everything is working so far and I successfully tested my config with multiple reboots of the Windows VM.

It might be that I had trouble because my Intel iGPU was available to Proxmox at first; as soon as my media VM starts, the iGPU gets passed through and Proxmox only has the 9070 XT (amdgpu) left to use.

Here are my steps, which result in a reliable 9070 XT passthrough:

Update Kernel
I updated my kernel to 6.11 (the latest kernel version for Proxmox at the moment). To do that, go to the linked post and follow the steps: Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription
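
If I remember correctly, the installation from the linked post boils down to something like this (check the post for the exact package name for your setup):
Bash:
apt update
apt install proxmox-kernel-6.11
# reboot afterwards to actually run the new kernel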


Edit /etc/modules file
Since I had an AMD GPU before this upgrade (an old Vega 64), I removed kernel modules that were no longer needed. The only kernel modules I have now in /etc/modules are the following:

Bash:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

vfio
vfio_iommu_type1
vfio_pci

If you had to change your /etc/modules file, you need to run the following:
Bash:
update-initramfs -u -k all


Reboot Proxmox
If you updated the kernel or changed the /etc/modules file, you need to perform a reboot.


Download and use the GPU ROM file
The following example is for my PowerColor RX 9070 XT Reaper. Head over to TechPowerUp (for XT) or TechPowerUp (for non-XT) and copy the download link for your GPU model's ROM file.

SSH into your Proxmox host and cd into the kvm folder:
Bash:
cd /usr/share/kvm

Download the copied ROM file of your GPU model into the folder:
Bash:
# this would be for my PowerColor RX 9070 XT Reaper model
# wget https://www.techpowerup.com/vgabios/274342/Powercolor.RX9070XT.16384.241204_1.rom
wget REPLACE_YOUR_COPIED_DOWNLOAD_URL_HERE

Rename the downloaded file to 9070xt.rom or 9070.rom.
Bash:
# example for my PowerColor RX 9070 XT Reaper model
mv Powercolor.RX9070XT.16384.241204_1.rom 9070xt.rom

Edit your VM's .conf file (located in /etc/pve/qemu-server/). The name of the .conf file is the ID of your VM; mine, for example, is 200.
Bash:
nano /etc/pve/qemu-server/200.conf

Add your ROM file to the configuration. Since mine is a 9070 XT, I will use the renamed .rom filename "9070xt.rom".
Locate the hostpci entry for your GPU and add "pcie=1,x-vga=1,romfile=9070xt.rom". A complete example of this line for my configuration, with the GPU at PCI address 0000:03:00, is below:
Bash:
hostpci1: 0000:03:00,pcie=1,x-vga=1,romfile=9070xt.rom


Create / edit and add your hookscript
I created a hookscript for the GPU handling and put it in /var/lib/vz/snippets (I called the script vmGPU.sh):

Bash:
#!/bin/bash
phase="$2"

if [ "$phase" == "post-stop" ]; then
    # Unbind gpu from vfio-pci
    sleep 5
    echo "0000:03:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind 2>/dev/null
    sleep 2

    # Bind amdgpu
    echo "0000:03:00.0" > /sys/bus/pci/drivers/amdgpu/bind 2>/dev/null
    sleep 2
fi

Now we need to add it to the VM configuration. As described in the previous step, open your VM's .conf file (mine is /etc/pve/qemu-server/200.conf) and add the following line:
Bash:
hookscript: local:snippets/vmGPU.sh
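
Note that the script has to be executable (Proxmox will complain otherwise). You can also register the hookscript via the CLI instead of editing the .conf file by hand:
Bash:
# Make the hookscript executable
chmod +x /var/lib/vz/snippets/vmGPU.sh
# Equivalent to adding the hookscript line to 200.conf manually
qm set 200 --hookscript local:snippets/vmGPU.sh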

I hope this helps someone else get the 9070 (XT / non-XT) working. THANKS for the previous comments in this thread, otherwise I wouldn't have gotten it to work!
 
This worked perfectly with my Sapphire Nitro+ 9070 XT. I had to edit /etc/default/grub so the framebuffer is loaded. My GRUB file now looks like:
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
# Old line, commented out
# GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt  nofb nomodeset initcall_blacklist=sysfb_init video=vesa:off video=vesafb:off video=efifb:off video=simplefb:off"
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
GRUB_CMDLINE_LINUX=""

I also tweaked the script you posted to support Resizable BAR as well.

Bash:
#!/bin/bash
phase="$2"
echo "Phase is $phase"
if [ "$phase" == "pre-start" ]; then
    # Unbind gpu from amdgpu
    echo "0000:0f:00.0" > /sys/bus/pci/drivers/amdgpu/unbind 2>/dev/null
    sleep 2
    # Resize the GPU's BAR2 memory region (useful for PCI passthrough)
    echo 3 > /sys/bus/pci/devices/0000:0f:00.0/resource2_resize
    sleep 2
elif [ "$phase" == "post-stop" ]; then
    # Unbind gpu from vfio-pci
    sleep 5
    echo "0000:0f:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind 2>/dev/null
    sleep 2
    # Bind amdgpu
    echo "0000:0f:00.0" > /sys/bus/pci/drivers/amdgpu/bind 2>/dev/null
    sleep 2
fi

Many, many thanks friend!
 
Today I ran into a problem (after figuring out how to get the ROCm stuff on Linux to work): the GPU ends up in an unusable state if the GPU driver inside the VM does not work properly.

For example, after shutting down the Linux VM without the amdgpu driver loaded inside the VM, the GPU became unusable on the host. Only a restart of Proxmox helped.

Because I did not restart Proxmox at first, I tried to launch my Windows VM; it could not use the GPU and failed to start. I rebooted Proxmox and tried to start my Windows VM again, without success.

The reason it failed and left my GPU unusable: Windows booted into startup recovery, which forced a reset of the VM. That reset left the GPU useless, since it was "reset" without drivers loaded.

Just wanted to let people know: if this happens, make sure the VM boots correctly by removing the GPU from the VM config and using the Proxmox default display until (for example) Windows successfully boots again.

After that, everything was working again.
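
If you prefer doing that from the shell, the GPU entry can be dropped and re-added with qm (hostpci1 and VMID 200 as in my config; adjust both):
Bash:
# Temporarily remove the passed-through GPU so the VM boots with the default display
qm set 200 --delete hostpci1
# Once the guest boots cleanly again, re-add the GPU entry
qm set 200 --hostpci1 0000:03:00,pcie=1,x-vga=1,romfile=9070xt.rom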
 
@Tharanor could you please share the configuration of your VM (which monitor, etc.)? I managed to get it working once, but the GPU is not seen after rebooting the VM.


Can you give this script a go? There are certain edge cases where the VM may shut down (e.g. shutting the VM down via a command within the VM) in which the hook does not run. When the VM starts, the script below will always unbind the vfio kernel module and bind the amdgpu module.

Code:
#!/bin/bash

phase="$2"
if [ "$phase" == "pre-start" ]; then
     echo "vmGPU: Running pre-start hook"
     # Unbind gpu from vfio-pci
     sleep 5
     echo "0000:0f:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind 2>/dev/null
     sleep 2
     echo "vmGPU: unbound vfio kernel module"

     # Bind amdgpu
     echo "0000:0f:00.0" > /sys/bus/pci/drivers/amdgpu/bind 2>/dev/null
     sleep 2

     echo "vmGPU: bound amdgpu kernel module"
elif [ "$phase" == "post-stop" ]; then
    echo "vmGPU: Running post-stop hook"
    # Unbind gpu from vfio-pci
    sleep 5
    echo "0000:0f:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind 2>/dev/null
    sleep 2
    echo "vmGPU: unbound vfio kernel module"

    # Bind amdgpu
    echo "0000:0f:00.0" > /sys/bus/pci/drivers/amdgpu/bind 2>/dev/null
    sleep 2
    echo "vmGPU: bound amdgpu kernel module"
fi
 
Hi Tharanor, thank you for posting such a detailed guide. I am wondering what the lspci name for your 9070 XT is, because all I can see from mine is:
Code:
root@tomiwebpro:~# lspci -nn | grep -i amd
02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Upstream Port of PCI Express Switch [1002:1478] (rev 24)
03:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch [1002:1479] (rev 24)
only two PCIe switch ports, which I don't think are the VGA device. How do I resolve this? Many thanks!