[TUTORIAL] PCI/GPU Passthrough on Proxmox VE 8 : Installation and configuration

asded

In this article, I propose taking a closer look at the configuration process for setting up PCI Passthrough on Proxmox VE 8.0 (I had initially planned this article for Proxmox VE 7, but since the new version has just been released, it's an opportunity to test!). This article will be the beginning of a series where I'll go into more detail on how to configure different types of VMs (Linux, Windows, macOS and BSD).

I'd also like to thank leesteken for his valuable recommendations and corrections to the first version of this post.

  • PCI/GPU Passthrough on Proxmox VE 8: Windows 10 & 11 (Coming soon...)
In addition to the installation and configuration of both versions, I plan on
utilizing the Sysprep utility to streamline the deployment of templates based
on Windows.


  • PCI/GPU Passthrough on Proxmox VE 8: Debian 12 (Coming soon...)
I intend to commence with an installation via Cloud-init, also to facilitate
the deployment of templates.


  • PCI/GPU Passthrough on Proxmox VE 8: OpenBSD 7.3 (Coming soon...)
I am not yet certain about the approach I will take. I must admit that I am not
sufficiently familiar with BSD systems to be unequivocal about my chances of
success, but it appears that my RX 580 is compatible.


  • PCI/GPU Passthrough on Proxmox VE 8: macOS (Coming soon...)
The version is yet to be determined.

Prerequisites & Context


In my case, I will use an old ATX format "Gaming" PC as the base and install Proxmox on it. Firstly, because I don't have anything else decent on hand, and secondly because all the VMs will be limited to "desktop" use.

My configuration:
  • MOTHERBOARD: ASRock 970 Pro3 R2.0
  • CPU: AMD FX 4300 Quad-Core Processor
  • GPU: SAPPHIRE Pulse Radeon RX 580 8GB GDDR5
If you want to dedicate a PC to virtualization with Proxmox, make sure you have a compatible system (CPU/motherboard):
  • The processor must support the virtualization extensions: VT-x and VT-d for Intel processors, AMD-V (SVM) and AMD-Vi for AMD processors, so that IOMMU support is effective. The motherboard chipset and firmware must support them as well (a quick check from a running Linux system is shown just after this list).
  • You will then need to enable these virtualization extensions from your BIOS or UEFI interface. You can refer to this page, which I find quite concise
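
If you want to confirm CPU support from an already running Linux system before touching the BIOS, a quick check is to count the vmx (Intel) or svm (AMD) CPU flags; a non-zero result means the extension is exposed by the CPU:

Bash:
# non-zero output = VT-x (vmx) or AMD-V (svm) is present
grep -E -c '(vmx|svm)' /proc/cpuinfo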

In my case, after accessing the BIOS, I modified these two values:

Advanced/CPU_Configuration/Secure Virtual Machine [Enabled]
Advanced/North_Bridge/IOMMU [Enabled]

Adding new variables to the boot file


Two configurations are possible, depending on which bootloader Proxmox uses: one for GRUB and the other for systemd-boot. To determine which one is in use, run the following command on the Proxmox host:

Bash:
efibootmgr -v
  • If the command returns a message indicating that EFI variables are not supported, GRUB is used in BIOS/Legacy mode.
  • If the output contains a line that looks like the following, GRUB is used in UEFI mode.
Boot0005* proxmox [...] File(EFI\proxmox\grubx64.efi)
  • If the output contains a line similar to the following, systemd-boot is used.
Boot0006 * Linux Boot Manager [...] File(EFI\systemd\systemd-bootx64.efi)
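
If the efibootmgr output leaves you unsure, proxmox-boot-tool (which also comes up later in this thread) reports directly which bootloader Proxmox manages; an error about /etc/kernel/proxmox-boot-uuids not existing generally means a plain GRUB installation:

Bash:
proxmox-boot-tool status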

Activating IOMMU for GRUB


If GRUB is your bootloader, whether in BIOS/Legacy or UEFI mode, for an AMD CPU, add the following arguments to your boot file:

/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

For an Intel processor, use the following configuration:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

Then refresh GRUB with:
Bash:
update-grub

Finally, reboot the system.

However, depending on your configuration, it may be necessary to customize your boot command if Passthrough fails.

In my case, I added the following arguments:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt nomodeset pcie_acs_override=downstream initcall_blacklist=sysfb_init"

These additional arguments help split PCI devices into their own IOMMU groups (by enabling the ACS override), disable kernel mode setting for the host graphics drivers (nomodeset), and prevent the kernel from initializing the simple framebuffer at startup (initcall_blacklist=sysfb_init).

Concerning pcie_acs_override, this should be considered as a last resort option for having distinct IOMMU groups, but is not without risks. Enabling it means that the virtual machine will be able to read all the memory of the Proxmox host (and, incidentally, that of other virtual machines), so use it at your own risk.
For more information: Verify_IOMMU_isolation
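
Before resorting to the override, it is worth checking how your devices are actually grouped. A small loop like this one lists each IOMMU group and the PCI devices it contains:

Bash:
# print every IOMMU group with the PCI devices it contains
for g in $(ls -d /sys/kernel/iommu_groups/* | sort -V); do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -n "  "
        lspci -nns "${d##*/}"
    done
done

If your GPU (and its HDMI audio function) already sits alone in its own group, you should not need pcie_acs_override at all.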

The initcall_blacklist=sysfb_init argument replaces the older video=efifb:off and video=simplefb:off arguments and gives better results since Proxmox VE 7.2.
For more information: GPU passthrough issues after upgrade to 7-2.

IOMMU activation for systemd-boot


To enable IOMMU with systemd-boot, add the following arguments to your boot file located at /etc/kernel/cmdline:

For ZFS root and AMD processor:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet iommu=pt

For ZFS root and Intel processor:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt

Subsequently, refresh systemd-boot using the command:
Bash:
proxmox-boot-tool refresh

Reboot the system and verify that IOMMU is indeed enabled by executing:
Bash:
dmesg | grep -e IOMMU

In my case (with pcie_acs_override enabled), the output includes:
[ 0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
Without the override you should instead see the usual AMD-Vi or DMAR messages (see the debugging section at the end of this post).
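
For a broader check, you can also compare the active kernel command line with the IOMMU messages (the exact dmesg wording differs between AMD and Intel, as shown in the debugging section at the end of this post):

Bash:
cat /proc/cmdline
dmesg | grep -i -e 'AMD-Vi' -e DMAR -e IOMMU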


VFIO modules and verification of remapping support


Next, we need to incorporate several VFIO modules into our Proxmox system. Add the following lines to /etc/modules:
Bash:
echo "vfio" >> /etc/modules
echo "vfio_iommu_type1" >> /etc/modules
echo "vfio_pci" >> /etc/modules

In previous versions of Proxmox, the "vfio_virqfd" module also had to be added, but it is no longer a separate module in PVE 8 (kernel 6.2).

Subsequently, update the initramfs images and restart Proxmox:
Bash:
update-initramfs -u -k all
systemctl reboot

After the system reboots, you can inspect the status of the VFIO modules by running:
Bash:
dmesg | grep -i vfio

The output should resemble the following:
[ 7.262027] VFIO - User Level meta-driver version: 0.3

Verification of whether your system supports interrupt remapping:
Bash:
dmesg | grep 'remapping'

If the command returns "AMD-Vi: Interrupt remapping enabled" or "DMAR-IR: Enabled IRQ remapping in x2apic mode", then remapping is supported. Otherwise,
you can enable unsafe interrupts with:
Bash:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf


Adding stability fixes and optimization for NVIDIA and AMD cards


Nvidia Card

Some Windows applications like GeForce Experience and PassMark PerformanceTest can crash your virtual machine. To remedy this, add the following:

Bash:
echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.conf

AMD Cards - Fixing the "Reset Bug"

This is a well-known bug affecting certain AMD cards. The problem occurs when a virtual machine uses the dedicated graphics card via GPU passthrough: when the VM
is stopped or restarted, the graphics card does not reset properly. This can leave the hypervisor unable to reallocate the GPU, and in some cases it can even crash the host. When that happens, the only workaround is to restart the host.

However, to prevent such issues, which I admit can be troublesome, we can deploy this additional kernel module, https://github.com/gnif/vendor-reset, which will attempt to correct this reallocation problem.

Bash:
apt install pve-headers-$(uname -r)
apt install git dkms build-essential
git clone https://github.com/gnif/vendor-reset.git
cd vendor-reset
dkms install .
echo "vendor-reset" >> /etc/modules
update-initramfs -u
shutdown -r now

After the reboot, check that the module is loaded with dmesg | grep vendor_reset. Now let's create a service to make sure the reset method of our graphics card is set to device_specific.

I start by retrieving the PCI ID of my GPU with:
Bash:
lspci -nn | grep 'AMD'

I get the following result (in my case, the ID corresponding to my graphics card is 01:00.0):
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]

We are now creating our service:
Bash:
cat << EOF > /etc/systemd/system/vreset.service
[Unit]
Description=AMD GPU reset method to 'device_specific'
After=multi-user.target
[Service]
ExecStart=/usr/bin/bash -c 'echo device_specific > /sys/bus/pci/devices/0000:01:00.0/reset_method'
[Install]
WantedBy=multi-user.target
EOF
systemctl enable vreset.service && systemctl start vreset.service

Of course, don't forget to adapt the service to your own GPU's PCI address. Now, when you start a VM, you'll see messages like this appear in your dmesg output:

[57709.971750] vfio-pci 0000:01:00.0: AMD_POLARIS10: version 1.1
[57709.971755] vfio-pci 0000:01:00.0: AMD_POLARIS10: performing pre-reset
[57709.971881] vfio-pci 0000:01:00.0: AMD_POLARIS10: performing reset
[57709.971885] vfio-pci 0000:01:00.0: AMD_POLARIS10: CLOCK_CNTL: 0x0, PC: 0x2055c
[57709.971889] vfio-pci 0000:01:00.0: AMD_POLARIS10: Performing BACO reset
[57710.147491] vfio-pci 0000:01:00.0: AMD_POLARIS10: performing post-reset
[57710.171814] vfio-pci 0000:01:00.0: AMD_POLARIS10: reset result = 0
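
You can also check at any time that the reset method is set as expected (adjust the PCI address to your own GPU):

Bash:
systemctl status vreset.service
# should print: device_specific
cat /sys/bus/pci/devices/0000:01:00.0/reset_method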

For more information: Working around the AMD GPU Reset bug on Proxmox using vendor-reset

GPU isolation and GPU drivers


As we have just seen, to retrieve the PCI IDs of our graphics card, you just need to use the following commands:

Bash:
lspci -nn | grep 'AMD'

or, for Nvidia,
Bash:
lspci -nn | grep 'NVIDIA'

As we now know, in my case it is IDs 01:00.0 and 01:00.1 that interest me, corresponding to my AMD RX 580 graphics card, or more precisely the Vendor and Device IDs 1002:67df & 1002:aaf0:

01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]
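
If you only want the vendor:device pairs for a given slot instead of grepping for the vendor name, lspci can also be queried in numeric mode (here with my 01:00 slot; adjust to yours):

Bash:
# prints both functions of the card, e.g. "01:00.0 0300: 1002:67df (rev e7)"
lspci -n -s 01:00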

At this point, you can choose between two possible configurations:
  • Either you decide not to create a driver blacklist. But you must ensure that the vfio-pci module is loaded first, using softdep.
  • Or you simply decide to blacklist all drivers globally.
First method: "Module load order"
We'll create a configuration file that specifies the PCI IDs to isolate (1002:67df and 1002:aaf0) and also defines a load order for the modules.

Bash:
echo "options vfio-pci ids=1002:67df,1002:aaf0" >> /etc/modprobe.d/vfio.conf
# For AMD
echo "softdep radeon pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep amdgpu pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
# For Nvidia
echo "softdep nouveau pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep nvidia pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep nvidiafb pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep nvidia_drm pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep drm pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
# For Intel
echo "softdep snd_hda_intel pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep snd_hda_codec_hdmi pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep i915 pre: vfio-pci" >> /etc/modprobe.d/vfio.conf

Then refresh the initramfs and reboot the system (see below).
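
Since vfio-pci and its options are loaded from the initramfs, it is usually necessary to rebuild it so the new IDs and softdep entries take effect early in boot:

Bash:
update-initramfs -u -k all
systemctl reboot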

Second method: "Blacklisting drivers"
As before, we will create a configuration file to specify the PCI IDs to be isolated, 1002:67df and 1002:aaf0.

Bash:
echo "options vfio-pci ids=1002:67df,1002:aaf0" > /etc/modprobe.d/vfio.conf

Now, let's make sure to blacklist the drivers corresponding to our graphics card type to avoid any conflicts with the Proxmox host.

Bash:
# AMD drivers
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
# NVIDIA drivers
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidiafb" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia_drm" >> /etc/modprobe.d/blacklist.conf
# Intel drivers
echo "snd_hda_intel" >> /etc/modprobe.d/blacklist.conf
echo "snd_hda_codec_hdmi" >> /etc/modprobe.d/blacklist.conf
echo "i915" >> /etc/modprobe.d/blacklist.conf

Then, as with the first method, refresh the initramfs and reboot the system.

Testing and final verification


As you have observed, there is no single method for setting up PCI passthrough of a GPU, given how much each environment differs. It is therefore not unlikely that your configuration will not work on the first try; keep in mind the specific points I have listed and persevere. To improve your chances of success, here are some useful commands for debugging as you progress.

IOMMU:

Bash:
dmesg | grep -E "DMAR|IOMMU"

output AMD >>
[ 0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA

output Intel >>
[ 0.110221] DMAR: IOMMU enabled
[ 0.951433] DMAR: Intel(R) Virtualization Technology for Directed I/O


Remapping:
Bash:
dmesg | grep 'remapping'

output AMD >>
[ 0.598913] AMD-Vi: Interrupt remapping enabled

output Intel >>
[ 0.190148] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.191599] DMAR-IR: Enabled IRQ remapping in x2apic mode


VFIO:
Bash:
dmesg | grep -i vfio

output >>
[ 7.262027] VFIO - User Level meta-driver version: 0.3
[ 7.329352] vfio-pci 0000:01:00.0: vgaarb: deactivate vga console
[ 7.329359] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 7.329490] vfio_pci: add [1002:67df[ffffffff:ffffffff]] class 0x000000/00000000
[ 7.376427] vfio_pci: add [1002:aaf0[ffffffff:ffffffff]] class 0x000000/00000000


Correct driver loading:

Bash:
lspci -nnk | grep -A 3 'AMD'

output >>

01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
        Subsystem: Sapphire Technology Limited Radeon RX 570 Pulse 4GB [1da2:e353]
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]
        Subsystem: Sapphire Technology Limited Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1da2:aaf0]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
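
If you prefer to query your card's exact slot rather than grepping for the vendor name, this should work as well (adjust 01:00 to your own address):

Bash:
lspci -nnk -s 01:00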

Thanks for reading, see you soon!

My original article (FR) : https://asded.gitlab.io/post/2023-07-01-pci-passthrough-proxmox-04/
 
Please allow me to correct some little things that almost all guides get wrong, even the official Proxmox manual:
If GRUB is your bootloader, whether in BIOS/Legacy or UEFI mode, for an AMD CPU, add the following arguments to your boot file:

/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
amd_iommu=on is actually invalid (and therefore ignored) because it is on by default for AMD systems.
However, depending on your configuration, it may be necessary to customize your boot command if Passthrough fails.

In my case, I added the following arguments:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt nomodeset pcie_acs_override=downstream initcall_blacklist=sysfb_init"

These additional commands consolidate the method of dividing PCI devices into their own IOMMU group, by enabling ACS Override, disabling the loading of graphics drivers, and preventing framebuffer initialization at kernel startup.
Please don't advise pcie_acs_override without pointing out that the VM can then read all of the Proxmox host memory (and therefore all other VM).
IOMMU activation for systemd-boot


To enable IOMMU activation for systemd-boot, add the following arguments to your boot file located at /etc/kernel/cmdline:

For ZFS root and AMD processor:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet amd_iommu=on
See above, amd_iommu=on is nonsense.
VFIO modules and verification of remapping support


Next, we need to incorporate several VFIO modules into our Proxmox system. Add the following lines to /etc/modules:
Bash:
echo "vfio" >> /etc/modules
echo "vfio_iommu_type1" >> /etc/modules
echo "vfio_pci" >> /etc/modules
echo "vfio_virqfd" >> /etc/modules
vfio_virqfd is no longer a separate module on Proxmox VE 8 (kernel version 6.2) and should not be there.
AMD Cards - Fixing the "Reset Bug"
Note that you also have to set the reset_method for each GPU (that needs vendor-reset) to device_specific . Some information about how to check that vendor-reset is working would be nice.
We will create a configuration file to specify the PCI IDs to isolate, 1002:67df and 1002:aaf0.
Bash:
echo "options vfio-pci ids=1002:67df,1002:aaf0" > /etc/modprobe.d/vfio.conf

Now, let's make sure to blacklist the drivers corresponding to our graphics card type to avoid any conflicts with the Proxmox host.

Bash:
# AMD drivers
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
# NVIDIA drivers
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidiafb" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia_drm" >> /etc/modprobe.d/blacklist.conf
# Intel drivers
echo "snd_hda_intel" >> /etc/modprobe.d/blacklist.conf
echo "snd_hda_codec_hdmi" >> /etc/modprobe.d/blacklist.conf
echo "i915" >> /etc/modprobe.d/blacklist.conf
If you do early binding to vfio-pci, you don't have to blacklist drivers. Just make sure vfio-pci loads first using a softdep.
For example, when you want to pass through an AMD GPU plus its sound and USB controller, use this to make sure vfio-pci is loaded first (and can claim the devices):
softdep amdgpu pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
softdep xhci_pci pre: vfio-pci

EDIT: I can delete this post if you don't need it anymore. Just let me know. Added a link for softdep documentation and added an example.
 
Please allow me to correct some little things that almost all guides get wrong, even the official Proxmox manual:

amd_iommu=on is actually invalid (and therefore ignored) because it is on by default for AMD systems.

Please don't advise pcie_acs_override without pointing out that the VM can then read all of the Proxmox host memory (and therefore all other VM).

See above, amd_iommu=on is nonsense.

vfio_virqfd is no longer a separate module on Proxmox VE 8 (kernel version 6.2) and should not be there.

Note that you also have to set the reset_method for each GPU (that needs vendor-reset) to device_specific . Some information about how to check that vendor-reset is working would be nice.

If you do early binding to vfio-pci, you don't have to blacklist drivers. Just make sure vfio-pci loads first using a softdep.

EDIT: I can delete this post if you don't need it anymore. Just let me know.
Thank you for your answer, there are indeed things to clarify and correct; I will make the modifications. I agree with all of your remarks, but I would like a bit more detail about "Just make sure vfio-pci loads first using a softdep."
 
Thanks for this post, I've struggled to get PCI GPU passthrough working reliably, and since I just upgraded my Proxmox to 8 this week I thought I'd give it another try. I have a question though.

My efibootmgr produces this output, and I'm not sure if it maps to grub or systemd:

Code:
root@pve:~# efibootmgr -v

BootCurrent: 0003

Timeout: 1 seconds

BootOrder: 0003,0002

Boot0002* UEFI: Built-in EFI Shell    VenMedia(5023b95c-db26-429b-a648-bd47664c8012)..BO

Boot0003* UEFI OS    HD(2,GPT,b167afdd-fb59-467b-ad65-d79e4361d3ec,0x800,0x200000)/File(\EFI\BOOT\BOOTX64.EFI)..BO

From a thread in 2020, t.lamprecht suggests checking if `/etc/kernel/pve-efiboot-uuids` exists. On my system it does not, which might indicate I'm using GRUB. My `/sys/firmware/efi` folder is populated, which I guess implies I'm booting in UEFI mode? (corroborates the output from efibootmgr)
 
Thanks for this post, I've struggled to get PCI GPU passthrough working reliably, and since I just upgraded my Proxmox to 8 this week I thought I'd give it another try. I have a question though.
See also the Proxmox manual: 3.12.3. Determine which Bootloader is Used
My efibootmgr produces this output, and I'm not sure if it maps to grub or systemd:

Code:
root@pve:~# efibootmgr -v
BootCurrent: 0003
Timeout: 1 seconds
BootOrder: 0003,0002
Boot0002* UEFI: Built-in EFI Shell    VenMedia(5023b95c-db26-429b-a648-bd47664c8012)..BO
Boot0003* UEFI OS    HD(2,GPT,b167afdd-fb59-467b-ad65-d79e4361d3ec,0x800,0x200000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
It's booting in UEFI mode, but I'm not sure whether it's GRUB or systemd-boot.
From a thread in 2020, t.lamprecht suggests checking if `/etc/kernel/pve-efiboot-uuids` exists. On my system it does not, which might indicate I'm using GRUB.
That's old and no longer valid. Use proxmox-boot-tool status.
My `/sys/firmware/efi` folder is populated, which I guess implies I'm booting in UEFI mode? (corroborates the output from efibootmgr)
Indeed, UEFI. If your root is on ZFS then it's most likely systemd-boot. Double check with cat /proc/cmdline, which will either match /etc/default/grub (GRUB) or /etc/kernel/cmdline (systemd-boot, and probably mentions boot=zfs).
 
Thanks for the updated info leesteken, unfortunately this doesn't get me any closer. I'm not sure how my installation can differ.

cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.2.16-4-pve root=/dev/mapper/pve-root ro quiet initcall_blacklist=sysfb_init

proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace.. E: /etc/kernel/proxmox-boot-uuids does not exist.
 
Thanks for the updated info leesteken, unfortunately this doesn't get me any closer. I'm not sure how my installation can differ.

cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.2.16-4-pve root=/dev/mapper/pve-root ro quiet initcall_blacklist=sysfb_init

proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace.. E: /etc/kernel/proxmox-boot-uuids does not exist.
Your Proxmox uses GRUB. If you have an Intel system, you need to add intel_iommu=on (even with kernel version 5.15 or higher) (in the same place as where you added initcall_blacklist=sysfb_init).
 
Your Proxmox uses GRUB. If you have an Intel system, you need to add intel_iommu=on (even with kernel version 5.15 or higher) (in the same place as where you added initcall_blacklist=sysfb_init).
Hi leesteken, I'm using an AMD Ryzen 7 5800X. I believe I've seen you mention in other threads that amd_iommu=on is no longer used. Am I mistaken? Should I add it to my BOOT_IMAGE line?
 
Hi leesteken, I'm using an AMD Ryzen 7 5800X. I believe I've seen you mention in other threads that amd_iommu=on is no longer used. Am I mistaken? Should I add it to my BOOT_IMAGE line?
amd_iommu=on is invalid and does nothing because it is enabled by default. Make sure to set IOMMU to Enabled (not Auto) and to use a motherboard BIOS version that does not break passthrough. Maybe start a new thread and explain what your actual problem is?
 
Thank you, I didn't mean to hijack this thread. I will follow OP's tutorial with the new details you've helped me uncover and post a new thread if needed.
 
It's a good quick review for your specific system, but not really a tutorial; GitHub or a personal website is usually where those are found.
All the explanations for passthrough are in the Proxmox manual, and people can get the info based on their own PC specs.
 
It's a good quick review for your specific system, but not really a tutorial; GitHub or a personal website is usually where those are found.
All the explanations for passthrough are in the Proxmox manual, and people can get the info based on their own PC specs.
Yes, indeed, this post remains focused on my configuration, although I have attempted to consider different scenarios based on potential variations (particularly regarding the various graphics cards).
Personally, I find the term "tutorial" quite challenging, as they are often understood as guides, but in most cases, they only address a portion of the subject while remaining within the scope of their authors' use cases. This is not necessarily a flaw, but rather what distinguishes them from documentation.

I consider this series on "PCI/GPU Passthrough on Proxmox" as more of a testimonial of my experience rather than an ultimate guide to follow (by the way, I have chosen the default prefix of "tutorial" for these threads, for the sake of clarity regarding my objectives).
 
Hehe, indeed... it was also about the "tutorial" term. But you did a good recap for the AMD graphics card, nice.
 
Thanks a ton asded!!! Proxmox newbie here and it worked on the first try :);)

Installing Adrenalin drivers as I type this

Wooohooo!
 
Great article. A lot of this doesn't seem to be needed for Intel Iris Xe passthrough.
If you are interested, my steps are documented in a gist (they assume the system is not running ZFS)

--edit--
oh i realized why now, doh, i did vGPU not GPU passthrough!
 
Thank you for the detailed instructions. My 6800 comes up with 4 device ids.
Code:
0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21 [Radeon RX 6800/6800 XT / 6900 XT] [1002:73bf] (rev c3)
0a:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]
0a:00.2 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:73a6]
0a:00.3 Serial bus controller [0c80]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21 USB [1002:73a4]

Do I include all of them in /etc/modprobe.d/vfio.conf? Also, when adding PCI devices to the Windows VM, do I add all four?

What I have done so far is include all of them in /etc/modprobe.d/vfio.conf and add them as PCI devices in the Windows VM, but I'm having problems installing the drivers
 

Attachments

  • Screenshot from 2023-09-03 19-24-59.png
Hi,
thank you for this very helpful thread, it really gave me a better understanding, and I also like how the "old" settings are described (which I carried over).
However, I still fail to pass through my iGPU (passing through "renderD128" would be sufficient):

Code:
00:02.0 VGA compatible controller [0300]: Intel Corporation Alder Lake-N [UHD Graphics] [8086:46d1]
        Subsystem: ASRock Incorporation Alder Lake-N [UHD Graphics] [1849:46d1]
        Kernel driver in use: vfio-pci
        Kernel modules: i915

Code:
dmesg | grep 'remapping'
[    0.150592] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.152255] DMAR-IR: Enabled IRQ remapping in x2apic mode

dmesg | grep -e IOMMU
[    0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[    0.058901] DMAR: IOMMU enabled
[    0.150587] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.538304] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[    0.662630] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.662631] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.662633] DMAR: IOMMU feature nwfs inconsistent
[    0.662635] DMAR: IOMMU feature dit inconsistent
[    0.662636] DMAR: IOMMU feature sc_support inconsistent
[    0.662638] DMAR: IOMMU feature dev_iotlb_support inconsistent

dmesg | grep -i vfio
[    9.254065] VFIO - User Level meta-driver version: 0.3
[    9.370513] vfio-pci 0000:00:02.0: vgaarb: deactivate vga console
[    9.370525] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[    9.370634] vfio_pci: add [8086:46d1[ffffffff:ffffffff]] class 0x000000/00000000
[    9.370767] vfio_pci: add [544d:6178[ffffffff:ffffffff]] class 0x000000/00000000
[   28.625966] vfio-pci 0000:01:00.0: enabling device (0000 -> 0002)
[   28.829359] vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
[   32.446762] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x47ce

My guest (Alpine Linux) does show a card "/dev/dri/card0" (from "bochs-drm", I guess) but is missing "renderD128".

This is from the guest:

Code:
dmesg | grep 02:00
[    0.377821] pci 0000:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
[    0.377821] pci 0000:02:00.0: vgaarb: bridge control possible
[    0.377821] pci 0000:02:00.0: vgaarb: setting as boot device (VGA legacy resources not available)
[    0.453103] pci 0000:02:00.0: can't claim BAR 6 [mem 0xfffe0000-0xffffffff pref]: no compatible bridge window
[    0.466476] pci 0000:02:00.0: BAR 6: no space for [mem size 0x00020000 pref]
[    0.466480] pci 0000:02:00.0: BAR 6: failed to assign [mem size 0x00020000 pref]

lspci -nnk | grep VGA
00:01.0 VGA compatible controller [0300]: Device [1234:1111] (rev 02)
        Subsystem: Red Hat, Inc. Device [1af4:1100]
        Kernel driver in use: bochs-drm
02:00.0 VGA compatible controller [0300]: Intel Corporation Device [8086:46d1]
        Subsystem: ASRock Incorporation Device [1849:46d1]

"modeprobe i915" doesn't change anything..
Is there anything i can do to investigate further?

-----------------------

UPDATE: RESOLVED

Turns out the stable Alpine kernel does not support the device; upgrading to edge works fine!

Code:
dmesg | grep drm
[    0.824842] ACPI: bus type drm_connector registered
[    4.430852] i915 0000:02:00.0: [drm] VT-d active for gfx access
[    4.430904] i915 0000:02:00.0: [drm] Using Transparent Hugepages
[    4.430908] i915 0000:02:00.0: [drm] *ERROR* conflict detected with stolen region: [mem 0x70800000-0x807fffff]
[    4.431761] i915 0000:02:00.0: [drm] Failed to find VBIOS tables (VBT)
[    4.444800] i915 0000:02:00.0: [drm] Finished loading DMC firmware i915/adlp_dmc_ver2_16.bin (v2.16)
[    5.433743] i915 0000:02:00.0: [drm] GuC firmware i915/tgl_guc_70.bin version 70.5.1
[    5.433748] i915 0000:02:00.0: [drm] HuC firmware i915/tgl_huc.bin version 7.9.3
[    5.438707] i915 0000:02:00.0: [drm] HuC authenticated
[    5.439240] i915 0000:02:00.0: [drm] GuC submission enabled
[    5.439241] i915 0000:02:00.0: [drm] GuC SLPC enabled
[    5.439573] i915 0000:02:00.0: [drm] GuC RC: enabled
[    5.441691] [drm] Initialized i915 1.6.0 20201103 for 0000:02:00.0 on minor 0
[    5.446984] fbcon: i915drmfb (fb0) is primary device
[    5.728695] i915 0000:02:00.0: [drm] fb0: i915drmfb frame buffer device
 
Well, I am happy to say I have GPU, audio, and PCI Wi-Fi working with MX Linux on Intel (HP EliteBook 840 G6 touchscreen laptop). I had issues with the keyboard, keyboard pointer, touchpad, and touchscreen, so I just used a USB keyboard and USB mouse. I started going through some live CDs to see what works; the best so far is MX-AHS (Advanced Hardware Support). I noticed when going through different ISOs that I was not seeing the GRUB menu at boot time. I knew it was there, so I just pressed the correct key combos for the images that needed them to continue booting. I tried plugging into the HDMI adapter, but that didn't display the GRUB menu either. Is there a way to set the machine to VGA on boot? Thanks.
Oh yeah, I also had issues with the embedded Ethernet, so I went to an external USB network adapter. This can be as troublesome as the Ethernet, because you don't want to lose network connectivity to the HV. I was wondering if there is a tutorial on how to attach a physical USB-to-USB console to the host for just such an occasion.
 
Anyone know how to switch a Debian VM from legacy BIOS to UEFI to complete this passthrough without reinstalling the VM?
 
