In this article, I propose taking a closer look at how to configure PCI passthrough on Proxmox VE 8.0 (I had initially planned this article for Proxmox VE 7, but since the new version has just been released, it's an opportunity to test!). This article will be the first in a series where I'll go into more detail on how to configure different types of VMs (Linux, Windows, macOS and BSD).
I'd also like to thank leesteken for his valuable recommendations and corrections to the first version of this post. Here are the planned articles in this series:
- PCI/GPU Passthrough on Proxmox VE 8: Windows 10 & 11 (Coming soon...). This part will also look at utilizing the Sysprep utility to streamline the deployment of Windows-based templates.
- PCI/GPU Passthrough on Proxmox VE 8: Debian 12 (Coming soon...). This part will also look at the deployment of templates.
- PCI/GPU Passthrough on Proxmox VE 8: OpenBSD 7.3 (Coming soon...). I am not sufficiently familiar with BSD systems to be unequivocal about my chances of success, but it appears that my RX 580 is compatible.
- PCI/GPU Passthrough on Proxmox VE 8: macOS (Coming soon...)
Prerequisites & Context
In my case, I will use an old ATX-format "gaming" PC as the base and install Proxmox on it: firstly because I don't have anything else decent on hand, and secondly because all the VMs will be limited to "desktop" use.
My configuration:
- MOTHERBOARD: ASRock 970 Pro3 R2.0
- CPU: AMD FX 4300 Quad-Core Processor
- GPU: SAPPHIRE Pulse Radeon RX 580 8GB GDDR5
- The processor must support the virtualization extensions (VT-x/VT-d for Intel processors, AMD-V/AMD-Vi for AMD processors) for IOMMU support to be effective. The same applies to the motherboard.
- You will then need to enable these virtualization extensions from your BIOS or UEFI interface (a quick command-line check follows this list). You can refer to this page, which I find quite concise.
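If you want to check from a shell that the CPU actually exposes these extensions before diving into the BIOS, the following quick test (my addition, not part of the original post) looks for the svm (AMD) or vmx (Intel) flags:
Bash:
# prints "svm" on AMD or "vmx" on Intel when hardware virtualization is available
grep -Eo 'svm|vmx' /proc/cpuinfo | sort -u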
In my case, after accessing the BIOS, I modified these two values:
Advanced/CPU_Configuration/Secure Virtual Machine [Enabled]
Advanced/North_Bridge/IOMMU [Enabled]
Adding new variables to the boot file
Depending on your system configuration, Proxmox will use one of two bootloaders, GRUB or systemd-boot, and each requires a different configuration. To determine which one is in use, run the following command on the Proxmox host:
Bash:
efibootmgr -v
- If the command returns a message indicating that EFI variables are not supported, GRUB is used in BIOS/Legacy mode.
- If the output contains a line that looks like the following, GRUB is used in UEFI mode.
Boot0005* proxmox [...] File(EFI\proxmox\grubx64.efi)
- If the output contains a line similar to the following, systemd-boot is used.
Boot0006 * Linux Boot Manager [...] File(EFI\systemd\systemd-bootx64.efi)
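On installations where the ESP is managed by proxmox-boot-tool, you can also query it directly; this is an extra check of mine (assuming the proxmox-boot-tool helper that ships with recent Proxmox versions), and it reports whether the ESP is set up for GRUB or systemd-boot:
Bash:
proxmox-boot-tool status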
Activating IOMMU for GRUB
If GRUB is your bootloader, whether in BIOS/Legacy or UEFI mode, for an AMD CPU, add the following arguments to your boot file:
/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
For an Intel processor, use the following configuration:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
Then refresh GRUB with:
Bash:
update-grub
Finally, reboot the system.
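After the reboot, it is worth confirming that the new arguments were actually applied; this quick check is my own addition:
Bash:
# the output should now contain iommu=pt (and intel_iommu=on on Intel systems)
cat /proc/cmdline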
However, depending on your configuration, it may be necessary to customize your boot command if Passthrough fails.
In my case, I added the following arguments:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt nomodeset pcie_acs_override=downstream initcall_blacklist=sysfb_init"
These additional arguments help split PCI devices into their own IOMMU groups (the ACS override), disable kernel mode setting so the host does not initialize the graphics drivers (nomodeset), and prevent framebuffer initialization at kernel startup (initcall_blacklist=sysfb_init).
Concerning pcie_acs_override: this should be considered a last-resort option for obtaining distinct IOMMU groups, and it is not without risks. Enabling it means that the virtual machine will be able to read all the memory of the Proxmox host (and, incidentally, that of other virtual machines), so use it at your own risk. For more information: Verify_IOMMU_isolation
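Before resorting to the ACS override, you can inspect how your PCI devices are actually grouped. The following loop is a minimal sketch of my own, based on the standard sysfs layout; if the GPU and its HDMI audio function already sit in their own group, the override is unnecessary:
Bash:
# list every IOMMU group and the devices it contains
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -n "  "
    lspci -nns "${d##*/}"
  done
done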
The initcall_blacklist=sysfb_init argument replaces the video=efifb:off and video=simplefb:off arguments, as it gives better results since Proxmox VE 7.2. For more information: GPU passthrough issues after upgrade to 7-2.
Activating IOMMU for systemd-boot
If systemd-boot is your bootloader, add the following arguments to your kernel command-line file, /etc/kernel/cmdline:
For ZFS root and AMD processor:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet iommu=pt
For ZFS root and Intel processor:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt
Subsequently, refresh systemd-boot using the command:
Bash:
pve-efiboot-tool refresh
Reboot the system and verify that IOMMU is indeed enabled by executing:
Bash:
dmesg | grep -e IOMMU
Upon successful execution, the output should confirm that IOMMU is enabled; in my case (with the ACS override enabled), it included the following line:
[ 0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
VFIO modules and verification of remapping support
Next, we need to incorporate several VFIO modules into our Proxmox system. Add the following lines to /etc/modules:
Bash:
echo "vfio" >> /etc/modules
echo "vfio_iommu_type1" >> /etc/modules
echo "vfio_pci" >> /etc/modules
In previous versions of Proxmox, the "vfio_virqfd" module also had to be added, but it is no longer available (or needed) in PVE 8.
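For reference, after these commands the file should contain at least the following three lines (a quick sanity check of my own):
Bash:
cat /etc/modules
# vfio
# vfio_iommu_type1
# vfio_pci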
Subsequently, update the initramfs images and restart Proxmox:
Bash:
update-initramfs -u -k all
systemctl reboot
After the system reboots, you can inspect the status of the VFIO modules by running:
Bash:
dmesg | grep -i vfio
The output should resemble the following:
[ 7.262027] VFIO - User Level meta-driver version: 0.3
Verify whether your system supports interrupt remapping:
Bash:
dmesg | grep 'remapping'
If the command returns "AMD-Vi: Interrupt remapping enabled" or "DMAR-IR: Enabled IRQ remapping in x2apic mode", then remapping is supported. Otherwise, you can enable unsafe interrupts with:
Bash:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
Adding stability fixes and optimizations for NVIDIA and AMD cards
Nvidia cards
Some Windows applications, like GeForce Experience and PassMark PerformanceTest, can crash your virtual machine. To remedy this, add the following:
Bash:
echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.conf
AMD Cards - Fixing the "Reset Bug"
This is a well-known bug affecting certain AMD cards. The problem occurs when a virtual machine uses the dedicated graphics card via GPU passthrough: when the VM is stopped or restarted, the graphics card does not reset properly. This can leave the hypervisor unable to reallocate the GPU, and in some cases it can even crash the host system, in which case the only remedy is to restart the host.
However, to prevent such issues, which I admit can be troublesome, we can deploy an additional kernel module, https://github.com/gnif/vendor-reset, which attempts to correct this reset problem.
Bash:
apt install pve-headers-$(uname -r)
apt install git dkms build-essential
git clone https://github.com/gnif/vendor-reset.git
cd vendor-reset
dkms install .
echo "vendor-reset" >> /etc/modules
update-initramfs -u
shutdown -r now
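To confirm that DKMS actually built and installed the module for your running kernel, you can also run the following (an optional sanity check of mine):
Bash:
# vendor-reset should be listed as installed for the current kernel
dkms status vendor-reset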
After loading this module (you can check it with dmesg | grep vendor_reset), let's create a service to make sure the reset method is applied to our graphics card. I start by retrieving the PCI address of my GPU with:
Bash:
lspci -nn | grep 'AMD'
I get the following result (in my case, the ID corresponding to my graphics card is 01:00.0):
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]
We are now creating our service:
Bash:
cat << EOF >> /etc/systemd/system/vreset.service
[Unit]
Description=AMD GPU reset method to 'device_specific'
After=multi-user.target
[Service]
ExecStart=/usr/bin/bash -c 'echo device_specific > /sys/bus/pci/devices/0000:01:00.0/reset_method'
[Install]
WantedBy=multi-user.target
EOF
systemctl enable vreset.service && systemctl start vreset.service
Of course, don't forget to adapt the service to your GPU's PCI address. Now when you start a VM, you'll see messages like this appear in your dmesg output:
[57709.971750] vfio-pci 0000:01:00.0: AMD_POLARIS10: version 1.1
[57709.971755] vfio-pci 0000:01:00.0: AMD_POLARIS10: performing pre-reset
[57709.971881] vfio-pci 0000:01:00.0: AMD_POLARIS10: performing reset
[57709.971885] vfio-pci 0000:01:00.0: AMD_POLARIS10: CLOCK_CNTL: 0x0, PC: 0x2055c
[57709.971889] vfio-pci 0000:01:00.0: AMD_POLARIS10: Performing BACO reset
[57710.147491] vfio-pci 0000:01:00.0: AMD_POLARIS10: performing post-reset
[57710.171814] vfio-pci 0000:01:00.0: AMD_POLARIS10: reset result = 0
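You can also read the sysfs attribute back to confirm that the service did its job; adjust the PCI address to your GPU (this check is my addition):
Bash:
# should print: device_specific
cat /sys/bus/pci/devices/0000:01:00.0/reset_method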
For more information: Working around the AMD GPU Reset bug on Proxmox using vendor-reset
GPU isolation and GPU drivers
As we have just seen, to retrieve the PCI IDs of our graphics card, we just need to use the following commands:
Bash:
lspci -nn | grep 'AMD'
or, for Nvidia,
Bash:
lspci -nn | grep 'NVIDIA'
As we now know, in my case it is addresses 01:00.0 and 01:00.1 that interest me, corresponding to my AMD RX 580 graphics card, or more precisely the Vendor ID / Device ID pairs 1002:67df and 1002:aaf0:
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]
At this point, you can choose between two possible configurations:
- Either you decide not to create a driver blacklist, but you must then ensure that the vfio-pci module is loaded first, using softdep.
- Or you simply decide to blacklist all drivers globally.
First method: "softdep"
We'll create a configuration file to specify the PCI IDs to be isolated (1002:67df and 1002:aaf0), and also define a loading order for the modules.
Bash:
echo "options vfio-pci ids=1002:67df,1002:aaf0" >> /etc/modprobe.d/vfio.conf
# For AMD
echo "softdep radeon pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep amdgpu pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
# For Nvidia
echo "softdep nouveau pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep nvidia pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep nvidiafb pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep nvidia_drm pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep drm pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
# For Intel
echo "softdep snd_hda_intel pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep snd_hda_codec_hdmi pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
echo "softdep i915 pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
Then reboot the system.
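After the reboot, a quick way to confirm that vfio-pci (and not amdgpu or nouveau) claimed the card is to query just the GPU's slot; adjust the address to yours (my own check, the full verification commands come at the end of this post):
Bash:
# both functions should report "Kernel driver in use: vfio-pci"
lspci -nnk -s 01:00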
Second method: "Blacklisting drivers"
As before, we will create a configuration file to specify the PCI IDs to be isolated, 1002:67df and 1002:aaf0.
Bash:
echo "options vfio-pci ids=1002:67df,1002:aaf0" > /etc/modprobe.d/vfio.conf
Now, let's make sure to blacklist the drivers corresponding to our graphics card type to avoid any conflicts with the Proxmox host.
Bash:
# AMD drivers
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
# NVIDIA drivers
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidiafb" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia_drm" >> /etc/modprobe.d/blacklist.conf
# Intel drivers
echo "snd_hda_intel" >> /etc/modprobe.d/blacklist.conf
echo "snd_hda_codec_hdmi" >> /etc/modprobe.d/blacklist.conf
echo "i915" >> /etc/modprobe.d/blacklist.conf
Then, reboot the system.
Testing and final verification
As you have observed, there is no single method for passing a GPU through via PCI passthrough, given the diversity of each environment. It is therefore not unlikely that your configuration will not work on the first try; keep in mind the specifics I have listed and persevere. To give you a better chance of success, here are some useful commands for thorough debugging as you progress.
IOMMU:
Bash:
dmesg | grep -E "DMAR|IOMMU"
output AMD >>
[ 0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
output Intel >>
[ 0.110221] DMAR: IOMMU enabled
[ 0.951433] DMAR: Intel(R) Virtualization Technology for Directed I/O
Remapping:
Bash:
dmesg | grep 'remapping'
output AMD >>
[ 0.598913] AMD-Vi: Interrupt remapping enabled
output Intel >>
[ 0.190148] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.191599] DMAR-IR: Enabled IRQ remapping in x2apic mode
VFIO:
Bash:
dmesg | grep -i vfio
output >>
[ 7.262027] VFIO - User Level meta-driver version: 0.3
[ 7.329352] vfio-pci 0000:01:00.0: vgaarb: deactivate vga console
[ 7.329359] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 7.329490] vfio_pci: add [1002:67df[ffffffff:ffffffff]] class 0x000000/00000000
[ 7.376427] vfio_pci: add [1002:aaf0[ffffffff:ffffffff]] class 0x000000/00000000
Correct driver loading:
Bash:
lspci -nnk | grep -A 3 'AMD'
output >>
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev e7)
Subsystem: Sapphire Technology Limited Radeon RX 570 Pulse 4GB [1da2:e353]
Kernel driver in use: vfio-pci
Kernel modules: amdgpu
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1002:aaf0]
Subsystem: Sapphire Technology Limited Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] [1da2:aaf0]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
Thanks for reading, and see you soon!
My original article (FR) : https://asded.gitlab.io/post/2023-07-01-pci-passthrough-proxmox-04/