[TUTORIAL] PCI/GPU Passthrough on Proxmox VE 8: Installation and configuration

Hi,
I always get this error in vreset.service:

× vreset.service - AMD GPU reset method to 'device_specific'
Loaded: loaded (/etc/systemd/system/vreset.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Mon 2023-10-02 11:06:56 EEST; 2min 54s ago
Duration: 1ms
Process: 3253 ExecStart=/usr/bin/bash -c echo device_specific > /sys/bus/pci/devices/0000:07:00.0/reset_method (code=ex>
Main PID: 3253 (code=exited, status=1/FAILURE)
CPU: 2ms

Oct 02 11:06:56 pve1 systemd[1]: Started vreset.service - AMD GPU reset method to 'device_specific'.
Oct 02 11:06:56 pve1 bash[3253]: /usr/bin/bash: line 1: echo: write error: Invalid argument
Oct 02 11:06:56 pve1 systemd[1]: vreset.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 11:06:56 pve1 systemd[1]: vreset.service: Failed with result 'exit-code'.


root@pve1:~# dmesg | grep vendor_reset
[ 7.620129] vendor_reset: loading out-of-tree module taints kernel.
[ 7.620177] vendor_reset: module verification failed: signature and/or required key missing - tainting kernel
[ 7.698619] vendor_reset_hook: installed
 
Hey @abayoumy, I think it's just because of the '' (single-quote) syntax of the bash command. In the service file, change the line:
Code:
ExecStart=/usr/bin/bash -c 'echo device_specific > /sys/bus/pci/devices/0000:07:00.0/reset_method'
to
Code:
ExecStart=/usr/bin/bash -c echo device_specific > /sys/bus/pci/devices/0000:07:00.0/reset_method
and re-launch the service:
Bash:
sudo systemctl daemon-reload && sudo systemctl restart vreset.service
 
Thank you for your advice.

Following your advice, I changed the service command, and the service starts now:
root@pve1:~# systemctl status vreset.service
○ vreset.service - AMD GPU reset method to 'device_specific'
Loaded: loaded (/etc/systemd/system/vreset.service; enabled; preset: enabled)
Active: inactive (dead) since Mon 2023-10-02 23:01:04 EEST; 40s ago
Duration: 2ms
Process: 2234 ExecStart=/usr/bin/bash -c echo device_specific > /sys/bus/pci/devices/0000:07:00.0/reset_method (code=ex>
Main PID: 2234 (code=exited, status=0/SUCCESS)
CPU: 2ms

Oct 02 23:01:04 pve1 systemd[1]: Started vreset.service - AMD GPU reset method to 'device_specific'.
Oct 02 23:01:04 pve1 systemd[1]: vreset.service: Deactivated successfully.


But the vendor_reset module is still not loaded, and I think this is the main issue:

root@pve1:~# dmesg | grep vendor_reset
[ 12.103764] vendor_reset: loading out-of-tree module taints kernel.
[ 12.103816] vendor_reset: module verification failed: signature and/or required key missing - tainting kernel
[ 12.146306] vendor_reset_hook: installed
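
For reference, a minimal sketch of such a unit with the redirection kept inside the quotes and a read-back of the attribute (assuming the GPU sits at 0000:07:00.0): reading reset_method lists the methods currently enabled for the device, and a write error of 'Invalid argument' usually means the method being written is not available, e.g. because vendor_reset has not hooked that GPU.

Code:
[Unit]
Description=AMD GPU reset method to 'device_specific'
# Run after module loading so vendor_reset is already in place
After=systemd-modules-load.service

[Service]
Type=oneshot
# Single quotes keep the redirection inside bash instead of passing '>' as a separate argument
ExecStart=/usr/bin/bash -c 'echo device_specific > /sys/bus/pci/devices/0000:07:00.0/reset_method'
# Read the attribute back; 'device_specific' should appear in the list if the write succeeded
ExecStartPost=/usr/bin/bash -c 'cat /sys/bus/pci/devices/0000:07:00.0/reset_method'

[Install]
WantedBy=multi-user.target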
 
Hello!

I followed the tutorial and ended up with a black screen twice:
[ 2.419317] amd_gpio AMDI0030:00: Invalid config param 0014

The second black screen had two lines:
[ 2.419317] amd_gpio AMDI0030:00: Invalid config param 0014
[ 3.292363] mpt2sas_cm0: overriding NVDATA EEDPTagMode setting

Set up:
Gigabyte Aorus Master X570
Ryzen 9 3900X
Radeon VII
LSI HBA SAS/SATA card
Intel X710-DA2 Ethernet card

Using systemd-boot

Here's the verification info:
root@homelab:~# dmesg | grep -E "DMAR|IOMMU"
[ 2.430801] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 2.433430] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 2.433814] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[ 13.477645] AMD-Vi: AMD IOMMUv2 loaded and initialized

root@homelab:~# dmesg | grep 'remapping'
[ 0.538992] x2apic: IRQ remapping doesn't support X2APIC mode
[ 2.433438] AMD-Vi: Interrupt remapping enabled

root@homelab:~# dmesg | grep -i vfio
[ 13.099882] VFIO - User Level meta-driver version: 0.3
[ 13.106757] vfio-pci 0000:0f:00.0: vgaarb: deactivate vga console
[ 13.106762] vfio-pci 0000:0f:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
[ 13.106930] vfio_pci: add [1002:66af[ffffffff:ffffffff]] class 0x000000/00000000
[ 13.203691] vfio_pci: add [1002:ab20[ffffffff:ffffffff]] class 0x000000/00000000

Verifying that the correct driver is loaded:
root@homelab:~# lspci -nnk | grep 'AMD'
It lists all of the devices, not only the GPU controller and its audio device, and there is no kernel driver info.

root@homelab:~# systemctl status vreset.service
Unit vreset.service could not be found.

So I'm kind of stuck with this at the moment; any help would be really appreciated.
 
It's also wise to switch to CPU graphics instead of GPU graphics in the BIOS, otherwise your display may not work after disabling the GPU drivers. On my ASUS ROG system I had to set
"Advanced > System Agent Configuration > Graphics Configuration" to "iGPU".
 
Has anyone tried a Cezanne APU iGPU passthrough, e.g. a 5750GE? I successfully passed that one through with Proxmox 7.x, but did suffer from the reset bug. vendor-reset does not work for Cezanne, and even a Windows guest with the patch to mitigate this issue caused the host to freeze in many situations when the VM was shut down.

I really do hope that virtiogpu-venus will be introduced to Proxmox soon. It's a pity it does not exist in PVE 8.
 
Hey,

Thank you for your post; it has helped me even more than the Proxmox documentation, as it is clearer to me.

One thing I can't find is how to handle the blacklisting or softdep of drivers that are also used by other devices.
I want to pass through an Ethernet PCIe card to a VM; it uses the "e1000e" driver. But my Proxmox host uses one of my motherboard's Ethernet ports with the same driver. If I blacklist or softdep this driver, will the Ethernet port on my Proxmox host stop working?
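
Blacklisting e1000e globally would indeed affect every port that driver serves, including the onboard one. A minimal sketch of an alternative is to bind only the add-in card to vfio-pci by its PCI address via driver_override (the address 0000:03:00.0 is a placeholder; take yours from lspci -nn), so the onboard port keeps using e1000e:

Bash:
#!/bin/bash
# Placeholder address; replace with the add-in card's address from `lspci -nn`
DEV=0000:03:00.0

modprobe vfio-pci
# Tell the PCI core that only vfio-pci may claim this one device
echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override
# Detach it from e1000e if it is already bound, then ask the kernel to re-probe it
if [ -e /sys/bus/pci/devices/$DEV/driver ]; then
    echo "$DEV" > /sys/bus/pci/devices/$DEV/driver/unbind
fi
echo "$DEV" > /sys/bus/pci/drivers_probe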
 
Thanks for this comprehensive guide. I'm hoping to get help with passthrough of the iGPU that is part of a J4125-based mini-PC.

When I start the Debian VM I just get a black screen and the whole Proxmox system freezes up.

I have tried the blacklist method and the softdep with the same result.

Any guidance would be appreciated.

Here is my setup:
Code:
root@pve:~# cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=8086:3185
softdep snd_hda_intel pre: vfio-pci
softdep snd_hda_codec_hdmi pre: vfio-pci
softdep i915 pre: vfio-pci
Code:
root@pve:~# cat /etc/modprobe.d/pve-blacklist.conf
# This file contains a list of modules which are not supported by Proxmox VE


# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb

Code:
root@pve:~# dmesg | grep -E "DMAR|IOMMU"
[    0.009382] ACPI: DMAR 0x00000000799CE000 0000A8 (v01 INTEL  GLK-SOC  00000003 BRXT 0100000D)
[    0.009448] ACPI: Reserving DMAR table memory at [mem 0x799ce000-0x799ce0a7]
[    0.056270] DMAR: IOMMU enabled
[    0.176544] DMAR: Host address width 39
[    0.176546] DMAR: DRHD base: 0x000000fed64000 flags: 0x0
[    0.176557] DMAR: dmar0: reg_base_addr fed64000 ver 1:0 cap 1c0000c40660462 ecap 9e2ff0505e
[    0.176561] DMAR: DRHD base: 0x000000fed65000 flags: 0x1
[    0.176571] DMAR: dmar1: reg_base_addr fed65000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.176576] DMAR: RMRR base: 0x00000079934000 end: 0x00000079953fff
[    0.176579] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
[    0.176583] DMAR-IR: IOAPIC id 1 under DRHD base  0xfed65000 IOMMU 1
[    0.176586] DMAR-IR: HPET id 0 under DRHD base 0xfed65000
[    0.176588] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.178598] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.416898] DMAR: No ATSR found
[    0.416900] DMAR: No SATC found
[    0.416903] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.416905] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.416907] DMAR: IOMMU feature nwfs inconsistent
[    0.416908] DMAR: IOMMU feature eafs inconsistent
[    0.416910] DMAR: IOMMU feature prs inconsistent
[    0.416911] DMAR: IOMMU feature nest inconsistent
[    0.416912] DMAR: IOMMU feature mts inconsistent
[    0.416913] DMAR: IOMMU feature sc_support inconsistent
[    0.416915] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.416917] DMAR: dmar0: Using Queued invalidation
[    0.416922] DMAR: dmar1: Using Queued invalidation
[    0.417619] DMAR: Intel(R) Virtualization Technology for Directed I/O
Code:
root@pve:~# dmesg | grep 'remapping'
[    0.176588] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.178598] DMAR-IR: Enabled IRQ remapping in x2apic mode
Code:
root@pve:~# dmesg | grep -i vfio
[   17.121134] VFIO - User Level meta-driver version: 0.3
[   17.137880] vfio-pci 0000:00:02.0: vgaarb: deactivate vga console
[   17.137886] vfio-pci 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[   17.138834] vfio_pci: add [8086:3185[ffffffff:ffffffff]] class 0x000000/00000000
Code:
root@pve:~# lspci -nnk | grep 'VGA'
00:02.0 VGA compatible controller [0300]: Intel Corporation GeminiLake [UHD Graphics 600] [8086:3185] (rev 06)
Code:
root@pve:~# cat /etc/pve/qemu-server/100.conf
balloon: 0
bios: ovmf
boot: order=ide2;virtio0
cores: 4
cpu: x86-64-v2-AES
efidisk0: local-zfs:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:00:02,pcie=1,x-vga=1
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=8.1.2,ctime=1705566405
name: mirror
net0: virtio=BC:24:11:91:56:EA,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=fb35afd8-75b0-4f60-944d-40fa2f05a3de
sockets: 1
vga: none
virtio0: local-zfs:vm-100-disk-1,iothread=1,size=32G
vmgenid: 40f7ee42-2549-414d-bf82-e21a1915deb8
Code:
root@pve:~# cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt rootdelay=10

Code:
root@pve:~# proxmox-boot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/F4C9-D913
    Copying kernel and creating boot-entry for 6.5.11-4-pve
    Copying kernel and creating boot-entry for 6.5.11-7-pve

Code:
root@pve:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-6.5.11-7-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/F4C9-D913
    Copying kernel and creating boot-entry for 6.5.11-4-pve
    Copying kernel and creating boot-entry for 6.5.11-7-pve
update-initramfs: Generating /boot/initrd.img-6.5.11-4-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/F4C9-D913
    Copying kernel and creating boot-entry for 6.5.11-4-pve
    Copying kernel and creating boot-entry for 6.5.11-7-pve
Code:
root@pve:~# pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-7-pve)
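
Since a host freeze at VM start often points to the iGPU sharing an IOMMU group with devices the host still needs, a small sketch that lists every device per group can help narrow this down (assuming IOMMU is enabled, as the DMAR output above shows):

Bash:
# Print each PCI device together with its IOMMU group number
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done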
 
Hello all,

I have two VMs I would like to use passthrough with. The first is OPNsense. I have an Intel I350-T4 PCI card in my system and I have SR-IOV running. When adding a PCI device to the OPNsense VM I see nothing listed under Mapped Devices, but I do find a full list under Raw Devices. I assume I should select from the Raw Devices list? Second, do I select the actual interface or the virtual interface in the list? My second VM is Plex, and I want to pass through my integrated Intel GPU. Again, do I just select it from the Raw Devices list and away I go?

Sorry if these are newbie questions. I am just getting up to speed on all of this, now that I was able to get SR-IOV running with my Intel card. I am very excited but also very nervous. Its a bit confusing to know what is what.

Thanks,
Steve
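
For orientation, the Mapped Devices list is only populated by mappings you create yourself (under Datacenter > Resource Mappings in PVE 8, assuming that feature is in use), so selecting from Raw Devices is fine. Whether to pick the physical port or a virtual function depends on whether the port should stay usable on the host; a quick way to tell them apart:

Bash:
# SR-IOV virtual functions appear as their own PCI functions; on an I350 they
# show up as "I350 Ethernet Controller Virtual Function" next to the physical ports
lspci -nn | grep -i ethernet
# Pass a virtual function if the physical port should stay usable by the host or
# other VMs; pass the physical function if the whole port should move into the guest.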
 

Attachments

  • Screenshot 2024-04-03 150334.png
Hello,

I followed this guide to pass through my Intel UHD 770 Graphics (the iGPU embedded in the Intel i9-14900T CPU), but after several attempts I still get Error 43 and a yellow exclamation triangle on the device in the Windows 11 Device Manager.

my "dmesg | grep -i vfio":

[ 4.434588] VFIO - User Level meta-driver version: 0.3
[ 4.439450] vfio-pci 0000:00:02.0: vgaarb: deactivate vga console
[ 4.439454] vfio-pci 0000:00:02.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 4.439644] vfio_pci: add [8086:a780[ffffffff:ffffffff]] class 0x000000/00000000
[ 29.401433] vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x0000

My "lspci -nnv | grep VGA"
0000:00:02.0 VGA compatible controller [0300]: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] [8086:a780] (rev 04) (prog-if 00 [VGA controller])

My "/etc/modprobe.d/vfio.conf"

options vfio-pci ids=8086:a780 disable_vga=1

My bootloader is systemd-boot

Can anybody please help me?
Any help and suggestions are appreciated.

FAB
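
A quick sanity check for a setup like this (device address taken from the output above) is to confirm after boot that the iGPU is actually claimed by vfio-pci rather than i915:

Bash:
# Show which kernel driver currently owns the iGPU
lspci -nnk -s 0000:00:02.0
# The output should include: Kernel driver in use: vfio-pci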
 
In this article, I propose taking a closer look at the configuration process for setting up PCI Passthrough on Proxmox VE 8.0 (I had initially planned this article for Proxmox VE 7, but since the new version has just been released, it's an opportunity to test!). This article will be the beginning of a series where I'll go into more detail on how to configure different types of VMs (Linux, Windows, macOS and BSD).

I'd also like to thank leesteken for his valuable recommendations and corrections to the first version of this post.

...................................................
Hi,

Very clear guide, but does this also work for Intel NICs? My GPU works fine in my main server, but in my firewall server I can't manage to pass through the Intel i225-V NICs for use with my OPNsense VM.
 
Running into the same problem. What I have seen so far:

1) The switch is configured for a 3-port LACP LAGG
2) PVE is connected to the 3-port LAGG
3) Looking at the LAGG status on my switch, it does not see the ports as active.

I have attached my network config. I am passing the NICs through completely. I am surprised that once PVE is booted the ports are not considered active.
 

Attachments

  • Screenshot 2024-06-25 115709.png
A bit more info...

On my switch, my VLANs are tagged on my LAGG, as well as on the single port I am using for my management VLAN. As a test, I changed my OPNsense VM to request a DHCP address for its management IP from my current physical OPNsense firewall. I can see that there is a DHCP discover and a DHCP offer, but no DHCP ack. Typically I would see this on a port that is untagged, so I am very confused by all of this.

I am starting to feel like PCI passthrough and SR-IOV are a bunch of hooey... I am very confused and also very frustrated. I have set up my PVE server according to spec, but I am not clear about how to allocate my devices to my VMs.
 
Very clear guide, but does this also work for Intel NIC's?
Sure it does.

My machine has an Intel I350-T2 card, reserved for Opnsense, and a regular Intel card for maintenance/Proxmox access:

Code:
root@proxmox ~ $% lspci -nn
[...]
05:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
07:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
07:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)

And Proxmox uses the following drivers on the host OS:
Code:
05:00.0 0200: 8086:10d3
        Subsystem: 8086:a01f
        Kernel driver in use: e1000e
        Kernel modules: e1000e
07:00.0 0200: 8086:1521 (rev 01)
        Subsystem: 8086:0002
        Kernel driver in use: vfio-pci
        Kernel modules: igb
07:00.1 0200: 8086:1521 (rev 01)
        Subsystem: 8086:0002
        Kernel driver in use: vfio-pci
        Kernel modules: igb

First, I blacklisted the igb driver and checked with lsmod that the change took effect. I also disabled SR-IOV, because the card is only used by a single VM, so there is no need for it at all.
Code:
root@proxmox ~ $% cat /etc/modprobe.d/blacklist.conf
blacklist igb
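
For completeness, a quick sketch to verify the blacklist took effect after refreshing the initramfs and rebooting:

Bash:
# igb should no longer be loaded on the host
lsmod | grep -w igb || echo "igb not loaded"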

The next step was to assign the PCIe device to Opnsense:
2024-06-26 07 51 19.png

... and then just run Opnsense, which detects the card.

Within Proxmox, all I can see as NICs are:

2024-06-26 07 56 50.png

enp5s0 - the single Intel card, linked to bridge vmbr0, and used to maintain/access Proxmox, plus some virtual ones I created, but not the dual NIC.
 

Attachments

  • 2024-06-26 07 51 19.png
Thanks for all the info. I am curious: how did you get OPNsense to boot with an EFI disk? I could not make this happen; I ended up having to use SeaBIOS. Are you not defining any VFs for your I350? Are you saying you tried blacklisting igb and then decided not to blacklist it?
 
I created a bootable USB drive for the OPNsense installation, created a VM with q35/UEFI in Proxmox plus the required EFI partition, then launched the installation, waited for the errors (drive not accessible or similar), entered the BIOS, disabled Secure Boot, and installed OPNsense.
 
