[SOLVED] How to exclude network card from PCI Passthrough?

majorgear
Member
Mar 3, 2022
I installed an Intel X520 10Gb network card in my Proxmox server with the intention of making it a bridged interface for guests to share. My PVE is version 7.1-7.
However, the card doesn't appear in the "System -> Network" section of my PVE node, yet if I go to a VM's "Hardware -> Add PCI Device" menu, it shows up in the list of devices with IOMMU enabled.

GRUB command line
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"
[Screenshot: Add PCI Device menu]

lspci output
Code:
root@pve02:/etc# lspci -k | sed -n '/Ethernet/,/Kernel/p'
DeviceName: Onboard - Ethernet
Subsystem: Intel Corporation Wireless-AC 9560 [Jefferson Peak]
Kernel driver in use: iwlwifi
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-V (rev 10)
DeviceName: Onboard - Ethernet
Subsystem: Micro-Star International Co., Ltd. [MSI] Ethernet Connection (7) I219-V
Kernel driver in use: e1000e
01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
Subsystem: Intel Corporation Ethernet Server Adapter X520-1
Kernel driver in use: vfio-pci

My guess is that I need to exclude the device from IOMMU/vfio, since it shows "Kernel driver in use: vfio-pci", but I don't know how to do that.
On another node where I have a GPU, I have some files in the /etc/modprobe.d/ folder that, with the right entries, might do what I want. I copied them to my Intel X520 node and renamed them with a ".bak" extension to prevent the system from using them.

Code:
root@pve02:/etc# ls -l modprobe.d/
total 20
-rw-r--r-- 1 root root 221 Mar 5 23:06 blacklist.conf.bak
-rw-r--r-- 1 root root 51 Mar 5 22:12 iommu_unsafe_interrupts.conf.bak
-rw-r--r-- 1 root root 26 Mar 5 22:20 kvm.conf.bak
-rw-r--r-- 1 root root 171 Nov 24 10:32 pve-blacklist.conf.bak
-rw-r--r-- 1 root root 31 Mar 5 22:19 vfio.conf.bak

From dmesg I see the ixgbe driver and the device name, so at some point during boot the Intel card is set up with the correct (non-passthrough) driver.


Code:
root@pve02:~# dmesg | grep -i ixgbe
[    3.374658] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[    3.374659] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[    3.551519] ixgbe 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 16, Tx Queue count = 16 XDP Queue count = 0
[    3.551810] ixgbe 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[    3.552133] ixgbe 0000:01:00.0: MAC: 2, PHY: 14, SFP+: 3, PBA No: G30771-004
[    3.552134] ixgbe 0000:01:00.0: a0:36:9e:25:8a:c7
[    3.553023] ixgbe 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
[    3.553131] libphy: ixgbe-mdio: probed
[    3.613653] ixgbe 0000:01:00.0 enp1s0: renamed from eth0
[   17.807306] ixgbe 0000:01:00.0: complete
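
For what it's worth, I believe something like this would show what takes the card over after ixgbe sets it up (01:00.0 is the card's address from the lspci output above):

Code:
# Any vfio messages around the handover:
dmesg | grep -i vfio
# Which driver is bound to the card right now:
lspci -nnk -s 01:00.0 | grep -i 'in use'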


Any guidance would be appreciated. I remember a week ago I was fighting to get some devices included in IOMMU, and now I'm trying to do the opposite!
 
leesteken
A PCI(e) device can have the vfio-pci driver because you used it (temporarily) for passthrough to a VM and have not rebooted the Proxmox host yet. (EDIT: see post #4.)
Or the device is explicitly assigned to the vfio-pci driver on the kernel command line (check cat /proc/cmdline for vfio-pci.ids=..., caused by /etc/default/grub or /etc/kernel/cmdline). Note that vfio-pci can also be written as vfio_pci. Changes to the kernel parameters have no effect until you run update-grub or proxmox-boot-tool refresh and reboot the Proxmox host.
Or the device is explicitly assigned to the vfio-pci driver in a file that ends in .conf in the /etc/modprobe.d/ directory (check for options vfio-pci ids=...). It looks like you renamed (or removed) all files in /etc/modprobe.d/, but changes to files in this directory have no effect until you run update-initramfs -u and reboot the Proxmox host. And I would advise not to rename/remove the pve-blacklist.conf file, as it is part of the Proxmox installation.
All three possibilities can be happening at the same time, because you need to run those commands before rebooting for the changes to take effect.
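
A quick sketch to check all three places at once, assuming the card is still at 0000:01:00.0 as in your lspci output (adjust the address otherwise):

Code:
# 1. Is vfio-pci assigned on the kernel command line actually in use?
cat /proc/cmdline | tr ' ' '\n' | grep -E 'vfio[-_]pci'
# 2. Is vfio-pci assigned in a modprobe config?
grep -rn vfio /etc/modprobe.d/
# 3. Which driver is bound to the card right now?
lspci -nnk -s 01:00.0

If you want the NIC back without rebooting, the standard sysfs interface can rebind it by hand (double-check the address before echoing):

Code:
echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
echo 0000:01:00.0 > /sys/bus/pci/drivers/ixgbe/bind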
 
majorgear
The shared IOMMU group was the issue. I didn't notice that there were multiple devices in the same group until later. What was happening was: when the VM with a PCI passthrough card grabbed the HBA card, it also took control of the Intel NIC in the same group, which is why the host lost access to it.

I added the ACS override parameter "pcie_acs_override=downstream,multifunction" to the grub command line, updated grub, rebooted, and it was fixed.
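
For anyone else hitting this, a minimal sketch to list the IOMMU groups and confirm the NIC and the HBA now land in separate ones (it just reads the standard sysfs layout; nothing Proxmox-specific):

Code:
#!/bin/bash
# Print every PCI device, grouped by IOMMU group.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -n "  "
        lspci -nns "${d##*/}"
    done
done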

@leesteken Thanks for the info about initramfs. I've been running it only after update-grub. I didn't even know why I was doing it; it just happened to be in some instructions I came across. Now I understand when I need to run it!
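
To summarize my (possibly imperfect) understanding of which command applies which kind of change:

Code:
# Edited /etc/default/grub or /etc/kernel/cmdline?
update-grub                  # systems booting via GRUB
proxmox-boot-tool refresh    # systems using proxmox-boot-tool / systemd-boot
# Edited a .conf file under /etc/modprobe.d/?
update-initramfs -u
# Either way, the change only takes effect after the next reboot.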

The issue is solved. I'll see if I can mark the thread as "solved".
 
leesteken
Ah right, good find. So the first option should have been: a PCI(e) device can have the vfio-pci driver because it is part of an IOMMU group used (temporarily) for passthrough to a VM (and the host has not been rebooted yet).
Please note that pcie_acs_override breaks the security isolation between groups. In principle, the network PCI(e) device can read all of the host memory (including that of all other VMs) at any time and communicate it unnoticed via the PCI(e) bus to the device(s) passed through to the VM. Best not to run untrusted software or allow untrusted users on that VM.
 
