PCI passthrough, iommu=on, not working

NCCM
New Member · Jan 4, 2023
I'm at my wits' end on this. I installed a 4-port NIC, a Dell 0HM9JY, to fiddle with a pfSense instance. I tried enabling passthrough of the card, and that's where my troubles began. The machine Proxmox is running on is a Dell OptiPlex 5060, i7-8700K, BIOS v1.23.0. Virtualization and VT for Direct I/O are enabled in the BIOS. I've tried with Trusted Execution both enabled and disabled. Hoping someone can tell me what's wrong.

When I edit GRUB to add intel_iommu=on, the machine boots into Proxmox, but without the normal console output. I get these faults:

Code:
[0.464865] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0xfbf3f000 [fault reason 0x06] PTE Read access is not set
[0.498020] DMAR: DRHD: handling fault status reg 3
[0.498024] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0xfbf3f000 [fault reason 0x06] PTE Read access is not set
[0.531319] DMAR: DRHD: handling fault status reg 3
[0.531323] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0xfbf3f000 [fault reason 0x06] PTE Read access is not set
[0.564624] DMAR: DRHD: handling fault status reg 3
  Found volume group "pve" using metadata type lvm2
  7 logical volume(s) in volume group "pve" now active
/dev/mapper/pve-root: clean, 50038/6291456 files, 4296881/25165824 blocks.

Then it drops to the Proxmox login prompt.

My GRUB file (/etc/default/grub):

Code:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

My kernel command line (/proc/cmdline):
Code:
BOOT_IMAGE=/boot/vmlinuz-5.15.74-1-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on

dmesg
Code:
[    0.010472] ACPI: DMAR 0x00000000AEDFEE80 0000A8 (v01 INTEL  EDK2     00000002      01000013)
[    0.010499] ACPI: Reserving DMAR table memory at [mem 0xaedfee80-0xaedfef27]
[    0.087681] DMAR: IOMMU enabled
[    0.224004] DMAR: Host address width 39
[    0.224005] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.224009] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.224011] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.224014] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.224016] DMAR: RMRR base: 0x000000af8ac000 end: 0x000000afaf5fff
[    0.224017] DMAR: RMRR base: 0x000000bb000000 end: 0x000000bd7fffff
[    0.224019] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.224020] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.224021] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.227156] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.458932] DMAR: No ATSR found
[    0.458932] DMAR: No SATC found
[    0.458935] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.458936] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.458937] DMAR: IOMMU feature nwfs inconsistent
[    0.458938] DMAR: IOMMU feature pasid inconsistent
[    0.458938] DMAR: IOMMU feature eafs inconsistent
[    0.458939] DMAR: IOMMU feature prs inconsistent
[    0.458940] DMAR: IOMMU feature nest inconsistent
[    0.458940] DMAR: IOMMU feature mts inconsistent
[    0.458941] DMAR: IOMMU feature sc_support inconsistent
[    0.458941] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.458942] DMAR: dmar0: Using Queued invalidation
[    0.458945] DMAR: dmar1: Using Queued invalidation
[    0.460986] DMAR: Intel(R) Virtualization Technology for Directed I/O

IOMMU groups
Code:
IOMMU group 0 00:00.0 Host bridge [0600]: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers [8086:3ec2] (rev 07)
IOMMU group 10 00:1b.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 [8086:a340] (rev f0)
IOMMU group 11 00:1d.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 [8086:a330] (rev f0)
IOMMU group 12 00:1f.0 ISA bridge [0601]: Intel Corporation Q370 Chipset LPC/eSPI Controller [8086:a306] (rev 10)
IOMMU group 12 00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS [8086:a348] (rev 10)
IOMMU group 12 00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller [8086:a323] (rev 10)
IOMMU group 12 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller [8086:a324] (rev 10)
IOMMU group 12 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-V [8086:15bc] (rev 10)
IOMMU group 13 02:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/980PRO [144d:a80a]
IOMMU group 14 03:00.0 PCI bridge [0604]: Microsemi / PMC / IDT PES12N3A 12-lane 3-Port PCI Express Switch [111d:8018] (rev 0e)
IOMMU group 15 04:02.0 PCI bridge [0604]: Microsemi / PMC / IDT PES12N3A 12-lane 3-Port PCI Express Switch [111d:8018] (rev 0e)
IOMMU group 15 05:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10e8] (rev 01)
IOMMU group 15 05:00.1 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10e8] (rev 01)
IOMMU group 16 04:04.0 PCI bridge [0604]: Microsemi / PMC / IDT PES12N3A 12-lane 3-Port PCI Express Switch [111d:8018] (rev 0e)
IOMMU group 16 06:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10e8] (rev 01)
IOMMU group 16 06:00.1 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10e8] (rev 01)
IOMMU group 1 00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
IOMMU group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107GL [Quadro P1000] [10de:1cb1] (rev a1)
IOMMU group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)
IOMMU group 2 00:02.0 VGA compatible controller [0300]: Intel Corporation CometLake-S GT2 [UHD Graphics 630] [8086:3e92]
IOMMU group 3 00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [8086:1911]
IOMMU group 4 00:12.0 Signal processing controller [1180]: Intel Corporation Cannon Lake PCH Thermal Controller [8086:a379] (rev 10)
IOMMU group 5 00:14.0 USB controller [0c03]: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller [8086:a36d] (rev 10)
IOMMU group 5 00:14.2 RAM memory [0500]: Intel Corporation Cannon Lake PCH Shared SRAM [8086:a36f] (rev 10)
IOMMU group 6 00:14.3 Network controller [0280]: Intel Corporation Wireless-AC 9560 [Jefferson Peak] [8086:a370] (rev 10)
IOMMU group 7 00:15.0 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #0 [8086:a368] (rev 10)
IOMMU group 8 00:16.0 Communication controller [0780]: Intel Corporation Cannon Lake PCH HECI Controller [8086:a360] (rev 10)
IOMMU group 8 00:16.3 Serial controller [0700]: Intel Corporation Cannon Lake PCH Active Management Technology - SOL [8086:a363] (rev 10)
IOMMU group 9 00:17.0 SATA controller [0106]: Intel Corporation Cannon Lake PCH SATA AHCI Controller [8086:a352] (rev 10)
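For reference, a listing like this can be generated with a small shell loop over /sys (one common approach, shown here as a sketch; any equivalent script gives the same output):

Code:
# print each PCI device together with its IOMMU group number
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}
    printf 'IOMMU group %s ' "${g%%/*}"
    lspci -nns "${d##*/}"
done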

Thanks in advance.
 
Try
Code:
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt initcall_blacklist=sysfb_init"
Afterwards, don't forget:
Code:
update-grub
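After rebooting, you can confirm the DMAR faults are gone and the IOMMU is still active with, for example:
Code:
dmesg | grep -e DMAR -e IOMMU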
 
When I edit GRUB to add intel_iommu=on, the machine boots into Proxmox, but without the normal console output. I get these faults:

Code:
[0.464865] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0xfbf3f000 [fault reason 0x06] PTE Read access is not set
[0.498020] DMAR: DRHD: handling fault status reg 3
[0.498024] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0xfbf3f000 [fault reason 0x06] PTE Read access is not set
[0.531319] DMAR: DRHD: handling fault status reg 3
[0.531323] DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0xfbf3f000 [fault reason 0x06] PTE Read access is not set
[0.564624] DMAR: DRHD: handling fault status reg 3
  Found volume group "pve" using metadata type lvm2
  7 logical volume(s) in volume group "pve" now active
/dev/mapper/pve-root: clean, 50038/6291456 files, 4296881/25165824 blocks.

IOMMU group 2 00:02.0 VGA compatible controller [0300]: Intel Corporation CometLake-S GT2 [UHD Graphics 630] [8086:3e92]
Looks like the integrated graphics does not work well with the IOMMU on. Try intel_iommu=igfx_off instead to exclude the integrated graphics from the IOMMU. You can also try adding iommu=pt to prevent the IOMMU from using a mapping for devices that are not passed through.
 
Try
Code:
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt initcall_blacklist=sysfb_init"
Afterwards, don't forget:
Code:
update-grub
That removed the errors, but the NIC, while showing up, will not connect to the network.

Code:
[    0.000000] Linux version 5.15.74-1-pve (build@proxmox) (gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP PVE 5.15.74-1 (Mon, 14 Nov 2022 20:17:15 +0100) ()
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.15.74-1-pve root=/dev/mapper/pve-root ro intel_iommu=on iommu=pt initcall_blacklist=sysfb_init
[    6.178913] vmbr0: port 1(enp5s0f0) entered blocking state
[    6.178932] vmbr0: port 1(enp5s0f0) entered disabled state
[    6.178981] device enp5s0f0 entered promiscuous mode
[    6.402706] vmbr1: port 1(enp5s0f1) entered blocking state
[    6.402724] vmbr1: port 1(enp5s0f1) entered disabled state
[    6.402775] device enp5s0f1 entered promiscuous mode
[    6.614433] vmbr2: port 1(enp6s0f0) entered blocking state
[    6.614451] vmbr2: port 1(enp6s0f0) entered disabled state
[    6.614557] device enp6s0f0 entered promiscuous mode
[    6.822000] vmbr3: port 1(enp6s0f1) entered blocking state
[    6.822019] vmbr3: port 1(enp6s0f1) entered disabled state
[    6.822070] device enp6s0f1 entered promiscuous mode
 
Looks like the integrated graphics does not work well with the IOMMU on. Try intel_iommu=igfx_off instead to exclude the integrated graphics from the IOMMU. You can also try adding iommu=pt to prevent the IOMMU from using a mapping for devices that are not passed through.
This also removed the errors and allowed me to connect to the network, but when I tried to pass the NIC through to a VM, it said that IOMMU was not available.
 
Maybe it needs to be intel_iommu=on,igfx_off.
So it looks like that does enable IOMMU, but it makes the network unreachable. I ran:
Code:
lsmod | grep -i iommu
vfio_iommu_type1 ##### 0
vfio #### 2 vfio_pci_core,vfio_iommu_type1

Looks like Proxmox is adding the NIC to IOMMU groups.
 
Maybe the network controller just doesn't work with passthrough? It is not uncommon. Do you know of anyone having success with passthrough of that particular device?
The IOMMU groups look fine. What does your VM configuration file look like?
Do you early-bind the network controller to vfio-pci to make sure Proxmox does not touch it before the VM starts? Check the driver in use with lspci -k after a host reboot and before starting the VM. Sometimes you need a softdep to make sure vfio-pci is loaded before the actual driver.
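As a rough sketch of what early binding looks like (the 8086:10e8 vendor:device ID is taken from your IOMMU group listing; note that binding by ID will claim all four ports, since they share it):
Code:
# /etc/modprobe.d/vfio.conf (sketch)
# hand the 82576 functions to vfio-pci instead of igb
options vfio-pci ids=8086:10e8
# make sure vfio-pci loads before igb can claim the ports
softdep igb pre: vfio-pci
Then run update-initramfs -u -k all, reboot, and check lspci -k again; the ports should now list vfio-pci as the driver in use.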
 
Maybe the network controller just doesn't work with passthrough? It is not uncommon. Do you know of anyone having success with passthrough of that particular device?
The IOMMU groups look fine. What does your VM configuration file look like?
Do you early-bind the network controller to vfio-pci to make sure Proxmox does not touch it before the VM starts? Check the driver in use with lspci -k after a host reboot and before starting the VM. Sometimes you need a softdep to make sure vfio-pci is loaded before the actual driver.
No idea how to do an early bind.

My lspci -k for the network card:
Code:
05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    Subsystem: Intel Corporation Gigabit ET Quad Port Server Adapter
    Kernel driver in use: igb
    Kernel modules: igb
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    Subsystem: Intel Corporation Gigabit ET Quad Port Server Adapter
    Kernel driver in use: igb
    Kernel modules: igb
06:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    Subsystem: Intel Corporation Gigabit ET Quad Port Server Adapter
    Kernel driver in use: igb
    Kernel modules: igb
06:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    Subsystem: Intel Corporation Gigabit ET Quad Port Server Adapter
    Kernel driver in use: igb
    Kernel modules: igb

The VM conf:

Code:
boot: order=virtio0;ide2
cores: 12
hostpci0: 0000:05:00
hostpci1: 0000:05:00
ide2: local:iso/pfSense-CE-2.5.2-RELEASE-amd64.iso,media=cdrom,size=636498K
memory: 8096
meta: creation-qemu=7.1.0,ctime=1678561853
name: pfsense
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=69d28e14-1324-42ee-8c9c-94b35b18b261
sockets: 1
virtio0: local-lvm:vm-100-disk-0,iothread=1,size=32G
vmgenid: a3e45495-de05-474c-9016-1892987eb594

But it's not just the VM that isn't getting the network; it's the Proxmox host.
 
No idea how to do an early bind.
Did you see the options vfio-pci ids=1234:5678,4321:8765 example in the Proxmox manual that I linked to?
My lspci -k for the network card:
Code:
05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    Subsystem: Intel Corporation Gigabit ET Quad Port Server Adapter
    Kernel driver in use: igb
    Kernel modules: igb
05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    Subsystem: Intel Corporation Gigabit ET Quad Port Server Adapter
    Kernel driver in use: igb
    Kernel modules: igb
06:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    Subsystem: Intel Corporation Gigabit ET Quad Port Server Adapter
    Kernel driver in use: igb
    Kernel modules: igb
06:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    Subsystem: Intel Corporation Gigabit ET Quad Port Server Adapter
    Kernel driver in use: igb
    Kernel modules: igb

The VM conf:

Code:
boot: order=virtio0;ide2
cores: 12
hostpci0: 0000:05:00
hostpci1: 0000:05:00
ide2: local:iso/pfSense-CE-2.5.2-RELEASE-amd64.iso,media=cdrom,size=636498K
memory: 8096
meta: creation-qemu=7.1.0,ctime=1678561853
name: pfsense
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=69d28e14-1324-42ee-8c9c-94b35b18b261
sockets: 1
virtio0: local-lvm:vm-100-disk-0,iothread=1,size=32G
vmgenid: a3e45495-de05-474c-9016-1892987eb594
You are doing passthrough of device 05:00 with all functions (.0 and .1) twice, so that won't work. Simply remove one of them.
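In other words, something like this (a sketch, assuming you meant to pass through both dual-port controllers; otherwise just drop hostpci1):
Code:
hostpci0: 0000:05:00
hostpci1: 0000:06:00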
But it's not just the VM that isn't getting the network; it's the Proxmox host.
Looks like your Proxmox host also uses 05:00.0 and 05:00.1 for vmbr0 and vmbr1, which is problematic if you want to pass those through.

Or do the 06:00 and 05:00 network controllers not work before starting the VM? Then I assume that they are incompatible with IOMMU, which is rather unexpected but not impossible. Does your motherboard have SR-IOV support and can you try enabling it? Or maybe try disabling it (which might change the IOMMU groups).
Try searching this forum and the internet for the Intel 82576 to see if anyone got it working with IOMMU/passthrough. Maybe someone knows how to get it to work.
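If you want to check whether the card itself advertises SR-IOV, something like this (as root) will show the capability if it is present:
Code:
lspci -vvv -s 05:00.0 | grep -i -A3 'sr-iov'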
 
Apologies, I did not see the link. That looks like it was my issue; as soon as I read this line:
But, if you pass through a device to a virtual machine, you cannot use that device anymore on the host or in any other VM.
I knew the problem. In testing the 4-port NIC out, I had been using it for the host. A couple of months passed and I completely forgot that I had been using the other card. Looks like I'm all set. Many thanks.
 