[SOLVED] R710 PCI passthrough for HBA card with X5675 processors?

verulian

EDIT + SOLUTION: if you are having trouble with PCI passthrough on a system where you know (or believe) it should work, first determine whether you are booting with GRUB or systemd-boot. If you are booting UEFI (with ZFS root, as here) you are probably on systemd-boot: edit /etc/kernel/cmdline, run pve-efiboot-tool refresh (the tool is now called proxmox-boot-tool), and reboot to get IOMMU properly active and ready for passthrough.

Then check /proc/cmdline (cat /proc/cmdline) to confirm that your boot arguments actually reached the running kernel.
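That check is easy to script. Here's a minimal sketch (check_cmdline is a name I made up, not a Proxmox tool; it just looks for the flag set below):

```shell
# check_cmdline: report whether the IOMMU flag made it into a kernel
# command line string. Hypothetical sanity-check helper.
check_cmdline() {
    case "$1" in
        *intel_iommu=on*) echo "iommu flag present" ;;
        *)                echo "iommu flag missing" ;;
    esac
}

# On a live system you would feed it the running command line:
#   check_cmdline "$(cat /proc/cmdline)"
```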


Original question:

I've followed a couple of guides and troubleshooting processes so far, and I'm not finding a way to fully enable IOMMU in Proxmox 7.

/etc/default/grub has (where the PCI ID is for the HBA I'm using):

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio_iommu_type1.allow_unsafe_interrupts=1 vfio-pci.ids=1000:0087"

I have this in /etc/modules:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
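To verify after a reboot that those modules actually loaded, something like this works (missing_modules is a hypothetical helper of mine; it takes lsmod output as its argument):

```shell
# missing_modules: given the output of `lsmod`, print each vfio module
# from /etc/modules that is NOT currently loaded.
missing_modules() {
    loaded="$1"
    for m in vfio vfio_iommu_type1 vfio_pci vfio_virqfd; do
        printf '%s\n' "$loaded" | grep -q "^$m " || echo "$m"
    done
}

# On a live system:
#   missing_modules "$(lsmod)"
```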


dmesg | grep -e DMAR -e IOMMU shows:
Code:
[    0.012981] ACPI: DMAR 0x00000000CF3B3668 0001C0 (v01 DELL   PE_SC3   00000001 DELL 00000001)
[    0.013035] ACPI: Reserving DMAR table memory at [mem 0xcf3b3668-0xcf3b3827]
[    0.588752] DMAR-IR: This system BIOS has enabled interrupt remapping
[    1.731012] DMAR: Host address width 40
[    1.731017] DMAR: DRHD base: 0x000000fed90000 flags: 0x1
[    1.731043] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap c90780106f0462 ecap f020fe
[    1.731049] DMAR: RMRR base: 0x000000cf4c8000 end: 0x000000cf4dffff
[    1.731053] DMAR: RMRR base: 0x000000cf4b1000 end: 0x000000cf4bffff
[    1.731056] DMAR: RMRR base: 0x000000cf4a1000 end: 0x000000cf4a1fff
[    1.731063] DMAR: RMRR base: 0x000000cf4a3000 end: 0x000000cf4a3fff
[    1.731067] DMAR: RMRR base: 0x000000cf4a5000 end: 0x000000cf4a5fff
[    1.731070] DMAR: RMRR base: 0x000000cf4a7000 end: 0x000000cf4a7fff
[    1.731073] DMAR: RMRR base: 0x000000cf4c0000 end: 0x000000cf4c0fff
[    1.731077] DMAR: RMRR base: 0x000000cf4c2000 end: 0x000000cf4c2fff

Not sure if helpful, but dmesg | grep iommu shows:
Code:
[    0.936773] iommu: Default domain type: Translated
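For what it's worth, once IOMMU really is active the kernel also populates /sys/kernel/iommu_groups (a standard sysfs path, nothing Proxmox-specific), so this is a useful cross-check:

```shell
# Lists every device the kernel has placed in an IOMMU group.
# An empty result means IOMMU is not actually enabled, regardless of
# what /etc/default/grub says.
find /sys/kernel/iommu_groups/ -type l | sort
```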

And of course the web GUI keeps telling me:
Code:
Add: PCI Device
No IOMMU detected, please activate it.See Documentation for further information.


I thought perhaps the Xeon X5675 might not be fully IOMMU-compatible, but people report using these CPUs in a first-generation R710 for PCI passthrough, so I'm highly flustered!
 
Very interesting, @avw, thank you for making this observation. So far I have only found a couple of people discussing it, and in each case it didn't turn out to be the actual problem on the R710. I'll keep digging and researching. It does sound similar, though, which is strange nonetheless, since I am seeing this:
dmesg | grep 'remapping'
Code:
[    0.588752] DMAR-IR: This system BIOS has enabled interrupt remapping
               interrupt remapping is being disabled.  Please

But that person says they had already added the same arguments I added above, and it worked for them.

As per this suggestion
1) Run the "dmesg | grep ecap" command.

2) On the IOMMU lines, the hexadecimal value after "ecap" indicates whether interrupt remapping is supported. If the last character of this value is an 8, 9, a, b, c, d, e, or an f, interrupt remapping is supported. For example, "ecap 1000" indicates there is no interrupt remapping support. "ecap 10207f" indicates interrupt remapping support, as the last character is an "f".
I do see:
dmesg | grep ecap
Code:
[    1.731043] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap c90780106f0462 ecap f020fe

So since the value after `ecap` is f020fe, and its final character "e" falls in the 8–f range, interrupt remapping should be supported.
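That last-character rule is just checking bit 3 of the extended capability register. A small sketch of the same test (ir_supported is my own name for it, not an existing tool):

```shell
# ir_supported: given the hex "ecap" value from dmesg, check whether
# bit 3 (interrupt remapping) is set in the lowest nibble.
ir_supported() {
    last=$(printf '%s' "$1" | tail -c 1)
    if [ $(( 0x$last & 0x8 )) -ne 0 ]; then
        echo "interrupt remapping supported"
    else
        echo "interrupt remapping not supported"
    fi
}

# ir_supported f020fe  -> supported (e = binary 1110, bit 3 set)
# ir_supported 1000    -> not supported (0 = binary 0000)
```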

I am curious, however, because I'm not seeing that my kernel command line is active:
cat /proc/cmdline:
Code:
initrd=\EFI\proxmox\5.11.22-2-pve\initrd.img-5.11.22-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs

Whereas cat /etc/default/grub | grep GRUB_CMDLINE_LINUX shows:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio_iommu_type1.allow_unsafe_interrupts=1 vfio-pci.ids=1000:0087"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"

Similarly, if I just look at dmesg I don't see the rest of the args showing there either:
Code:
[    0.000000] Linux version 5.11.22-2-pve (build@proxmox) (gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP PVE 5.11.22-4 (Tue, 20 Jul 2021 21:40:02 +0200) ()
[    0.000000] Command line: initrd=\EFI\proxmox\5.11.22-2-pve\initrd.img-5.11.22-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Hygon HygonGenuine
...
 
If you have root on ZFS and are booting with UEFI, you need to change /etc/kernel/cmdline instead. More information is in the manual.
 
Appreciate you pointing that out, @avw. Not sure if this has any bearing, but I am booting a ZFS RAID-1 mirror from a PCI card. I couldn't quite tell where the boot loader lives (probably on both drives), so I didn't initially identify which bootloader was in use and made an assumption after the following:

I'm booting with UEFI, but I wasn't aware that GRUB is apparently not used in this case. From the sanity check in the manual, I'm clearly booting UEFI; efibootmgr -v:
Code:
BootCurrent: 0004
Timeout: 0 seconds
BootOrder: 0004,0003,0001,0002,0000,0005
Boot0000* EFI Fixed Disk Boot Device 1  PcieRoot(0x0)/Pci(0x7,0x0)/Pci(0x0,0x0)/Sata(0,0,0)/HD(2,GPT,d2207ae7-c150-45b8-9239-16796d2d526f,0x800,0x100000)
Boot0001  MAS001                PcieRoot(0x0)/Pci(0x1d,0x7)/USB(2,0)/Unit(0)
Boot0002  MAS002                PcieRoot(0x0)/Pci(0x1d,0x7)/USB(2,0)/Unit(1)
Boot0003* Linux Boot Manager    HD(2,GPT,d2207ae7-c150-45b8-9239-16796d2d526f,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
Boot0004* Linux Boot Manager    HD(2,GPT,0ecf17d4-bbe1-4043-9988-4b4892698629,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
Boot0005* EFI Fixed Disk Boot Device 2  PcieRoot(0x0)/Pci(0x7,0x0)/Pci(0x0,0x0)/Sata(1,0,0)/HD(2,GPT,0ecf17d4-bbe1-4043-9988-4b4892698629,0x800,0x100000)

proxmox-boot-tool status:
Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
E442-FB73 is configured with: uefi (versions: 5.11.22-1-pve, 5.11.22-2-pve)
E443-B706 is configured with: uefi (versions: 5.11.22-1-pve, 5.11.22-2-pve)

As is obvious from my previous remarks, I had modified both GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX, with neither set of additional arguments showing up in /proc/cmdline.

I then assumed that systemd-boot was being used (Proxmox apparently uses systemd-boot for UEFI boots with ZFS root) and edited /etc/kernel/cmdline (nano /etc/kernel/cmdline), appending the same arguments I had been trying to use with GRUB.

After issuing proxmox-boot-tool refresh and a full reboot (rather than the kexec fast reboot I use for updates that don't change boot parameters), IT WORKED!!!
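For anyone landing here from a search, the whole fix on my setup (UEFI + ZFS root, hence systemd-boot) boiled down to:

```shell
# 1. Append the passthrough flags to the single line in /etc/kernel/cmdline,
#    so it reads something like (all on one line):
#      root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt
#      vfio_iommu_type1.allow_unsafe_interrupts=1 vfio-pci.ids=1000:0087
# 2. Sync the change onto the ESP(s) and do a full reboot:
proxmox-boot-tool refresh
reboot
# 3. After the reboot, verify the flags took effect:
cat /proc/cmdline
dmesg | grep -e DMAR -e IOMMU
```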

Thank you again for your pointing this out.

I suppose the wiki should include a sanity-check step to help the reader determine which bootloader they're on. I had obviously assumed Proxmox 7 was using GRUB's x86_64-efi build.
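Something like this would do as that sanity check (the paths and entry names are the standard systemd-boot ones; treat it as a sketch, since on Proxmox the authoritative answer is proxmox-boot-tool status):

```shell
# Rough bootloader sanity check. On Proxmox, `proxmox-boot-tool status`
# is authoritative; this just shows where the information comes from.
if [ -d /sys/firmware/efi ]; then
    echo "Booted via UEFI"
    # systemd-boot registers itself in the EFI boot entries:
    efibootmgr -v | grep -q systemd-boot && echo "systemd-boot entry found"
else
    echo "Legacy BIOS boot (GRUB)"
fi
```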
 