IOMMU Support for Z840

tarantula78

New Member
Dec 25, 2025
Hi all, I have been trying to get this working for the better part of a year and a half. I would work on the issue for many hours across a few days, give up for a few months, then come back again. After updating the BIOS to the latest version provided by HP (2.60), I can honestly say I have now exhausted every option, tutorial, and config I could find. If someone can provide some additional support on this I would be very grateful. If there is nothing more to be done beyond what I have already tried, I will assume I have a faulty motherboard or some other hardware issue. No matter what I do, Proxmox always tells me "No IOMMU detected".

So let's start with some details. My configuration:

Hardware and Kernel:
Code:
CPU(s) 48 x Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz (2 Sockets)
Kernel Version Linux 6.17.4-1-pve (2025-12-03T15:42Z)
Boot Mode EFI

PVE Version: 9.1.4

BIOS settings enabled:
Intel Virtualization (VT-x and VT-d)
PCI ACS

Current /proc/cmdline:
initrd=\EFI\proxmox\6.17.4-1-pve\initrd.img-6.17.4-1-pve root=/dev/pve/root intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction

**EDIT NOTE**: I have tried both with and without the pcie_acs_override patch; the above is just the current state.

Code:
root@pve:~# dmesg | grep -i dmar
[    0.316287] DMAR: IOMMU enabled
root@pve:~# dmesg | grep -i remapping
[    0.779139] x2apic: IRQ remapping doesn't support X2APIC mode

/etc/modules
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Code:
root@pve:~# lsmod | grep vfio
vfio_pci               20480  0
vfio_pci_core          86016  1 vfio_pci
irqbypass              16384  2 vfio_pci_core,kvm
vfio_iommu_type1       49152  0
vfio                   65536  3 vfio_pci_core,vfio_iommu_type1,vfio_pci
iommufd               126976  1 vfio

Code:
root@pve:~# pvesh get /nodes/pve/hardware/pci --pci-class-blacklist ""
All IOMMU groups report as -1
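As I understand it (this is my assumption, not something I found in the Proxmox docs), the "No IOMMU detected" warning and the -1 groups come down to /sys/kernel/iommu_groups being empty. Here is a minimal sketch of that check as a shell function; the path argument exists only so the check can be exercised against a test directory, and on the real host I call it with no argument:

```shell
# check_iommu_groups: counts entries under the IOMMU groups sysfs directory.
# Assumption: a working IOMMU setup populates /sys/kernel/iommu_groups with
# one numbered subdirectory per group; an empty directory means no groups.
check_iommu_groups() {
    dir="${1:-/sys/kernel/iommu_groups}"
    count=$(ls "$dir" 2>/dev/null | wc -l)
    if [ "$count" -gt 0 ]; then
        echo "found $count IOMMU group(s) under $dir"
    else
        echo "no IOMMU groups under $dir"
    fi
}
```

On my machine `check_iommu_groups` reports no groups, which matches what pvesh shows.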

Here are some of the many tutorials I have tried:
https://pve.proxmox.com/wiki/PCI_Passthrough#Verifying_IOMMU_parameters
https://pve.proxmox.com/wiki/PCI_Passthrough
Various reddit threads/guides such as https://www.reddit.com/r/Proxmox/comments/1mjer1r/solved_proxmox_84_90_gpu_passthrough_host_freeze/
https://www.reddit.com/r/Proxmox/comments/1exjm1y/no_iommu_detected_please_activate_it/

Initially the goal was to pass through the GPU using kernel params like video=efifb:off video=vesafb:off, but I have removed those parameters until I can see IOMMU groups actually being created; then I can add them back. For now, if I can just pass through my network cards to my OPNsense VM, I will be happy.
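For reference, once groups do show up, my plan for the NICs is the usual vfio-pci binding via modprobe config. The vendor:device ID below is a placeholder, not my actual hardware; I would replace it with the real pairs reported by `lspci -nn`:

```shell
# /etc/modprobe.d/vfio.conf
# 8086:1533 is a PLACEHOLDER ID -- substitute the vendor:device pairs
# shown by `lspci -nn` for the NICs being passed through
options vfio-pci ids=8086:1533
# afterwards, rebuild the initramfs so the binding applies at boot:
#   update-initramfs -u -k all
```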

I wonder if there is some odd limitation where PCI passthrough won't work with multiple CPUs? Most of the threads I have read online involve single-CPU setups, but this is just me grasping at straws.

Proxmox is installed on a single SSD, while a separate ZFS pool (RAIDZ-2) of other hard disks handles larger data storage, backups, etc.

Again, if anyone can assist I would be very grateful. I was tempted to install vanilla Debian or Ubuntu and see if passthrough works there; maybe I need a fresh install?
 