Hello! I spent hours trying to figure out how to get this to work, as I'm new to Ubuntu servers and Proxmox. This thread is the top result that pops up when you google "PCI Passthrough Dell R710" lol, so now that I finally got it working I figured I'd share instructions for everyone else having this issue. Specifically, I was trying to do PCI passthrough of my LSI 2308 HBA card to my TrueNAS Scale VM, with the intention of using TrueNAS software RAID.
Please note I did use Google Gemini AI to assist in creating these instructions, but I have reviewed it all prior to posting.
This is the error I was getting in Proxmox when starting TrueNAS Scale with my HBA configured as a PCI device:
kvm: -device vfio-pci,host=0000:04:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:04:00.0: failed to setup container for group 16: Failed to set iommu for container: Operation not permitted
Instructions for PCI Passthrough of an HBA Controller on Proxmox 8.4.0 (Dell R710, latest BIOS at time of post)
Phase 1: BIOS/Firmware Configuration
- Reboot your Dell R710 server.
- Enter BIOS Setup: During boot, repeatedly press the F2 key (or whatever key is indicated for BIOS setup on your screen).
- Navigate to Virtualization Settings: Look for "Processor Settings."
- Enable Intel VT-d / IOMMU: Find the option labeled "Intel VT-d," "Virtualization Technology for Directed I/O," or similar. Ensure it is set to "Enabled."
- Important: If this option is not present or cannot be enabled, your CPU or motherboard may not support IOMMU, or it might be a different Dell R710 configuration.
- Save and Exit: Save your BIOS changes and exit. The server will reboot.
Phase 2: Proxmox Host Configuration (SSH/Console)
- Access your Proxmox server via SSH or directly on the console.
- Verify that Intel Virtualization Technology and IOMMU support are enabled in BIOS:
- Run the command:
dmesg | grep -e DMAR -e IOMMU
- Look for output indicating IOMMU is enabled, such as:
DMAR: IOMMU enabled
If you don't see this, IOMMU might not be properly enabled in BIOS or there's another issue (go back to Phase 1).
- Enable IOMMU in the Proxmox bootloader. (The relax_rmrr option used below comes from a patch carried in the Proxmox kernel that relaxes the RMRR restriction, a known pain point on Dell R710-era hardware.)
- Open the GRUB configuration file:
nano /etc/default/grub
- Find the line starting with GRUB_CMDLINE_LINUX_DEFAULT (on a fresh install it usually reads GRUB_CMDLINE_LINUX_DEFAULT="quiet"; yours may already include intel_iommu=on from earlier attempts)
- Replace the entire line with:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on,relax_rmrr iommu=pt intremap=no_x2apic_optout"
- Save the file (Ctrl+O, Enter) and exit (Ctrl+X).
- Update GRUB:
update-grub
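- To sanity-check that update-grub actually picked up the new parameters, you can grep the generated config (the path below assumes a standard BIOS-boot install like the R710's):
Code:
grep relax_rmrr /boot/grub/grub.cfg
- If nothing prints, re-check your /etc/default/grub edit before moving on.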
- Load VFIO Modules at Boot:
- Open the modules file for editing:
nano /etc/modules
- Add the following lines to the end of the file:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
- Save the file (Ctrl+O, Enter) and exit (Ctrl+X).
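- After the reboot later in these steps, you can confirm the VFIO modules loaded with a quick lsmod check. (Note: on Proxmox 8 with its 6.x kernel, vfio_virqfd has been folded into the main vfio module, so don't worry if it doesn't appear as a separate entry; listing it in /etc/modules is harmless.)
Code:
lsmod | grep vfio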
- Blacklist the HBA's Native Driver (I personally did not do this and had no issues, but Gemini AI recommended it, so I'm leaving the instructions here in case they help someone...):
- You need to identify the driver your HBA card uses so Proxmox doesn't try to claim it.
- Run:
lspci -nnk
- Look for your HBA controller in the output. It might be listed as a "RAID bus controller" or "Serial Attached SCSI controller." Note the "Kernel driver in use:" line. Common drivers for LSI/Broadcom HBAs are mpt3sas or megaraid_sas.
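- For reference, the relevant stanza for a card like mine looks roughly like this (illustrative output based on my SAS2308; your subsystem details and driver may differ):
Code:
04:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
        Kernel driver in use: mpt3sas
        Kernel modules: mpt3sas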
- Create a new modprobe blacklist file:
nano /etc/modprobe.d/blacklist.conf
- Add the following line, replacing your_hba_driver with the driver name you found (e.g., mpt3sas or megaraid_sas):
blacklist your_hba_driver
- Save the file (Ctrl+O, Enter) and exit (Ctrl+X).
- Update Initramfs:
update-initramfs -u -k all
This command rebuilds the initramfs so the module changes and blacklist take effect.
- Identify HBA PCI ID and IOMMU Group:
- Reboot your Proxmox server for all changes to take effect: reboot
- After reboot, log back in via SSH/console.
- Find your HBA's PCI address:
lspci -nn
- Look for your HBA in the output. You'll see an address like 05:00.0 or 00:05.0; the full form adds a 0000: domain prefix (e.g., 0000:05:00.0).
- Note down the vendor:device ID, for example, [1000:0087].
- Find its IOMMU Group:
- Replace YOUR_HBA_PCI_ID in the command below with your HBA's actual PCI address (e.g., 04:00.0):
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done | grep YOUR_HBA_PCI_ID
- You will get a response like this:
IOMMU group 16 04:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
- As you can see, my IOMMU group is 16.
- Now view all IOMMU groups:
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
- Now confirm that no other devices share your HBA's IOMMU group. If any do, I suggest moving the HBA to a different PCI slot, since passing through one device in a shared group causes issues for every device in that group (see the quick check below).
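- A quicker way to inspect just your own group (16 in my case; substitute your group number) is to list its devices directly:
Code:
ls /sys/kernel/iommu_groups/16/devices/
- If more than one PCI address comes back, the group isn't isolated.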
- Edit your VFIO config file:
nano /etc/modprobe.d/vfio.conf
- Ensure it contains only these lines:
Code:
options vfio-pci ids=(insert your PCI ID here)
options vfio_iommu_type1 allow_unsafe_interrupts=1
- For example, mine looked like this:
Code:
options vfio-pci ids=1000:0087
options vfio_iommu_type1 allow_unsafe_interrupts=1
- Be sure to replace the ids value with the vendor:device ID you noted earlier (e.g., 1000:0087), not the PCI address.
- Remove any options vfio_iommu_type1 relax_rmrr=1 line if one is present, as that syntax for relax_rmrr is not supported by the module. Save the file (Ctrl+O, Enter, Ctrl+X).
- Update Initramfs (Crucial!)
update-initramfs -u -k all
- Reboot your Proxmox host:
reboot
- Verify the Kernel Command Line:
cat /proc/cmdline
You should now see intel_iommu=on,relax_rmrr iommu=pt intremap=no_x2apic_optout in this output, matching the GRUB line from earlier. If relax_rmrr is missing, go back and re-check the GRUB step; if the kernel reports it as an unknown parameter, your kernel may not carry the relax_rmrr patch.
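- While you're at it, confirm that vfio-pci (not mpt3sas) has claimed the HBA, substituting your own PCI address:
Code:
lspci -nnk -s 04:00.0
- The "Kernel driver in use:" line should now read vfio-pci. If it still shows the native driver, re-check /etc/modprobe.d/vfio.conf and re-run update-initramfs.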
Phase 3: Virtual Machine Configuration (Proxmox Web UI)
- Open your Proxmox web interface.
- Select the Virtual Machine: Click on the VM you intend to pass the HBA to (e.g., your TrueNAS VM).
- Go to Hardware: Navigate to the "Hardware" tab of the VM.
- Add PCI Device: Click "Add" -> "PCI Device."
- Select your HBA:
- In the "PCI Device" dropdown, select the PCI address corresponding to your HBA controller (e.g., 0000:04:00.0 SAS-2308 PCI-Express Fusion-MPT SAS-2).
- All Functions: Leave this checked.
- Primary GPU: Uncheck this.
- PCI-Express: Check this option. This is generally recommended for better compatibility and performance with newer guest OSes.
- ROM-Bar: Uncheck this, though some HBAs require it for proper initialization. (Mine did not; I recommend trying it unchecked first and re-checking it only if the card doesn't initialize.)
- If the VM still fails to start with an interrupt remapping error, you can force unsafe interrupts on the Proxmox host (note this duplicates the allow_unsafe_interrupts line already set in /etc/modprobe.d/vfio.conf above, so most people following this guide won't need it): echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio-unsafe.conf
- Then: update-initramfs -u -k all
- Then: reboot the Proxmox host.
- Warning: This is generally considered less secure and should only be used if absolutely necessary after all other methods fail. (See the check just below for how to tell whether your platform needs it.)
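- To tell whether your platform actually needs unsafe interrupts, check the kernel log around the time the VM fails to start; a message mentioning allow_unsafe_interrupts or missing interrupt remapping is the usual sign:
Code:
dmesg | grep -i remap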
- Set VM Machine Type (if not already set):
- Go to the "Options" tab for your VM.
- Change the "Machine" type to q35; this provides better PCIe passthrough support. Also set vIOMMU to "Intel (AMD Compatible)". (See the sample VM config after this list.)
- Start the VM: Start your virtual machine. If the passthrough was successful, the guest OS (e.g., TrueNAS) should now see the raw disks connected to your HBA directly! Hope it worked! Enjoy!
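For reference, here is roughly what my VM's config ended up containing after Phase 3 (view it with qm config <vmid> or in /etc/pve/qemu-server/<vmid>.conf). This is a sketch based on my setup, and your hostpci0 line will vary with the checkboxes you picked (pcie=1 corresponds to the PCI-Express box, rombar=0 to an unchecked ROM-Bar):
Code:
machine: q35,viommu=intel
hostpci0: 0000:04:00,pcie=1,rombar=0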
Troubleshooting Tips:
- Check Proxmox Logs: If the VM fails to start or the HBA isn't recognized, check the Proxmox system logs: journalctl -xe or dmesg. Look for messages related to vfio, iommu, or the PCI address of your HBA. (Also don’t forget to try again with ROM-Bar Toggled or Untoggled as previously mentioned)
HBA "IT Mode": Ensure your HBA is actually in "IT Mode" (Initiator-Target mode). Many RAID controllers require a firmware flash to disable their RAID functionality and act as a simple HBA for passthrough. If your HBA is still in RAID mode, the VM won't see individual disks but rather the RAID volumes (which is not ideal for ZFS/TrueNAS).
Final Notes:
If anyone has any corrections to these instructions or can offer tips, please share, as I'm new to Linux servers and Proxmox; this is just what worked for me! Thanks!