Unable to pass through HBA card to TrueNAS

birdie16

New Member
Jan 25, 2025
As the title suggests, I'm attempting to pass through an HBA card to a TrueNAS VM and I'm not having much luck.

I'm new to Linux and Proxmox in general, so I could just be doing something glaringly wrong, but I'm unsure how to proceed at this point. I can see all the drives in the Proxmox web GUI and with lsblk; they are all detected and can be accessed individually. When I start the VM, the boot fails with:

TASK ERROR: start failed: QEMU exited with code 1.

The HDDs disappear from the web GUI, and they also disappear from lsblk, which now only shows sde (my boot drive). It should be noted that the HBA itself is still visible to Proxmox, and I can add and remove it as a device as normal. If I reboot the PC, the drives return.

If the HBA is not added as a PCI passthrough device, the TrueNas VM will boot successfully.
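For reference, starting the VM from the host shell prints the full QEMU error message rather than just the exit code (100 below is a placeholder VMID):

```shell
# Start the VM from the CLI to see the full QEMU error message
qm start 100

# Print the full QEMU command line Proxmox would use for this VM
qm showcmd 100
```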

Configuration

I'm running the following specs:
  • Processor - 7th gen Intel Core i5-7400
  • HBA - LSI 9211-8i (flashed to IT mode)
  • Motherboard - Gigabyte GA-H110M-S2H
  • 8 GB RAM
  • Boot SSD (connected internally to the motherboard)
  • 4 SATA HDDs connected to the HBA
I flashed the HBA to IT mode myself (this is my first time doing so) and can confirm the updated firmware below:

(attachment: sas2flash -listall.PNG)

In the BIOS I can confirm that my motherboard and CPU support VT-d and that it is enabled. The HBA is also configurable through a SAS2 MPT controller menu inside my motherboard BIOS.

On the Proxmox side
  • My 4 HBA HDDs are labeled sda-sdd
  • My boot SSD is labeled sde
My GRUB is configured as follows, as per suggestions online:
/etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream"
  • Initially my HBA was listed in the same IOMMU group as another device (6th-10th Gen Core Processor PCIe Controller (x16)), so I added the "pcie_acs_override=downstream" option, after which the HBA moved to IOMMU group 13
I have also made sure to run update-grub, which completed successfully.
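To double-check that the kernel picked up the flags, the IOMMU groups can be listed straight from sysfs; a small sketch (if IOMMU is not active, the directory is simply empty and nothing is printed):

```shell
# Print every IOMMU group and the devices in it
for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue
    g=${d#/sys/kernel/iommu_groups/}
    g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done
```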

I have also modified /etc/modules to include

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
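After editing /etc/modules, the initramfs needs rebuilding so the modules load at early boot; a quick sketch (note: on recent Proxmox 8 / kernel 6.2+ hosts, vfio_virqfd has been merged into vfio, so a warning about that module can be ignored there):

```shell
# Rebuild the initramfs so the vfio modules are included at early boot
update-initramfs -u -k all

# After a reboot, confirm the modules are actually loaded
lsmod | grep vfio
```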

TrueNas VM Configuration

BIOS = SeaBIOS
SCSI Controller = VirtIO SCSI single
Qemu Agent = True
Discard = True
IO thread = True
SSD emulation = True
Async IO = Default (io_uring)
4 Core CPU
8GB RAM (This is the max RAM installed into the system)
No changes to Network Settings

In the TrueNAS VM's hardware settings, I can select and add the HBA:

(attachment: 1737893881792.png)
  • All Functions = True
  • For ROM-bar I have tried both True and False, as some posts have suggested, and this has not made any difference
  • PCI-Express cannot be enabled (Q35 only)
I have seen instances where people pass the drives directly to the VM; however, this would be my last resort. I would prefer not to do this since I have the HBA, but if this is a physical limitation then of course I have no other option.
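For reference, the passthrough device ends up as a hostpci line in the VM's config file; a sketch assuming VMID 100 and the HBA at PCI address 0000:01:00.0 (both placeholders):

```shell
# View the VM's config; the passthrough entry looks like:
#   hostpci0: 0000:01:00.0,rombar=0
cat /etc/pve/qemu-server/100.conf
```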

Any help or insight into this would be much appreciated.
 
Hi,

are there any errors in the system log (journalctl -b -e)?

Most likely, though, the problem is that you have assigned 8GB to the VM, but the host also only has 8GB. You can never assign the full amount, as the host obviously also needs something to work with ;)
This matters especially with passthrough, as it means all memory must be pre-allocated. The configuration happens to work without passthrough because memory is allocated dynamically - although at some point you'd still run into OOM problems, of course.

I'd first try reducing the memory assigned to the VM to e.g. 4GB and trying again.
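To illustrate the point: with passthrough, QEMU pre-allocates the guest's full RAM at start, so the whole amount must be free on the host beforehand. A quick check (VMID 100 below is just a placeholder):

```shell
# Show how much memory the host actually has free
free -h

# Lower the VM's allocation from the CLI instead of the GUI if preferred
# (100 is a placeholder VMID; 4096 = 4GB)
# qm set 100 --memory 4096
```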
 
Changing the RAM allocation to 4GB fixed my issue (I will likely be upgrading the RAM in the future anyway).

I can now interact with the drives in my TrueNas VM.

As a note: with ROM-bar=1, the VM would keep booting into the installer and would not start TrueNAS for me; this was solved by simply setting the value to 0.

Just for confirmation: I did check the system log with journalctl -b -e and didn't find any errors relating to this issue beyond what was shown in the web GUI (TASK ERROR: start failed: QEMU exited with code 1, when my RAM allocation was 8GB).

I did, however, notice the following error appearing a considerable number of times:

Jan 28 17:13:32 pve rrdcached[992]: handle_request_update: Could not read RRD file.
Jan 28 17:13:32 pve pmxcfs[1013]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-node/pve: -1
Jan 28 17:13:32 pve pmxcfs[1013]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-node/pve: mmaping file '/var/lib/rrdcached/db/pve2-node/pve': Invalid argument

I opened the file this error references ("/var/lib/rrdcached/db/pve2-node/pve") but it was empty. Either way, this isn't related to the original post and isn't causing me any issues right now.

Cheers!