Passing PCIe slot of HBA to TrueNAS VM crashes Proxmox

balte

New Member
Mar 29, 2025
4
0
1
Setup

I've recently put together a home server with some new and used parts for a couple of bucks.

The first thing I installed was Proxmox. It's my first experience with Proxmox, and until now, it has been very pleasant. I installed a couple of VMs:
1. Tailscale VM
2. Batocera VM with passthrough GPU
3. TrueNAS VM

Hardware

I was planning on using TrueNAS as my NAS instead of using Proxmox itself. I've read online that instead of passing through individual drives, I should buy an HBA card so I can pass through the whole PCIe slot, similar to how I pass my GPU to the Batocera VM.

So I bought an LSI 9302-8i HBA, which was flashed in IT mode (I have no idea what this means; the seller told me that, and it might be some useful information). Here's also a list of my other hardware:
- GIGABYTE B550M AORUS ELITE mATX
- AMD Ryzen 5 5600G
- ASUS GeForce GTX 1060 6GB
- LSI 9302-8i

The other stuff is probably not noteworthy.

Problem

So, the problem is pretty simple. As soon as I add the HBA card to the TrueNAS VM, the entire Proxmox server crashes. It doesn't shut down or anything; it just stops working. The Web UI becomes unresponsive, and when I attach a monitor and keyboard to the server to access the console, I can't type anything. It shows some logs, though:

Code:
EXT4-fs error (device dm-1) in ext4_reserve_inode_write:5735: IO failure
EXT4-fs (dm-1): Remounting filesystem read-only

The first line varies between crashes; the second line is always the same.

Unfortunately, I had TrueNAS configured to start on Proxmox boot, so Proxmox was pretty much useless after the first time I passed the slot through. I tried changing the VM config to disable Start on Boot but couldn't figure out where the file was stored (I tried both the recovery mode and simply mounting the OS drive onto another system). Anyway, I lost my patience and simply reinstalled everything. This has nothing to do with the actual problem; it's just a little rant.
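(Side note for anyone hitting the same wall: as far as I can tell, Proxmox stores VM configs under /etc/pve/qemu-server/<vmid>.conf, but /etc/pve is a FUSE mount backed by a database at /var/lib/pve-cluster/config.db, which is why the file doesn't show up when you mount the OS drive on another system. On a booted host, or in recovery mode once the pve-cluster service is running, something like this should work; the VMID 100 below is just an example:)

```shell
# Disable "Start at boot" for VM 100 (example VMID) via the Proxmox CLI:
qm set 100 --onboot 0

# Equivalent direct edit of the config file, for when the web UI is down
# (only works while /etc/pve is mounted, i.e. pve-cluster is running):
sed -i 's/^onboot:.*/onboot: 0/' /etc/pve/qemu-server/100.conf
```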


What I've tried

I've tried pretty much every checkbox in the options menu when adding the card. Every combination. Nothing worked.
Searching for the log messages above led me to believe at first that my SSD was failing, but as I later found out, the crash definitely only happens when I pass the HBA through to the VM.

My mainboard has two x16-sized PCIe slots (I don't know if both run at full bandwidth, but two full-size GPUs fit into it). Online, I've read about IOMMU groups and that my mainboard, or the B550 chipset in general, has some pretty bad groups. As far as I understand, my GPU and HBA must be in different groups so I can pass each of them through to an individual VM.

I've searched online for some sort of solution, but I just can't find anything. Is it really my mainboard's IOMMU groups? I barely know what that even means, and I really don't want to buy another mainboard.

Thanks for any input.
 
It's an IOMMU group issue: https://pve.proxmox.com/wiki/PCI_Passthrough#Verify_IOMMU_isolation (see also the manual: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_configuration_14 ).
On the B550 chipset, only the first x16 PCIe slot (and possibly one x4 M.2 slot) can be passed through. All other PCIe slots (and all devices not connected directly to the CPU) are in one big IOMMU group, which cannot be shared between VMs and/or the Proxmox host. Lots of threads about that on this forum.
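You can dump the grouping yourself with a short shell loop over sysfs (this is the standard layout on any Linux kernel with the IOMMU enabled; nothing Proxmox-specific):

```shell
#!/bin/sh
# Print each IOMMU group and the PCI addresses it contains.
# IOMMU_BASE defaults to the real sysfs path; it is overridable so the
# loop can also be tried against a copied or fake tree.
IOMMU_BASE="${IOMMU_BASE:-/sys/kernel/iommu_groups}"
for group in "$IOMMU_BASE"/*; do
    [ -d "$group/devices" ] || continue
    echo "IOMMU group $(basename "$group"):"
    for dev in "$group"/devices/*; do
        [ -e "$dev" ] || continue
        # On a real host, feed the address to `lspci -nns` for a readable name.
        echo "  $(basename "$dev")"
    done
done
```

If your GPU and HBA show up in the same group, you cannot pass them to different VMs.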
 
Hi @leesteken. Thank you very much for your answer.

If I understand you correctly, the first PCIe slot is in one group, and all of the other slots are in a separate group? Or can the first slot be passed through but the rest not because that group also includes the first slot?

Thanks again for the help.
 
I don't fully understand your question, sorry. Searching the internet for "IOMMU groups" seems to return some good documentation about what they are and what they are for. You can check your IOMMU groups yourself: https://pve.proxmox.com/wiki/PCI_Passthrough#Verify_IOMMU_isolation

Because of the grouping (determined by the motherboard PCIe layout and BIOS), in practice passthrough only works with the first PCIe slot and maybe the first M.2 slot. You can try all other PCIe slots but in my experience you can save yourself the trouble: Put the HBA card in the first x16 PCIe slot if you want to pass it through to a VM.
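If you just want to know which group a single card landed in, sysfs exposes it directly (replace 01:00.0 with your HBA's address from `lspci`):

```shell
# The iommu_group symlink points at the device's group directory;
# its basename is the group number.
basename "$(readlink /sys/bus/pci/devices/0000:01:00.0/iommu_group)"
```

Run it once with the card in the first slot and once in the second, and you will see the grouping change.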
 
Okay... yeah, it seems like I have the wrong motherboard for the job. I'm just going to install TrueNAS on bare metal and run my VMs inside of it. I'll have to accept the small performance trade-off, but at least it supports PCIe passthrough. Third time I'm reinstalling the entire system, haha.

Thank you.