Setup
I recently put together a home server from a mix of new and used parts for relatively little money.
The first thing I installed was Proxmox. It's my first experience with Proxmox, and so far it has been very pleasant. I set up a couple of VMs:
1. Tailscale VM
2. Batocera VM with passthrough GPU
3. TrueNAS VM
Hardware
I was planning on using TrueNAS as my NAS instead of using Proxmox itself. I've read online that instead of passing through individual drives, I should buy an HBA card so I can pass through the whole PCIe device, similar to how I pass my GPU to the Batocera VM.
So I bought an LSI 9302-8i HBA that was flashed to IT mode (I have no idea what this means; the seller told me that, and it might be useful information). Here's a list of my other hardware:
- GIGABYTE B550M AORUS ELITE mATX
- AMD Ryzen 5 5600G
- ASUS GeForce GTX 1060 6GB
- LSI 9302-8i
The other stuff is probably not noteworthy.
Problem
So, the problem is pretty simple. As soon as I add the HBA card to the TrueNAS VM, the entire Proxmox server crashes. It doesn't shut down or anything; it just stops working. The Web UI becomes unresponsive, and when I connect a monitor and keyboard to the server to access the console, I can't type anything. It does show some logs, though:
Code:
EXT4-fs error (device dm-1) in ext4_reserve_inode_write:5735: IO failure
EXT4-fs (dm-1): Remounting filesystem read-only
The first line varies a lot between crashes. The second line is always the same, though.
Unfortunately, I had TrueNAS configured to start on Proxmox boot, so Proxmox was pretty much unusable after the first time I passed the slot through. I tried to disable Start on Boot in the VM config but couldn't figure out where the file was stored (I tried both recovery mode and simply mounting the OS drive on another system). Anyway, I lost my patience and simply reinstalled everything. This has nothing to do with the problem itself; it's just a little rant.
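For anyone stuck in the same spot: if I understand it correctly, the VM configs live under /etc/pve/qemu-server/<vmid>.conf on a standard Proxmox install, and /etc/pve is a FUSE filesystem backed by a database, which would explain why the files aren't visible when you just mount the OS drive on another machine. While the host is still running you can apparently use `qm set <vmid> --onboot 0`. The flag itself is a single line in that file; a minimal sketch of flipping it, using a scratch copy of the file and an assumed VM ID of 100:

```shell
# Demo on a scratch copy; on a real host the file would be
# /etc/pve/qemu-server/100.conf (VM ID 100 is an assumption).
conf=/tmp/100.conf
printf 'onboot: 1\nmemory: 8192\n' > "$conf"

# Disable start-on-boot by flipping the flag in place
sed -i 's/^onboot: 1$/onboot: 0/' "$conf"

grep '^onboot' "$conf"   # now: onboot: 0
```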
What I've tried
I've tried pretty much every checkbox in the options dialog when adding the card, in every combination. Nothing worked.
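For reference, those checkboxes all end up as a single hostpci line in the VM's config file, so the combinations I tried should correspond to variants like the following (the PCI address 01:00.0 is just a placeholder, and I'm going off the standard Proxmox hostpci syntax here):

```
# in /etc/pve/qemu-server/<vmid>.conf
hostpci0: 01:00.0,pcie=1            # PCI-Express checked
hostpci0: 01:00,pcie=1              # All Functions checked (no .0 suffix)
hostpci0: 01:00.0,pcie=1,rombar=0   # ROM-Bar unchecked
```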
Searching for the log above online led me to believe at first that my SSD was bad, but as I later found out, the crash is definitely tied to passing the HBA to the VM.
My mainboard has two x16 PCIe slots (I don't know whether they both get full bandwidth, but both are full-length, so I could fit two full-size GPUs). Online, I've read about IOMMU groups and that my mainboard, or the B550 chipset in general, has pretty bad groups. As far as I understand, my GPU and HBA must be in different groups so that I can pass each into its own VM.
I've searched everywhere online for some sort of solution, but I just can't find anything. Is it really my mainboard's IOMMU groups? I barely know what that even means, and I really don't want to buy another mainboard.
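In case it helps anyone reproduce my checks: the kernel exposes IOMMU groups in sysfs under /sys/kernel/iommu_groups/<group>/devices/<pci-address>. A small sketch that lists them (the helper name and the fake test tree are just for illustration; on the real host you'd call it with no argument):

```shell
#!/bin/sh
# List IOMMU groups as exposed under
# /sys/kernel/iommu_groups/<group>/devices/<pci-address>.
# The optional directory argument lets the sketch run against a test tree.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for devlink in "$base"/*/devices/*; do
        [ -e "$devlink" ] || continue
        group=$(basename "$(dirname "$(dirname "$devlink")")")
        printf 'IOMMU group %s: %s\n' "$group" "$(basename "$devlink")"
    done
}

# Demo against a fake tree so the sketch runs anywhere:
mkdir -p /tmp/fake_iommu/13/devices/0000:01:00.0
list_iommu_groups /tmp/fake_iommu   # -> IOMMU group 13: 0000:01:00.0
```

From what I've read, all devices in one group have to be passed through together, so if the HBA's slot shares a group with, say, the chipset's own SATA controller that my boot SSD hangs off, passing the HBA through could take the host's disk with it, which would fit the filesystem errors I'm seeing.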
Thanks for any input.