HDDs not showing up in TrueNAS despite working LSI HBA passthrough

ze_kniv

New Member
May 12, 2025
I recently got a Dell PowerEdge R420 to turn into my own homelab. The first thing I chose to set up is TrueNAS. I did everything as I should:
- I enabled IOMMU
- I flashed my Dell PERC H310 Mini to IT mode
- I set up my VM so that it is properly suited for PCI passthrough (and blacklisted the host driver so that the card binds to vfio)
- and last, but not least, I passed the card through to the VM via PCI passthrough (a rough sketch of the relevant config is below)
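
For reference, this is roughly what the passthrough-related host config looks like. The 0000:01:00.0 address and the 1000:0072 device ID match the vfio logs further down; the exact file names and the mpt3sas driver name are my assumption/best recollection, so treat this as a sketch rather than an exact copy:

Code:
# /etc/default/grub -- IOMMU enabled on the host kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf -- hand the H310 (SAS2008, 1000:0072) to vfio-pci
options vfio-pci ids=1000:0072

# /etc/modprobe.d/blacklist.conf -- keep the host SAS driver off the card
# (mpt3sas is my assumption for the driver name on the 6.8 kernel)
blacklist mpt3sas

# /etc/pve/qemu-server/<vmid>.conf -- the passthrough entry for the TrueNAS VM
hostpci0: 0000:01:00.0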

However, the disks are simply not showing up in TrueNAS, while they do appear in Proxmox.

This is the output of the sas2flash -list command in the TrueNAS shell:

Code:
Adapter Selected is a LSI SAS: SAS2008(B2)

Controller Number : 0
Controller : SAS2008(B2)
PCI Address : 00:01:00:00
SAS Address : 5d4ae52-0-af14-b700
NVDATA Version (Default) : 14.01.00.08
NVDATA Version (Persistent) : 14.01.00.08
Firmware Product ID : 0x2213 (IT)
Firmware Version : 20.00.07.00
NVDATA Vendor : LSI
NVDATA Product ID : SAS9211-8i
BIOS Version : N/A
UEFI BSD Version : N/A
FCODE Version : N/A
Board Name : SAS9211-8i
Board Assembly : N/A
Board Tracer Number : N/A

Finished Processing Commands Successfully.
Exiting SAS2Flash.

This indicates the HBA was flashed properly.

There are a few indicators, however, that something might not be quite right.

When I run this command in the TrueNAS shell:

Code:
dmesg | grep -i "hba\|sata\|sas\|disk"

I get the line

SATA link down (SStatus 0 SControl 300)

five times in the output (once for each of the four HDDs and once for an SSD that I plugged in in place of the CD reader).

The output from dmesg | grep -i vfio in PVE is as follows:

Code:
[ 13.636840] VFIO - User Level meta-driver version: 0.3
[ 13.662726] vfio_pci: add [1000:0072[ffffffff:ffffffff]] class 0x000000/00000000
[ 43.776988] vfio-pci 0000:01:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff

I do not know what causes that last line to show up, but journalctl -xe | grep -i vfio gives similar output:

Code:
May 11 00:44:55 hogh kernel: VFIO - User Level meta-driver version: 0.3
May 11 00:44:55 hogh kernel: vfio_pci: add [1000:0072[ffffffff:ffffffff]] class 0x000000/00000000
May 11 00:44:54 hogh systemd-modules-load[577]: Inserted module 'vfio'
May 11 00:44:54 hogh systemd-modules-load[577]: Inserted module 'vfio_pci'
May 11 00:44:54 hogh systemd-modules-load[577]: Failed to find module 'vfio_virqfd'
May 11 00:45:25 hogh QEMU[1793]: kvm: vfio-pci: Cannot read device rom at 0000:01:00.0
May 11 00:45:25 hogh kernel: vfio-pci 0000:01:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff
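
(For what it is worth, the "Failed to find module 'vfio_virqfd'" line seems to come from /etc/modules still listing that module; from what I have read, vfio_virqfd has been merged into vfio itself on newer kernels, so I assume the entry is just a harmless leftover. My /etc/modules looks roughly like this:)

Code:
# /etc/modules -- modules loaded at boot for passthrough
vfio
vfio_iommu_type1
vfio_pci
# the next line is presumably what triggers the "Failed to find module" message,
# since vfio_virqfd is built into vfio on the 6.8 kernel
vfio_virqfd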

And then there is also the output of update-initramfs -u -k all:

Code:
update-initramfs: Generating /boot/initrd.img-6.8.12-10-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
Couldn't find EFI system partition. It is recommended to mount it to /boot or /efi.
Alternatively, use --esp-path= to specify path to mount point.
update-initramfs: Generating /boot/initrd.img-6.8.12-9-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
Couldn't find EFI system partition. It is recommended to mount it to /boot or /efi.
Alternatively, use --esp-path= to specify path to mount point.

I tried to work around this by mounting the ESP, running the command again, and then unmounting it, but then I would not get any output at all.
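
Roughly what I did there (the commands are from memory; the ESP here is the same /dev/sde2 that shows up in my follow-up below):

Code:
# mount the ESP where the message suggests, regenerate the initramfs, unmount again
mkdir -p /efi
mount /dev/sde2 /efi
update-initramfs -u -k all
umount /efi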

So that's where I am stuck so far. I am out of ideas and would deeply appreciate some help with this.
 
Alright, so what I managed to figure out is that a possible cause might be that Proxmox is not running in EFI mode. However, now I am stuck on the proxmox-boot-tool init /dev/my-efi-partition command:

Code:
root@hogh:~# proxmox-boot-tool init /dev/sde2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="85B1-47E6" SIZE="1073741824" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sde" MOUNTPOINT=""
Mounting '/dev/sde2' on '/var/tmp/espmounts/85B1-47E6'.
Installing grub x86_64 target..
Installing for x86_64-efi platform.
Installation finished. No error reported.
Installing grub x86_64 target (removable)..
Installing for x86_64-efi platform.
Installation finished. No error reported.
Unmounting '/dev/sde2'.
Adding '/dev/sde2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
No root= parameter in /etc/kernel/cmdline found!

I added a root=UUID=efi_partition_uuid line to the file, but the error still persists. Can someone help me figure out how to solve this?
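
For reference, this is my understanding of what /etc/kernel/cmdline should contain: a single line holding the full kernel command line, including a root= parameter that points at the root filesystem (the UUID below is only a placeholder, not my actual one):

Code:
root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ro quiet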