Can't PCI passthrough SAS 9600-24i fully?

dantonbryans

New Member
Nov 25, 2023
Xpost from Reddit (https://www.reddit.com/r/Proxmox/comments/183rnv3/cant_pci_passthrough_sas_960024i_fully/)

I'm installing a Broadcom 9600-24i HBA and having trouble getting the attached drives recognized in Linux guests (but not Windows ones). So far I've tracked down that when I run the card bare metal, both UnRAID and an Ubuntu 22.04 Live USB can see the card and the connected drives. But when I virtualize and pass the card through, the mpi3mr driver fails to load properly and the disks never appear. This happened on my original hypervisor (XCP-ng); I decided to finally try switching over to Proxmox, and it's happening here too.
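In case it matters, IOMMU grouping on the host can be sanity-checked with a quick loop like this (a sketch; 05:00.0 matches the lspci output below, exact group numbers vary by board, and ideally the HBA sits in a group by itself):

Code:
#!/bin/bash
# List every PCI device per IOMMU group on the host.
# The 9600-24i should appear alone in its group (group 14 in my case,
# per the lspci output below).
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done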

E.g. bare-metal Ubuntu 22.04 Live USB, UnRAID, and PVE 8.1.3 'lspci -v' output:

Code:
05:00.0 RAID bus controller: Broadcom / LSI Fusion-MPT 24GSAS/PCIe SAS40xx (rev 01)
        Subsystem: Broadcom / LSI eHBA 9600-24i Tri-Mode Storage Adapter
        Flags: bus master, fast devsel, latency 0, IOMMU group 14
        Memory at f0000000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at f7c00000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
        Capabilities: [48] MSI: Enable- Count=1/32 Maskable+ 64bit+
        Capabilities: [68] Express Endpoint, MSI 00
        Capabilities: [a4] MSI-X: Enable+ Count=128 Masked-
        Capabilities: [b0] Vital Product Data
        Capabilities: [100] Device Serial Number 00-80-5e-2a-a9-a8-85-18
        Capabilities: [fb4] Advanced Error Reporting
        Capabilities: [138] Power Budgeting <?>
        Capabilities: [db4] Secondary PCI Express
        Capabilities: [af4] Data Link Feature <?>
        Capabilities: [d00] Physical Layer 16.0 GT/s <?>
        Capabilities: [d40] Lane Margining at the Receiver <?>
        Capabilities: [160] Dynamic Power Allocation <?>
        Kernel driver in use: mpi3mr
        Kernel modules: mpi3mr

Bare metal 'fdisk -l' output:

Code:
...
Disk /dev/sdc: 20.01 TiB, 22000969973760 bytes, 42970644480 sectors
Disk model: ST22000NM001E-3H
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
...

A Tiny11 (Windows 11) VM in Proxmox automatically detects the storage controller in Device Manager and sees all the drives in Disk Management. I didn't even have to pull in drivers or anything.

But when I pass the card through as a raw device to a new Ubuntu 22.04.3 VM on Proxmox 8.1.3, the 'lspci -v' output is:

Code:
00:10.0 RAID bus controller: Broadcom / LSI Fusion-MPT 24GSAS/PCIe SAS40xx (rev 01)
        Subsystem: Broadcom / LSI eHBA 9600-24i Tri-Mode Storage Adapter
        Physical Slot: 16
        Flags: fast devsel
        Memory at fd600000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at fea00000 [disabled] [size=512K]
        Capabilities: <access denied>
        Kernel driver in use: mpi3mr
        Kernel modules: mpi3mr
(Note: I think the <access denied> is just because I forgot to run it with sudo. I can regrab it if needed, but I believe the rest is the same.)
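For reference, raw passthrough in Proxmox boils down to a hostpci line in the VM config. A minimal sketch of what that looks like (assuming the q35 machine type; <vmid> is a placeholder, and the exact flags depend on the GUI toggles I go through below):

Code:
# /etc/pve/qemu-server/<vmid>.conf (sketch, not my literal config)
machine: q35
# raw passthrough of the HBA; pcie=1 only works with the q35 machine type
hostpci0: 0000:05:00.0,pcie=1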

VM 'dmesg | grep mpi' output:

Code:
[    1.047938] Loading mpi3mr version 8.0.0.69.0
[    1.048344] mpi3mr 0000:00:10.0: osintfc_mrioc_security_status: PCI_EXT_CAP_ID_DSN is not supported
[    1.050331] mpi3mr 0000:00:10.0: Driver probe function unexpectedly returned 1

And 'fdisk -l' only shows the QEMU drive and the loopbacks.

I thought maybe it needed "Full capabilities" checked in the passthrough settings; with that box ticked, 'lspci -v' in the VM shows:

Code:
00:10.0 RAID bus controller: Broadcom / LSI Fusion-MPT 24GSAS/PCIe SAS40xx (rev 01)
        Subsystem: Broadcom / LSI eHBA 9600-24i Tri-Mode Storage Adapter
        Physical Slot: 16
        Flags: fast devsel
        Memory at fd600000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at fea00000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
        Capabilities: [48] MSI: Enable- Count=1/32 Maskable+ 64bit+
        Capabilities: [68] Express Endpoint, MSI 00
        Capabilities: [a4] MSI-X: Enable- Count=128 Masked-
        Capabilities: [b0] Vital Product Data
        Kernel driver in use: mpi3mr
        Kernel modules: mpi3mr

With "Full capabilities" enabled, the 'dmesg | grep mpi' output is:

Code:
[    0.996084] Loading mpi3mr version 8.0.0.69.0
[    0.996807] mpi3mr 0000:00:10.0: osintfc_mrioc_security_status: PCI_EXT_CAP_ID_DSN is not supported
[    0.998141] mpi3mr 0000:00:10.0: Driver probe function unexpectedly returned 1

And 'fdisk -l' again only shows the QEMU drive and the loopbacks. One thing I do notice: compared with the bare-metal lspci, the VM's capability list stops at [b0], and the [100] Device Serial Number extended capability never appears, which looks like exactly what the osintfc_mrioc_security_status message is complaining about.
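To double-check that from inside the guest, grepping the capability list makes the difference obvious (needs sudo, otherwise lspci prints the <access denied> from earlier):

Code:
# inside the Ubuntu VM: list every capability the emulated slot exposes
sudo lspci -vvv -s 00:10.0 | grep -i 'capabilities'
# bare metal includes 'Capabilities: [100] Device Serial Number ...';
# in the VM, nothing in extended config space (offsets 0x100+) shows up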

I've tried with ROM-Bar on and off. GUI pics: https://imgur.com/a/mcMqyo8

I followed a couple of other LSI passthrough threads on here and Reddit, and tried adding this to /etc/modprobe.d/passthrough.conf:

Code:
blacklist mpi3mr
options vfio-pci ids=1000:00a5
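After a reboot, whether the override actually took can be checked on the host with:

Code:
# on the PVE host: see which kernel driver claimed the HBA
lspci -nnk -s 05:00.0
# hoping for 'Kernel driver in use: vfio-pci' rather than mpi3mr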

I've also tried adding this to /etc/modprobe.d/pve-blacklist.conf:

Code:
blacklist mpi3mr

And, in the same /etc/modprobe.d/pve-blacklist.conf, I've also tried:

Code:
softdep mpi3mr pre: vfio-pci

I ran 'update-initramfs -u -k all' and rebooted after each change (full sequence below), but no dice with any of them.
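(For completeness, the sequence after each change, plus a quick check that the blacklist actually held:)

Code:
# rebuild the initramfs so the modprobe.d changes apply early in boot
update-initramfs -u -k all
reboot
# after the reboot: the host should no longer have mpi3mr loaded
lsmod | grep mpi3mr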

Any help or thoughts would be greatly appreciated! Am I not passing something through correctly for the Linux kernel when virtualizing?

I like Proxmox, and I'd probably switch over to it from XCP-ng. But otherwise, I might just have to go with UnRAID or Windows, which would be less than great; there's only so much head bashing I can do.
 
