Hi,
I've recently upgraded from a 9300-8i to a 9600-24i, running Proxmox VE 9.1.1, kernel 6.17.2-1-pve.
First, the good news: the card works without any issues and fully supports ASPM. I can even reach package C10 without any issues (since this is a topic a lot of folks seem to be interested in).
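For anyone who wants to reproduce that check, a minimal way to look at the HBA's ASPM state and the package C-state residency (assuming the 05:00.0 address from the lspci output further down) would be:
Code:
# ASPM capability and current link state of the HBA
lspci -vv -s 05:00.0 | grep -i aspm
# kernel-wide ASPM policy
cat /sys/module/pcie_aspm/parameters/policy
# package C-state residency is visible in powertop's "Idle stats" tab
powertop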
For experimental purposes, I installed TrueNAS SCALE in a VM and passed the HBA through to it. This also works without any issues: TrueNAS correctly sees all drives attached to the HBA and imports my zpool.
However, I've read a lot about doing this, and there seems to be *a lot* involved (or at least there used to be): isolating IOMMU groups, modifying the GRUB command line, etc.
I had to do none of that, and I am wondering whether I missed something and built myself a ticking time bomb.
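For reference, the IOMMU grouping can be inspected without any extra tooling; a minimal sketch that lists every group and the devices it contains (so you can see whether the HBA and the SATA controller each sit in their own group):
Code:
#!/bin/bash
# print each IOMMU group and the PCI devices that belong to it
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        lspci -nns "${dev##*/}"
    done
done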
Here is my VM config:
Code:
root@proxmox:~# cat /etc/pve/qemu-server/101.conf
boot: order=scsi0;net0
cores: 8
cpu: host
hostpci0: 0000:05:00,pcie=1
hostpci1: 0000:02:00,pcie=1
machine: q35
memory: 32768
meta: creation-qemu=10.1.2,ctime=1766411675
name: truenas
net0: virtio=BC:24:11:7E:74:07,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: Application-Storage-001:vm-101-disk-0,iothread=1,size=20G
scsihw: virtio-scsi-single
smbios1: uuid=7dbb783a-bc1a-4928-bdfe-2e09cb612bce
sockets: 1
startup: order=0
vmgenid: d1cfccdc-d312-49e0-8d91-ac8cca5052f9
The two important lines:
Code:
hostpci0: 0000:05:00,pcie=1
hostpci1: 0000:02:00,pcie=1
These are the HBA and a generic ASMedia SATA controller that has another pool with 3 SSDs attached (which also imports correctly in TrueNAS).
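To double-check which devices those addresses actually belong to (and to grab their vendor:device IDs), something along these lines should do:
Code:
lspci -nn -s 05:00.0   # the Broadcom/LSI HBA, with its [vendor:device] ID
lspci -nn -s 02:00.0   # the ASMedia SATA controller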
What I did in addition was blacklist the mpi3mr driver (the 9600 series uses mpi3mr instead of the older mpt3sas):
Code:
root@proxmox:~# cat /etc/modprobe.d/blacklist.conf
blacklist mpi3mr
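From what I've read, a blacklist in /etc/modprobe.d only takes full effect once the initramfs has been refreshed, and the more explicit route would be to hand the devices to vfio-pci by ID. A sketch of what that could look like (the vvvv:dddd pairs are placeholders for the IDs from the lspci -nn output above):
Code:
# make sure the modprobe.d changes are picked up by the initramfs
update-initramfs -u -k all

# optional, more explicit alternative to blacklisting:
# bind both devices to vfio-pci by vendor:device ID
echo "options vfio-pci ids=vvvv:dddd,vvvv:dddd" > /etc/modprobe.d/vfio.conf
echo "softdep mpi3mr pre: vfio-pci" >> /etc/modprobe.d/vfio.conf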
However, I was under the impression that this would mean Proxmox doesn't even initialize the HBA on the host and passes it directly to the VM. While lsblk does not report any of the disks attached to either the HBA or the ASMedia controller, lspci still fully lists both of them (for brevity, I only show the HBA here):
Code:
05:00.0 RAID bus controller: Broadcom / LSI Fusion-MPT 24GSAS/PCIe SAS40xx/41xx (rev 01)
Subsystem: Broadcom / LSI eHBA 9600-24i Tri-Mode Storage Adapter
Flags: bus master, fast devsel, latency 0, IOMMU group 17
Memory at 60e1400000 (64-bit, prefetchable) [size=16K]
Expansion ROM at 81c00000 [disabled] [size=512K]
Capabilities: [40] Power Management version 3
Capabilities: [48] MSI: Enable- Count=1/32 Maskable+ 64bit+
Capabilities: [68] Express Endpoint, IntMsgNum 0
Capabilities: [a4] MSI-X: Enable+ Count=128 Masked-
Capabilities: [b0] Vital Product Data
Capabilities: [100] Device Serial Number 00-80-5e-9a-fc-41-e5-18
Capabilities: [fb4] Advanced Error Reporting
Capabilities: [138] Power Budgeting <?>
Capabilities: [db4] Secondary PCI Express
Capabilities: [af4] Data Link Feature <?>
Capabilities: [d00] Physical Layer 16.0 GT/s <?>
Capabilities: [d40] Lane Margining at the Receiver
Capabilities: [160] Dynamic Power Allocation <?>
Kernel driver in use: vfio-pci
Kernel modules: mpi3mr
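The driver binding can also be checked directly in sysfs; while the VM is running, the driver symlink for both passed-through devices should point at vfio-pci (addresses assumed from the hostpci lines above):
Code:
readlink /sys/bus/pci/devices/0000:05:00.0/driver
readlink /sys/bus/pci/devices/0000:02:00.0/driver
# both should resolve to .../drivers/vfio-pci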
Is this a "safe" setup? Or did I miss something here?