LSI SAS2008 passthrough issues.

Manichee

New Member
Aug 5, 2023
Originally I had the problem in this thread with my ixgbe not loading on kernel 6.2, so I reverted to kernel 6.1.10-1-pve. At that point the 10GbE card started working, and I proceeded to install a VM with XigmaNAS and enabled passthrough for my LSI SAS2008 card. Everything worked great. Today I tried to update to the latest kernel, 6.2.16-6-pve. After rebooting, everything seemed to work fine (the ixgbe card loads now) until it went to start the XigmaNAS VM. In the VM console the boot hangs while loading the card, as shown in the attached screenshot. In the logs for pve I have this repeated endlessly until I stop the VM:
Code:
Aug 05 16:49:03 pve QEMU[19027]: kvm: vfio_region_read(0000:01:00.0:region1+0x8, 4) failed: Device or resource busy
Aug 05 16:49:03 pve kernel: vfio-pci 0000:01:00.0: BAR 1: can't reserve [mem 0x804c0000-0x804c3fff 64bit]

When I go back to kernel 6.1.10-1-pve, everything works again like it should. I have tried numerous "fixes" listed on the forum and in Google searches, but most pertain to passing through a graphics card, which I am not trying to do. I use IPMI to watch the server during reboot, and while the board has an onboard Matrox VGA output, it is not plugged into anything.

Motherboard: Supermicro X9SRL-F


*EDIT* I do have this in my /etc/kernel/cmdline:
Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt textonly mpt3sas.max_queue_depth=10000
 

Attachments

  • pve-Proxmox-Virtual-Environment.png (178.8 KB)
Aug 05 16:49:03 pve kernel: vfio-pci 0000:01:00.0: BAR 1: can't reserve [mem 0x804c0000-0x804c3fff 64bit]
This error has popped up before with GPU passthrough. Maybe you can try dropping the device from the PCIe bus and re-scanning as described here? Or just make sure nothing (not even the motherboard BIOS) touches the device by early binding to vfio-pci (possibly with a softdep to make sure vfio-pci loads before the actual driver)?
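Early binding usually looks something along these lines; this is only a sketch (the file name /etc/modprobe.d/vfio.conf is just an example, and the ids= value has to match your card's vendor:device id as reported by lspci -nn):
Code:
# /etc/modprobe.d/vfio.conf (example file name)
# claim the HBA for vfio-pci at boot; 1000:0072 is the usual id for an LSI SAS2008
options vfio-pci ids=1000:0072
# make sure vfio-pci is loaded before the SAS driver can grab the device
softdep mpt3sas pre: vfio-pci
softdep mpt2sas pre: vfio-pci

After that, rebuild the initramfs with update-initramfs -u -k all and reboot.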
EDIT: Note that kernel 6.1 is no longer receiving updates and is vulnerable (if not now, then soon)...
 
When on kernel
Code:
root@pve:~# uname -a
Linux pve 6.2.16-6-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-7 (2023-08-01T11:23Z) x86_64 GNU/Linux

I get the following on bootup:
Code:
root@pve:~# dmesg | grep mpt
[    2.435695] mpt3sas version 43.100.00.00 loaded
[    2.436044] mpt3sas 0000:01:00.0: BAR 1: can't reserve [mem 0x804c0000-0x804c3fff 64bit]
[    2.436053] mpt2sas_cm0: pci_request_selected_regions: failed
[    2.436098] mpt2sas_cm0: failure at drivers/scsi/mpt3sas/mpt3sas_scsih.c:12348/_scsih_probe()!

When on kernel
Code:
root@pve:~# uname -a
Linux pve 6.1.10-1-pve #1 SMP PREEMPT_DYNAMIC PVE 6.1.10-1 (2023-02-07T13:10Z) x86_64 GNU/Linux

On bootup:
Code:
root@pve:~# dmesg | grep mpt
[    2.208393] mpt3sas version 43.100.00.00 loaded
[    2.208743] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (131869708 kB)
[    2.263509] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.263522] mpt2sas_cm0: MSI-X vectors supported: 1
[    2.263528] mpt2sas_cm0:  0 1 1
[    2.263592] mpt2sas_cm0: High IOPs queues : disabled
[    2.263596] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 70
[    2.263598] mpt2sas_cm0: iomem(0x00000000904c0000), mapped(0x00000000c1160d3d), size(16384)
[    2.263604] mpt2sas_cm0: ioport(0x000000000000e000), size(256)
[    2.316259] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.344135] mpt2sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(9), sge_per_io(128), chains_per_io(15)
[    2.344223] mpt2sas_cm0: request pool(0x00000000620a5ca5) - dma(0x11df80000): depth(3492), frame_size(128), pool_size(436 kB)
[    2.356742] mpt2sas_cm0: sense pool(0x0000000026e49ef3) - dma(0x11e700000): depth(3367), element_size(96), pool_size (315 kB)
[    2.356750] mpt2sas_cm0: sense pool(0x0000000026e49ef3)- dma(0x11e700000): depth(3367),element_size(96), pool_size(0 kB)
[    2.356836] mpt2sas_cm0: reply pool(0x000000002a35c086) - dma(0x11e780000): depth(3556), frame_size(128), pool_size(444 kB)
[    2.356844] mpt2sas_cm0: config page(0x000000009354ea89) - dma(0x11e690000): size(512)
[    2.356848] mpt2sas_cm0: Allocated physical memory: size(7579 kB)
[    2.356850] mpt2sas_cm0: Current Controller Queue Depth(3364),Max Controller Queue Depth(3432)
[    2.356854] mpt2sas_cm0: Scatter Gather Elements per IO(128)
[    2.401184] mpt2sas_cm0: overriding NVDATA EEDPTagMode setting
[    2.401539] mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00), ChipRevision(0x02), BiosVersion(07.27.01.01)
[    2.401545] mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    2.402366] mpt2sas_cm0: sending port enable !!
[    3.935737] mpt2sas_cm0: hba_port entry: 000000003d868574, port: 255 is added to hba_port list
[    3.938588] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x500605b0024df810), phys(8)
[    3.940256] mpt2sas_cm0: expander_add: handle(0x0009), parent(0x0001), sas_addr(0x5005076028d1e740), phys(39)
[    3.949147] mpt2sas_cm0: handle(0xa) sas_address(0x5005076028d1e74f) port_type(0x1)
[    3.949366] mpt2sas_cm0: handle(0xb) sas_address(0x5005076028d1e750) port_type(0x1)
[    3.949583] mpt2sas_cm0: handle(0xc) sas_address(0x5005076028d1e751) port_type(0x1)
[    3.949800] mpt2sas_cm0: handle(0xd) sas_address(0x5005076028d1e752) port_type(0x1)
[    3.950017] mpt2sas_cm0: handle(0xe) sas_address(0x5005076028d1e753) port_type(0x1)
[    3.950233] mpt2sas_cm0: handle(0xf) sas_address(0x5005076028d1e754) port_type(0x1)
[    3.950450] mpt2sas_cm0: handle(0x10) sas_address(0x5005076028d1e755) port_type(0x1)
[    3.950666] mpt2sas_cm0: handle(0x11) sas_address(0x5005076028d1e756) port_type(0x1)
[    3.951246] mpt2sas_cm0: handle(0x12) sas_address(0x5005076028d1e757) port_type(0x1)
[    3.951463] mpt2sas_cm0: handle(0x13) sas_address(0x5005076028d1e758) port_type(0x1)
[    3.951679] mpt2sas_cm0: handle(0x14) sas_address(0x5005076028d1e767) port_type(0x1)
[    9.686796] mpt2sas_cm0: port enable: SUCCESS
 
This error has popped up before with GPU passthrough. Maybe you can try dropping the device from the PCIe bus and re-scanning as described here? Or just make sure nothing (not even the motherboard BIOS) touches the device by early binding to vfio-pci (possibly with a softdep to make sure vfio-pci loads before the actual driver)?
I did actually add mpt2sas and mpt3sas to pve-blacklist.conf, then ran update-initramfs -u -k all and rebooted. Now the card no longer shows up in dmesg on bootup, but I still get the same error. I will try dropping and rescanning now.
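For reference, the blacklist entries I mean are just these (Proxmox keeps the file at /etc/modprobe.d/pve-blacklist.conf; shown here only as an illustration of what I added):
Code:
# /etc/modprobe.d/pve-blacklist.conf -- keep the host from loading the SAS driver
blacklist mpt3sas
blacklist mpt2sas

followed by update-initramfs -u -k all and a reboot so the change takes effect in the initramfs.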
 
Code:
root@pve:~# lspci -nn | grep -i sas
01:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 02)

So I did:
Code:
root@pve:~# echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
root@pve:~# echo 1 > /sys/bus/pci/rescan
I then tried to start the VM and got the same error.
This is what I see in the log when I run those commands:

Code:
Aug 05 17:21:19 pve kernel: pci 0000:01:00.0: Removing from iommu group 27
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: [1000:0072] type 00 class 0x010700
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: reg 0x10: [io  0xe000-0xe0ff]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: reg 0x14: [mem 0x804c0000-0x804c3fff 64bit]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: reg 0x1c: [mem 0x80080000-0x800bffff 64bit]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfbe00000-0xfbe7ffff pref]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: supports D1 D2
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: reg 0x174: [mem 0x804c4000-0x804c7fff 64bit]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x804c4000-0x80503fff 64bit] (contains BAR0 for 16 VFs)
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: reg 0x17c: [mem 0x800c0000-0x800fffff 64bit]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: VF(n) BAR2 space: [mem 0x800c0000-0x804bffff 64bit] (contains BAR2 for 16 VFs)
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: Adding to iommu group 27
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x80000000-0x8007ffff pref]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: BAR 3: assigned [mem 0x80080000-0x800bffff 64bit]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: BAR 9: assigned [mem 0x800c0000-0x804bffff 64bit]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x804c0000-0x804c3fff 64bit]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: BAR 7: assigned [mem 0x804c4000-0x80503fff 64bit]
Aug 05 17:21:21 pve kernel: pci 0000:01:00.0: BAR 0: assigned [io  0xe000-0xe0ff]
Aug 05 17:21:21 pve kernel: mpt3sas 0000:01:00.0: BAR 1: can't reserve [mem 0x804c0000-0x804c3fff 64bit]
Aug 05 17:21:21 pve kernel: mpt2sas_cm2: pci_request_selected_regions: failed
Aug 05 17:21:21 pve kernel: mpt2sas_cm2: failure at drivers/scsi/mpt3sas/mpt3sas_scsih.c:12348/_scsih_probe()!
 
I seem to have it working now after stumbling across this thread. I was pretty sure I had tried pci=realloc=off before, but being quite new to Proxmox I may have been using the incorrect boot refresh command. I was using proxmox-boot-tool refresh but saw a thread somewhere that mentioned using pve-efiboot-tool refresh. Not sure if that made the difference or not.
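For anyone who lands here with the same problem, this is roughly what the working setup looks like, assuming a ZFS/systemd-boot install where the command line lives in /etc/kernel/cmdline (the trailing pci=realloc=off is the change that mattered; the rest is my existing line from the first post):
Code:
root@pve:~# cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt textonly mpt3sas.max_queue_depth=10000 pci=realloc=off
root@pve:~# proxmox-boot-tool refresh

and then a reboot.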
 
I seem to have it working now after stumbling across this thread. I was pretty sure I had tried pci=realloc=off before, but being quite new to Proxmox I may have been using the incorrect boot refresh command.
Glad to see you got it fixed.
I was using proxmox-boot-tool refresh but saw a thread somewhere that mentioned using pve-efiboot-tool refresh. Not sure if that made the difference or not.
pve-efiboot-tool is just the old name of proxmox-boot-tool. Also note that Proxmox can use two different bootloaders. You can always check the active kernel parameters with cat /proc/cmdline.
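For example (commands only; the output depends on your setup):
Code:
# show the parameters the running kernel actually booted with
root@pve:~# cat /proc/cmdline
# show which ESPs/bootloaders proxmox-boot-tool is managing on this system
root@pve:~# proxmox-boot-tool status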
 
