[SOLVED] PCI passthrough of LSI HBA not working properly

jht3

New Member
Nov 14, 2015
I don't believe I'm the only one having this issue, but tracking it down to a specific cause is eluding me, and I don't know whether the dev team is aware of it.

I have an HP Z820 with an onboard LSI SAS/SATA controller that I have flashed to the P19 LSI firmware. I'm trying to pass it through to a Linux VM, specifically Debian Jessie, though I've also tried Fedora 23 Server. When I do this under Proxmox, following the wiki, the controller passes through but the filesystems on my disks behave strangely. I can see the card on the Proxmox host switch from the mpt2sas driver to vfio-pci, and the card and drives appear in the guest VM. Sometimes the disks mount in the guest, but existing plaintext files get reported as binary and full of gibberish. Other times the disks refuse to mount, reporting superblock errors and other oddities. I can mount the disks just fine on the Proxmox host itself, so I know the data is good.
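For context, the host-side prep followed the wiki's usual steps; this is a rough sketch from memory, so the exact lines may differ slightly on your system:

Code:
# /etc/default/grub -- enable the IOMMU on an Intel board, then run update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules -- load the vfio modules at boot
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

A reboot is needed after both changes before the "DMAR: IOMMU enabled" line shows up in dmesg.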

I know this is not a hardware problem because I can pass the controller and disks through perfectly under both XenServer 6.5 and ESXi 6.0 with no corruption of the data, so something is amiss with Proxmox 4.0. I'm inclined to stay with XenServer, but the allure of Proxmox 4.0 with LXC container support and native HTML5 web management is strong.

EDIT: Marking this as solved since it is now working. This must be my 3rd or 4th reinstall from base Jessie. I'm not sure what allowed the HBA passthrough to work properly this time, but it is working perfectly now.
 
Here is a bunch of output from the PVE host.

# dmesg | grep -e DMAR -e IOMMU
Code:
[    0.000000] ACPI: DMAR 0x00000000DB816FD0 000128 (v01 A M I  OEMDMAR  00000001 INTL 00000001)
[    0.000000] DMAR: IOMMU enabled
[    0.101659] DMAR: Host address width 46
[    0.101662] DMAR: DRHD base: 0x000000fbf20000 flags: 0x0
[    0.101670] DMAR: dmar0: reg_base_addr fbf20000 ver 1:0 cap d2078c106f0466 ecap f020df
[    0.101672] DMAR: DRHD base: 0x000000ef644000 flags: 0x1
[    0.101677] DMAR: dmar1: reg_base_addr ef644000 ver 1:0 cap d2078c106f0466 ecap f020df
[    0.101678] DMAR: RMRR base: 0x000000db6cf000 end: 0x000000db6fdfff
[    0.101680] DMAR: ATSR flags: 0x0
[    0.101682] DMAR-IR: IOAPIC id 3 under DRHD base  0xfbf20000 IOMMU 0
[    0.101684] DMAR-IR: IOAPIC id 0 under DRHD base  0xef644000 IOMMU 1
[    0.101685] DMAR-IR: IOAPIC id 2 under DRHD base  0xef644000 IOMMU 1
[    0.101686] DMAR-IR: HPET id 0 under DRHD base 0xef644000
[    0.101688] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.102615] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    1.162451] DMAR: [Firmware Bug]: RMRR entry for device 08:00.0 is broken - applying workaround
[    1.162844] DMAR: dmar0: Using Queued invalidation
[    1.163065] DMAR: dmar1: Using Queued invalidation
[    1.163212] DMAR: Setting RMRR:
[    1.163233] DMAR: Setting identity map for device 0000:00:1a.0 [0xdb6cf000 - 0xdb6fdfff]
[    1.163257] DMAR: Setting identity map for device 0000:00:1d.0 [0xdb6cf000 - 0xdb6fdfff]
[    1.163276] DMAR: Setting identity map for device 0000:08:00.0 [0xdb6cf000 - 0xdb6fdfff]
[    1.163290] DMAR: Prepare 0-16MiB unity mapping for LPC
[    1.163305] DMAR: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[    1.163313] DMAR: Intel(R) Virtualization Technology for Directed I/O
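The "IOMMU enabled" line above only appears once intel_iommu=on has actually made it onto the kernel command line, so a quick sanity check (just a sketch) is:

Code:
# should list intel_iommu=on among the boot parameters
cat /proc/cmdline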

# pveversion -v
Code:
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie

I trimmed the next command's output to just the pertinent group and device.
# find /sys/kernel/iommu_groups/ -type l
Code:
/sys/kernel/iommu_groups/18/devices/0000:02:00.0
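In case it helps anyone reading this, a loop like the one below (just a sketch, not output from my box) prints every device in every IOMMU group alongside its lspci description, which makes it easy to confirm the HBA sits alone in group 18:

Code:
# list each IOMMU group member with a human-readable description
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g="${d#/sys/kernel/iommu_groups/}"; g="${g%%/*}"
    echo "group ${g}: $(lspci -nns "${d##*/}")"
done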

# lspci -vnn
Code:
02:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
	Subsystem: LSI Logic / Symbios Logic Device [1000:3020]
	Flags: bus master, fast devsel, latency 0, IRQ 24
	I/O ports at c000 [size=256]
	Memory at ef240000 (64-bit, non-prefetchable) [size=64K]
	Memory at ef200000 (64-bit, non-prefetchable) [size=256K]
	Expansion ROM at ef100000 [disabled] [size=1M]
	Capabilities: [50] Power Management version 3
	Capabilities: [68] Express Endpoint, MSI 00
	Capabilities: [d0] Vital Product Data
	Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [c0] MSI-X: Enable+ Count=16 Masked-
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [1e0] #19
	Capabilities: [1c0] Power Budgeting <?>
	Capabilities: [190] #16
	Capabilities: [148] Alternative Routing-ID Interpretation (ARI)
	Kernel driver in use: mpt2sas

# cat /etc/pve/qemu-server/101.conf
Code:
bootdisk: virtio0
cores: 1
ide2: local:iso/debian-8.2.0-amd64-CD-1.iso,media=cdrom
memory: 8192
name: nas
net0: virtio=AA:8C:0D:EA:B5:DD,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=bbe6c585-d602-49f1-b9e6-a4db2ab37b7a
sockets: 1
virtio0: local:101/vm-101-disk-1.raw,size=8G
machine: q35
hostpci0: 02:00.0,pcie=1
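Those last two lines can also be set from the CLI instead of editing the config file by hand; something like this should produce the same result (a sketch using the same values as above):

Code:
qm set 101 -machine q35
qm set 101 -hostpci0 02:00.0,pcie=1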

After starting the VM I ran the following.

First, on the PVE host:
# lspci -vnn
Code:
02:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
	Subsystem: LSI Logic / Symbios Logic Device [1000:3020]
	Flags: bus master, fast devsel, latency 0, IRQ 24
	I/O ports at c000 [size=256]
	Memory at ef240000 (64-bit, non-prefetchable) [size=64K]
	Memory at ef200000 (64-bit, non-prefetchable) [size=256K]
	Expansion ROM at ef100000 [disabled] [size=1M]
	Capabilities: [50] Power Management version 3
	Capabilities: [68] Express Endpoint, MSI 00
	Capabilities: [d0] Vital Product Data
	Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [c0] MSI-X: Enable+ Count=16 Masked-
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [1e0] #19
	Capabilities: [1c0] Power Budgeting <?>
	Capabilities: [190] #16
	Capabilities: [148] Alternative Routing-ID Interpretation (ARI)
	Kernel driver in use: vfio-pci

And then on the Debian guest:

# lspci -vnn
Code:
01:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
	Subsystem: LSI Logic / Symbios Logic Device [1000:3020]
	Physical Slot: 0
	Flags: bus master, fast devsel, latency 0, IRQ 16
	I/O ports at 7000 [size=256]
	Memory at fe940000 (64-bit, non-prefetchable) [size=64K]
	Memory at fe900000 (64-bit, non-prefetchable) [size=256K]
	Expansion ROM at fe800000 [disabled] [size=1M]
	Capabilities: [50] Power Management version 3
	Capabilities: [68] Express Endpoint, MSI 00
	Capabilities: [d0] Vital Product Data
	Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
	Capabilities: [c0] MSI-X: Enable+ Count=16 Masked-
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [1c0] Power Budgeting <?>
	Capabilities: [190] #16
	Capabilities: [148] Alternative Routing-ID Interpretation (ARI)
	Kernel driver in use: mpt2sas
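Rather than dumping the full lspci every time, the driver binding can also be checked with a one-liner on each side (a sketch; note the device shows up as 01:00.0 inside the guest):

Code:
# on the pve host -- should point at vfio-pci while the vm is running
readlink /sys/bus/pci/devices/0000:02:00.0/driver

# inside the debian guest -- should point at mpt2sas
readlink /sys/bus/pci/devices/0000:01:00.0/driver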
 
This is all on a fresh install from the official 4.0 CD. FWIW, I am currently not experiencing the problem. I did do an apt-get upgrade using the no-subscription repo.

The previous install was a minimal Jessie install with PVE installed on top via the repo. I prefer that method because it allows custom partitioning and encryption, but for this latest test I decided to try the installer CD. I wish I had captured all the same output before I blew it away.
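In case it is useful, the repo-on-top-of-Jessie route I mentioned boils down to something like this (a rough sketch from memory; check the wiki for the exact key and repo lines):

Code:
# add the pve-no-subscription repository for jessie
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

# install the proxmox ve packages on top of the minimal install
apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve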
 
