PCIe passthrough on an HPE ML350p to a FreeNAS VM

hjboven

Member
Apr 18, 2020
This is my effort for the community: documenting my quest to get FreeNAS running as a VM under Proxmox, with an LSI HBA adapter passed through to FreeNAS, so we get the best of both worlds.

My reasoning for this was that I did not want to give up the comfort and stability of either Proxmox or FreeNAS, but I refuse to run two separate servers when I have a very well equipped HPE ProLiant ML350p Gen8.

Proxmox documentation showed me that the IOMMU has to be activated on the kernel command line. The command line parameters are:
for Intel CPUs: intel_iommu=on
for AMD CPUs: amd_iommu=on

The kernel command line needs to be placed in the variable GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running update-grub appends its content to all Linux entries in /boot/grub/grub.cfg.

Code:
nano /etc/default/grub

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
# 22.5.2020 (HJB) added to enable PCIe passthrough.
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
# Disable os-prober, it might add menu entries for each guest
GRUB_DISABLE_OS_PROBER=true
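Note that GRUB_CMDLINE_LINUX_DEFAULT now appears twice in the file; the second assignment wins, so the "quiet" from the first line is silently dropped. A cleaner variant (my suggestion, not the original file) is a single combined line:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

Either way, run update-grub afterwards so the change lands in /boot/grub/grub.cfg.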

After changing anything module-related, you need to refresh your initramfs. On Proxmox VE this can be done by executing:
update-initramfs -u -k all

Finish Configuration
Finally, reboot to bring the changes into effect and check that IOMMU is indeed enabled.
Code:
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

[    0.007731] ACPI: DMAR 0x00000000BDDAD200 000558 (v01 HP     ProLiant 00000001 \xd2?   0000162E)
¦
¦
[  111.511173] vfio-pci 0000:03:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
[  151.942005] vfio-pci 0000:0d:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
root@pve:~#
The last two lines, vfio-pci 0000:03:00.0 and vfio-pci 0000:0d:00.0, are the onboard P420i and the LSI SAS 9207-8i controller that we would like to pass through. The "Device is ineligible for IOMMU" message actually comes from the BIOS of the HPE server. So it seems Proxmox is ready, but the server still claims these devices for IPMI/iLO.

A hint found online suggested adding a parameter that allows unsafe interrupts, so let's try it out.

Editing /etc/modules this time
Code:
nano /etc/modules

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# 22.5.2020 (HJB) Added "allow_unsafe_interrupts=1" to enable PCIe passthrough
vfio
vfio_iommu_type1.allow_unsafe_interrupts=1
vfio_pci
vfio_virqfd
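Side note: strictly speaking, /etc/modules only lists modules to load, and not every modules-load implementation honours parameters given there. The more conventional place for this option is a modprobe.d file (a sketch; the file name is arbitrary):

Code:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
update-initramfs -u -k all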

After the reboot, the "device is ineligible" line has disappeared when checking with
Code:
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
Yet when we try to pass through the LSI controller, we are still confronted with QEMU aborting the VM start with exit code 1. It did the same with the P420i when we tried that as well. So I removed the allow_unsafe_interrupts=1 entry and rebooted once more.
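While debugging failures like this, it can also help to see how the host groups the devices; a quick way to list every device's IOMMU group via sysfs:

Code:
find /sys/kernel/iommu_groups/ -type l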

A search online for the text "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor" brought us to HPE document ID c04781229, which describes problems occurring when passing through PCIe devices (in the document it is GPUs).

At first I understood just about nothing of the document, but together with the blog post of Jim Denton it started to make sense. (Thank you, Jim, for taking the time to post it.)

After studying the blog post and the HPE document, it became clear to me that PCIe passthrough is not possible for the internal P420i controller, but is very well possible for the physical PCIe slots. As I have an LSI HBA controller as well, this seemed promising.

From here on it went relatively smoothly. Just follow the next steps.

We need the HPE scripting utilities, so we add the HPE repository to either the enterprise or the community sources list.
nano /etc/apt/sources.list.d/pve-enterprise.list
or
nano /etc/apt/sources.list

Add the line: deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free
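Or append it non-interactively to the community list (same line, just via echo):

Code:
echo "deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free" >> /etc/apt/sources.list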

Now we add the HPE public keys and refresh the package index:
Code:
curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
apt update

And we can install the scripting utilities with:
apt install hp-scripting-tools

Now we download the input file for HPE's conrep script. Just to be sure where everything is, I switched to the /home directory.
cd /home
wget -O conrep_rmrds.xml https://downloads.hpe.com/pub/softlib2/software1/pubsw-linux/p1472592088/v95853/conrep_rmrds.xml

We're nearly there now, hang on. Of course we need to know which PCIe slot our controller sits in. We use lspci to list all PCIe devices to a file and nano to scroll through it and find ours.
lspci -vvv &> pcie.list
nano pcie.list
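A quicker alternative to scrolling through the file: grep for the slot lines directly; -B10 prints the ten lines before each match, which include the device names:

Code:
lspci -vvv | grep "Physical Slot" -B10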

In our case we found the following. Our LSI controller, with PCI device ID 0000:0d:00.0 as Proxmox knows it, is in physical slot 4:

0d:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
        Subsystem: LSI Logic / Symbios Logic 9207-8i SAS2.1 HBA
        Physical Slot: 4
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

So we can create an exclude file for that slot:
cd /home
nano exclude.dat

Add the following content and save it.
<Conrep>
  <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
</Conrep>

Now we're ready to rock and roll. Sorry about that; I mean, to run the conrep utility from HPE, which excludes our PCIe slot from the RMRR handling of the iLO/IPMI:
conrep -l -x conrep_rmrds.xml -f exclude.dat

And we verify the result with:
conrep -s -x conrep_rmrds.xml -f verify.dat
nano verify.dat

Now we should see something like this
<?xml version="1.0" encoding="UTF-8"?>
<!--generated by conrep version 5.5.0.0-->
<Conrep version="5.5.0.0" originating_platform="ProLiant ML350p Gen8" originating_family="P72" originating_romdate="05/24/2019" originating_processor_manufacturer="Intel">
  <Section name="RMRDS_Slot1" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot2" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot3" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
  ¦
</Conrep>
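Or check just the relevant line without opening an editor:

Code:
grep RMRDS_Slot4 verify.dat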


Time to reboot the Proxmox server one last time before we can celebrate.

Adding a PCIe Device to the FreeNAS VM
[Screenshots: adding the PCI device to the VM in the Proxmox web UI]
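For reference, the same passthrough can also be configured from the CLI with Proxmox's qm tool (a sketch; 101 stands in for your FreeNAS VM's ID):

Code:
qm set 101 --hostpci0 0000:0d:00.0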
That was all; the VM is now happily starting with the forwarded controller.
Since then I've also used this procedure to forward an Intel Optane 900P PCIe SSD.

Best regards
Henk
 
Thank you for the excellent summary!

I have been on this for a while now. On ESXi I can pass through anything, including the onboard devices on my ML310e, ML350p and MicroServer Gen8. I wonder if there is some way to use the HP tool to change the onboard devices.
 

Have you been able to get passthrough working on MicroServer Gen8?
 

I tried this on a MicroServer Gen8 without success. Do you have a MicroServer, or do you know whether this works on them? Do you know if a particular/older kernel would make a difference?
 
Thank you very much for your notes here. I am working on using SPDK with Proxmox and Ceph, and this helped me enable VFIO on my Sabrent Rocket Q NVMe disk attached to an HP DL160 Gen8. I have a few more HP servers these notes will apply to, so I will update later if anything changes, but here is what I took from my journey.

Error Received from dmesg: vfio-pci 0000:03:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.

Bash:
echo "deb http://downloads.linux.hpe.com/SDR/repo/stk buster/current non-free" >> nano /etc/apt/sources.list
apt update -y && apt install -y hp-scripting-tools
# dump conrep config
conrep -s
# get slot of nvme card
lspci -vvv | grep "Physical Slot" -B10
# exclude the slot from rmrds/rmrr/iommu handling
echo "<Conrep> <Section name=\"RMRDS_Slot2\" helptext=\".\">Endpoints_Excluded</Section> </Conrep>" >> exclude.dat
# load config and check
conrep -l -x conrep_rmrds.xml -f exclude.dat
conrep -s -x conrep_rmrds.xml -f verify.dat
# you need to add iommu to grub, update, and reboot
nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on intremap=no_x2apic_optout iommu=pt"
update-grub
reboot
# you can now use readlink to check iommu groups of your device
readlink "/sys/bus/pci/devices/0000:04:00.0/iommu_group"
# spdk install git, clone recursively and install deps
#apt install libibverbs-dev librdmacm-dev -y
apt install -y git
git clone https://github.com/spdk/spdk
cd spdk
git submodule update --init
scripts/pkgdep.sh
# can add optional configurations here like --with-rdma
./configure
# make with processor count
make -j $(grep -c ^processor /proc/cpuinfo)
# identify devices before setup
scripts/setup.sh status
export HUGEMEM=8192 && scripts/setup.sh
# you can run this to test/identify devices after setup
build/examples/identify

root@frank:~/spdk# build/examples/identify
[2020-11-16 15:02:40.183244] Starting SPDK v21.01-pre git sha1 a2be54926 / DPDK 20.08.0 initialization...
[2020-11-16 15:02:40.183464] [ DPDK EAL parameters: [2020-11-16 15:02:40.183512] identify [2020-11-16 15:02:40.183558] --no-shconf [2020-11-16 15:02:40.183592] -c 0x1 [2020-11-16 15:02:40.183625] -n 1 [2020-11-16 15:02:40.183655] -m 0 [2020-11-16 15:02:40.183711] --log-level=lib.eal:6 [2020-11-16 15:02:40.183750] --log-level=lib.cryptodev:5 [2020-11-16 15:02:40.183781] --log-level=user1:6 [2020-11-16 15:02:40.183811] --base-virtaddr=0x200000000000 [2020-11-16 15:02:40.183844] --match-allocations [2020-11-16 15:02:40.183875] --file-prefix=spdk_pid58683 [2020-11-16 15:02:40.183906] ]
EAL: No available hugepages reported in hugepages-1048576kB
EAL: No legacy callbacks, legacy socket not created
=====================================================
NVMe Controller at 0000:04:00.0 [1987:5012]
=====================================================
Controller Capabilities/Features
================================
Vendor ID: 1987
Subsystem Vendor ID: 1987
Serial Number: B968070106B500676753
Model Number: Sabrent Rocket Q
Firmware Version: RKT30Q.1
Recommended Arb Burst: 1
IEEE OUI Identifier: a7 79 64
Multi-path I/O
May have multiple subsystem ports: No
May be connected to multiple hosts: No
Associated with SR-IOV VF: No
Max Data Transfer Size: 2097152
Max Number of Namespaces: 1
Error Recovery Timeout: 300000 milliseconds
NVMe Specification Version (VS): 1.3
NVMe Specification Version (Identify): 1.3
Maximum Queue Entries: 16384
Contiguous Queues Required: Yes
Arbitration Mechanisms Supported
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 30000 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 4096 bytes
Optional Asynchronous Events Supported
Namespace Attribute Notices: Not Supported
Firmware Activation Notices: Supported
128-bit Host Identifier: Not Supported

Controller Memory Buffer Support
================================
Supported: No

Admin Command Set Attributes
============================
Security Send/Receive: Supported
Format NVM: Supported
Firmware Activate/Download: Supported
Namespace Management: Not Supported
Device Self-Test: Supported
Directives: Not Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: 1
Firmware Slot 1 Read-Only: No
Firmware Update Granularity: 4 KiB
Per-Namespace SMART Log: No
Asymmetric Namespace Access Log Page: Not Supported
Command Effects Log Page: Not Supported
Get Log Page Extended Data: Not Supported
Telemetry Log Pages: Supported
Error Log Page Entries Supported: 63
Keep Alive: Not Supported

NVM Command Set Attributes
==========================
Submission Queue Entry Size
Max: 64
Min: 64
Completion Queue Entry Size
Max: 16
Min: 16
Number of Namespaces: 1
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Supported
Reservations: Not Supported
Timestamp: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 256
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Not Supported
Scatter-Gather List
SGL Command Set: Not Supported
SGL Keyed: Not Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Not Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported

Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: RKT30Q.1


Error Log
=========

Arbitration
===========
Arbitration Burst: 1

Power Management
================
Number of Power States: 5
Current Power State: Power State #0
Power State #0: Max Power: 5.55 W
Power State #1: Max Power: 4.49 W
Power State #2: Max Power: 3.97 W
Power State #3: Max Power: 0.0490 W
Power State #4: Max Power: 0.0018 W
Non-Operational Permissive Mode: Supported

Health Information
==================
Critical Warnings:
Available Spare Space: OK
Temperature: OK
Device Reliability: OK
Read Only: No
Volatile Memory Backup: OK
Current Temperature: 300 Kelvin (27 Celsius)
Temperature Threshold: 348 Kelvin (75 Celsius)
Available Spare: 100%
Available Spare Threshold: 5%
Life Percentage Used: 2%
Data Units Read: 3302878
Data Units Written: 5126015
Host Read Commands: 54949200
Host Write Commands: 71005176
Controller Busy Time: 504 minutes
Power Cycles: 391
Power On Hours: 598 hours
Unsafe Shutdowns: 184
Unrecoverable Media Errors: 0
Lifetime Error Log Entries: 0
Warning Temperature Time: 0 minutes
Critical Temperature Time: 0 minutes

Number of Queues
================
Number of I/O Submission Queues: 8
Number of I/O Completion Queues: 8

Host Controlled Thermal Management
==================================
Minimum Thermal Management Temperature: 313 Kelvin (40 Celsius)
Maximum Thermal Managment Temperature: 343 Kelvin (70 Celsius)

Active Namespaces
=================
Namespace ID:1
Deallocate: Supported
Deallocated/Unwritten Error: Not Supported
Deallocated Read Value: All 0x00
Deallocate in Write Zeroes: Supported
Deallocated Guard Field: 0xFFFF
Flush: Supported
Reservation: Not Supported
Namespace Sharing Capabilities: Private
Size (in LBAs): 1953525168 (931GiB)
Capacity (in LBAs): 1953525168 (931GiB)
Utilization (in LBAs): 1953525168 (931GiB)
EUI64: 6479A72DB0122BD2
Thin Provisioning: Not Supported
Per-NS Atomic Units: No
NGUID/EUI64 Never Reused: No
Number of LBA Formats: 2
Current LBA Format: LBA Format #00
LBA Format #00: Data Size: 512 Metadata Size: 0
LBA Format #01: Data Size: 4096 Metadata Size: 0
 
Having followed this guide, I still cannot get OpenMediaVault to start with the LSI SAS card enabled (HP ML310e Gen8).
Have I missed something? Anything extra I might need to check?
I've been trying to get this working for the last two days with no luck. I may have to go back to OMV on bare metal.

Thanks
 
