This is my effort to document for the community my quest of getting FreeNAS to run as a VM under Proxmox, with an LSI HBA adapter passed through to FreeNAS, so we get the best of both worlds.
My reasoning was that I did not want to give up the comfort and stability of either Proxmox or FreeNAS, but I refused to run two separate servers, as I have a very well equipped HPE ProLiant ML350p Gen8.
The Proxmox documentation showed me that the IOMMU has to be activated on the kernel command line. The command-line parameters are:
for Intel CPUs:
intel_iommu=on
for AMD CPUs:
amd_iommu=on
The kernel command line needs to be placed in the variable GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running update-grub appends its content to all Linux entries in /boot/grub/grub.cfg.
Code:
nano /etc/default/grub
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
# 22.5.2020 (HJB) added to enable PCIe passthrough.
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
# Disable os-prober, it might add menu entries for each guest
GRUB_DISABLE_OS_PROBER=true
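After saving the file, run update-grub (as the comment at the top of the file says) so the new command line is written to all Linux entries in /boot/grub/grub.cfg:
Code:
update-grub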
After changing anything module-related, you need to refresh your initramfs. On Proxmox VE this can be done by executing:
update-initramfs -u -k all
Finish Configuration
Finally reboot to bring the changes into effect and check that it is indeed enabled.
Code:
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[ 0.007731] ACPI: DMAR 0x00000000BDDAD200 000558 (v01 HP ProLiant 00000001 \xd2? 0000162E)
¦
¦
[ 111.511173] vfio-pci 0000:03:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
[ 151.942005] vfio-pci 0000:0d:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
root@pve:~#
The last two lines, vfio-pci 0000:03:00.0 and vfio-pci 0000:0d:00.0, are the onboard P420i and the LSI SAS 9207-8i controller that we would like to pass through. The message "Device is ineligible for IOMMU domain attach" actually comes from the BIOS of the HPE server. So it seems Proxmox is getting ready, but the server is still claiming these devices for IPMI/iLO.
A hint found online suggested adding a parameter that allows unsafe interrupts for the vfio_iommu_type1 module, so let's try it out.
Changing the modules file this time:
Code:
nano /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# 22.5.2020 (HJB) Added ".allow_unsafe_interrupts=1" to enable PCIe passthrough
vfio
vfio_iommu_type1.allow_unsafe_interrupts=1
vfio_pci
vfio_virqfd
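As before, after changing anything module-related, we refresh the initramfs and reboot:
Code:
update-initramfs -u -k all
reboot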
After the reboot, the "device is ineligible" lines have disappeared when checking with:
Code:
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
Yet when we tried to pass through the LSI controller, we were still confronted with QEMU aborting the VM start with exit code 1. It did the same with the P420i when we tried that as well. So I removed the .allow_unsafe_interrupts=1 entry and rebooted once more.
A search online for the text "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor" brought us to HPE document ID c04781229, which describes problems that occur when passing through PCIe devices (in the document it is GPUs).
At first I understood just about nothing from the document, but together with the blog post of Jim Denton it started to make sense. (Thank you Jim for taking the time to post it.)
After studying the blog post and the HPE document, it became clear to me that PCIe passthrough is not possible for the internal P420i controller, but is very well possible for devices in the physical PCIe slots. As I also have an LSI HBA controller, this seemed promising.
From here on it went relatively smoothly. Just follow the next steps.
We need the HPE scripting utilities, so we add the HPE repository to either the enterprise or the community sources list:
nano /etc/apt/sources.list.d/pve-enterprise.list
or
nano /etc/apt/sources.list
Add the following line and save the file:
deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free
Now we add the HPE public keys by executing:
curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
cd /etc/
apt update
And we can install the scripting utilities with:
apt install hp-scripting-tools
Now we download the input file for HPE's conrep script. Just to keep track of where everything is, I switched to the /home directory.
cd /home
wget -O conrep_rmrds.xml https://downloads.hpe.com/pub/softlib2/software1/pubsw-linux/p1472592088/v95853/conrep_rmrds.xml
We're nearly there now, hang on. Of course we need to know which PCIe slot our controller is in. We use lspci to list all our PCIe devices to a file and nano to scroll through it and find the controller:
lspci -vvv &> pcie.list
nano pcie.list
In our case we found this: our LSI controller, with PCIe device ID 0000:0d:00.0 as Proxmox knows it, sits in physical slot 4.
0d:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
Subsystem: LSI Logic / Symbios Logic 9207-8i SAS2.1 HBA
Physical Slot: 4
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
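If you already know roughly what you are looking for, a quicker way to find the slot is to filter the lspci output directly; a minimal sketch:
Code:
# print only the controller lines and their physical slot numbers
lspci -vvv | grep -e "Serial Attached SCSI" -e "Physical Slot:"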
So we can create an exclude file for that slot:
cd /home
nano exclude.dat
Add the following line to it and save:
<Conrep> <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section> </Conrep>
Now we're ready to rock and roll. Sorry about that; I mean, ready to run the conrep utility from HPE, which excludes our PCIe slot from the RMRR requirement of the iLO/IPMI:
conrep -l -x conrep_rmrds.xml -f exclude.dat
And we verify the results with:
conrep -s -x conrep_rmrds.xml -f verify.dat
nano verify.dat
Now we should see something like this
<?xml version="1.0" encoding="UTF-8"?>
<!--generated by conrep version 5.5.0.0-->
<Conrep version="5.5.0.0" originating_platform="ProLiant ML350p Gen8" originating_family="P72" originating_romdate="05/24/2019" originating_processor_manufacturer="Intel">
<Section name="RMRDS_Slot1" helptext=".">Endpoints_Included</Section>
<Section name="RMRDS_Slot2" helptext=".">Endpoints_included</Section>
<Section name="RMRDS_Slot3" helptext=".">Endpoints_Included</Section>
<Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
¦
</Conrep>
Time to reboot the Proxmox Server for the last time before we can celebrate.
Adding the PCIe device to the FreeNAS VM
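The controller can be attached through the web GUI (the VM's Hardware tab, Add > PCI Device) or from the shell. A minimal sketch, assuming the FreeNAS VM has the hypothetical ID 100 and using the LSI controller's address 0000:0d:00.0 found earlier:
Code:
# hypothetical VM ID 100; adds the HBA as a passthrough PCI device
qm set 100 -hostpci0 0d:00.0
This ends up as a "hostpci0: 0d:00.0" line in /etc/pve/qemu-server/100.conf.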
That was all; the VM now happily starts with the passed-through controller.
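As a quick sanity check inside the FreeNAS (FreeBSD) guest, the HBA should attach to the mps driver and its disks should be visible; a minimal sketch:
Code:
# run in the FreeNAS shell; assumes the SAS2308 attaches via the mps(4) driver
dmesg | grep mps
camcontrol devlist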
Since then I've also used this procedure to pass through an Intel Optane 900P PCIe SSD.
Best regards
Henk