[SOLVED] PCIe passthrough on 4.2 access denied? It's an HP problem with RMRR

mannebk

Renowned Member
May 23, 2016
Hi Folks,

I run an HP ProLiant MicroServer Gen8 with 16 GB RAM and the Xeon E3-1280 v2 CPU.

I want to pass through an HP P420 Smart Array controller to my OMV VM.

I am up to date with the kernel:
Code:
pve-manager/4.2-5/7cf09667 (running kernel: 4.4.8-1-pve)

I followed the Proxmox guide:

https://pve.proxmox.com/wiki/Pci_passthrough
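(In short, the guide boils down to roughly these steps; this is a condensed sketch, not a verbatim quote of the wiki:)

Code:
# 1. enable the IOMMU on the kernel command line in /etc/default/grub:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub

# 2. make the VFIO modules load at boot (append to /etc/modules):
#    vfio
#    vfio_iommu_type1
#    vfio_pci
#    vfio_virqfd

# 3. reboot, then check that the IOMMU is active
dmesg | grep -e DMAR -e IOMMU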

This is the result:

Code:
Running as unit 200.scope.
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to set iommu for container: Operation not permitted
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to setup container for group 11
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to get group 11
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: Device initialization failed
TASK ERROR: start failed: command '/usr/bin/systemd-run --scope --slice qemu --unit 200 --description \''Proxmox VE VM 200'\' -p 'KillMode=none' -p 'CPUShares=1000' /usr/bin/kvm -id 200 -chardev 'socket,id=qmp,path=/var/run/qemu-server/200.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/200.pid -daemonize -smbios 'type=1,uuid=109f454e-d1e3-4404-b756-befa4b924033' -name VM-OMV-2-1 -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga cirrus -vnc unix:/var/run/qemu-server/200.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 4096 -k de -readconfig /usr/share/qemu-server/pve-q35.cfg -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:afe775d9a98c' -drive 'file=/dev/pve/vm-200-disk-1,if=none,id=drive-ide0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -drive 'file=/mnt/pve/System-PRoxmox/template/iso/openmediavault_2.1_amd64.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -netdev 'type=tap,id=net0,ifname=tap200i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=62:35:64:63:39:32,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -machine 'type=q35'' failed: exit code 1

Here is the 200.conf
Code:
bootdisk: ide0
cores: 4
hostpci0: 07:00.0,pcie=1
ide0: file=local-lvm:vm-200-disk-1,size=15G
ide2: file=System-PRoxmox:iso/openmediavault_2.1_amd64.iso,media=cdrom
machine: q35
memory: 4096
name: VM-OMV-2-1
net0: e1000=62:35:64:63:39:32,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=109f454e-d1e3-4404-b756-befa4b924033
sockets: 1

Suggestions?

Thanks
Manne
 
I got IOMMU group separation by adding
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"

Without pcie_acs_override=downstream I had the HP RAID controller in the same group as the PCIe root port; it was under 00:01.0, while the device that is now at 00:01.0 was at 00:01.1.
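(To actually apply a kernel command line change like that, I assume the usual sequence: regenerate the GRUB config, reboot, and check the running command line:)

Code:
update-grub
reboot
# after the reboot the override should show up here
cat /proc/cmdline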

Now it looks like this:
Code:
root@ProLiant-Gen8-pve:~# lspci
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/Ivy Bridge DRAM Controller (rev 09)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
00:06.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
00:1c.6 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 7 (rev b5)
00:1c.7 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 8 (rev b5)
00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
00:1f.0 ISA bridge: Intel Corporation C204 Chipset Family LPC Controller (rev 05)
00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family SATA AHCI Controller (rev 05)
01:00.0 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Slave Instrumentation & System Support (rev 05)
01:00.1 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200EH
01:00.2 System peripheral: Hewlett-Packard Company Integrated Lights-Out Standard Management Processor Support and Messaging (rev 05)
01:00.4 USB controller: Hewlett-Packard Company Integrated Lights-Out Standard Virtual USB Controller (rev 02)
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
03:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe
04:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03)
07:00.0 RAID bus controller: Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01)

root@ProLiant-Gen8-pve:~#

root@ProLiant-Gen8-pve:~#  find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/2/devices/0000:00:06.0
/sys/kernel/iommu_groups/3/devices/0000:00:1a.0
/sys/kernel/iommu_groups/4/devices/0000:00:1c.0
/sys/kernel/iommu_groups/5/devices/0000:00:1c.4
/sys/kernel/iommu_groups/6/devices/0000:00:1c.6
/sys/kernel/iommu_groups/7/devices/0000:00:1c.7
/sys/kernel/iommu_groups/8/devices/0000:00:1d.0
/sys/kernel/iommu_groups/9/devices/0000:00:1e.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.2
/sys/kernel/iommu_groups/11/devices/0000:07:00.0
/sys/kernel/iommu_groups/12/devices/0000:03:00.0
/sys/kernel/iommu_groups/12/devices/0000:03:00.1
/sys/kernel/iommu_groups/13/devices/0000:04:00.0
/sys/kernel/iommu_groups/14/devices/0000:01:00.0
/sys/kernel/iommu_groups/14/devices/0000:01:00.1
/sys/kernel/iommu_groups/14/devices/0000:01:00.2
/sys/kernel/iommu_groups/14/devices/0000:01:00.4
root@ProLiant-Gen8-pve:~#
 
I also tried

Code:
hostpci0: 07:00.0,pcie=1,driver=vfio

Nothing changed.

Every time I start up the host, everything is fine.

After I try to start the VM the fans go into overdrive, but only about 30 seconds after the startup fails.

Repeated VM startups do not change anything.
 
Just as a test I changed
Code:
hostpci0: 07:00.0,pcie=1
to
Code:
hostpci0: 03:00,pcie=1
If you look at the first post, those are both of my NICs.

I then started the VM and my web session to the PVE GUI died instantly. So I switched to the server itself and ran qm for VMID 200, but the VM was dead.

While shutting down for a reboot from the server's console, I had to pull the plug, as the watchdog failed to stop and the server simply did not respond to anything else.

I think the BIOS is seizing the RAID controller somehow and won't release it for passthrough to the VM.

The controller itself is set to "no boot".

I have read some CentOS posts saying that you need to unbind the controller and then rebind it somewhere else...
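(The generic recipe in those posts is to unbind the device from its current driver via sysfs and hand it to vfio-pci; just a sketch of that pattern, untested here:)

Code:
# detach the device from the driver that currently owns it (hpsa)
echo 0000:07:00.0 > /sys/bus/pci/devices/0000:07:00.0/driver/unbind
# ask the kernel to bind it to vfio-pci on the next probe
echo vfio-pci > /sys/bus/pci/devices/0000:07:00.0/driver_override
echo 0000:07:00.0 > /sys/bus/pci/drivers_probe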
 
And this is the verbose output of lspci -vvv for the RAID controller. I know that passthrough works for the P420 with the HP version of VMware ESXi.

Code:
07:00.0 RAID bus controller: Hewlett-Packard Company Smart Array Gen8 Controllers (rev 01)
        Subsystem: Hewlett-Packard Company P420
        Physical Slot: 1
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at fbf00000 (64-bit, non-prefetchable) [size=1M]
        Region 2: Memory at fbef0000 (64-bit, non-prefetchable) [size=1K]
        Region 4: I/O ports at 4000 [size=256]
        [virtual] Expansion ROM at fbe00000 [disabled] [size=512K]
        Capabilities: [80] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [90] MSI: Enable- Count=1/32 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [b0] MSI-X: Enable+ Count=64 Masked-
                Vector table: BAR=0 offset=00002000
                PBA: BAR=0 offset=00003000
        Capabilities: [c0] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <4us, L1 <1us
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
                DevCtl: Report errors: Correctable- Non-Fatal+ Fatal+ Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 256 bytes, MaxReadReq 4096 bytes
                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported, Exit Latency L0s unlimited, L1 <64us
                        ClockPM- Surprise- LLActRep- BwNot-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range B, TimeoutDis+, LTR+, OBFF Via message
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
                UESvrt: DLP- SDES+ TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
        Capabilities: [300 v1] #19
        Kernel driver in use: hpsa
 
I made a little progress...

My fans start to go ape when I issue the command

Code:
echo 0000:07:00.0 > /sys/bus/pci/drivers/pci-stub/bind

I followed this post:

http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM

and first loaded pci_stub with the command
Code:
 modprobe pci_stub

I did this because I found out that this directory did not exist:
Code:
 /sys/bus/pci/drivers/pci-stub/

After that, I could just do:

Code:
root@ProLiant-Gen8-pve:/# echo "103c 323b" > /sys/bus/pci/drivers/pci-stub/new_id
root@ProLiant-Gen8-pve:/# echo 0000:07:00.0 > /sys/bus/pci/devices/0000:07:00.0/driver/unbind
root@ProLiant-Gen8-pve:/# echo 0000:07:00.0 > /sys/bus/pci/drivers/pci-stub/bind

Right then my fans went ape, so I am quite sure that's how far the VM startup gets.
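(A quick sanity check that the rebind actually took:)

Code:
lspci -nnk -s 07:00.0
# should now report "Kernel driver in use: pci-stub" instead of hpsa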

After that I just removed the
Code:
hostpci0: 07:00.0,pcie=1
stuff from <VMID>.conf and booted my VM.

Now I'm running the VM for the OMV ISO setup; later I'll try to add the controller via hotplug from the VM CLI.

Well, the install went fine, but then I tried to add the PCI device via the qm command

Code:
root@ProLiant-Gen8-pve:/# qm monitor 200
Entering Qemu Monitor for VM 200 - type 'help' for help
qm> device_add pci-assign,host=07:00.0,id=test
Bus 'pcie.0' does not support hotplugging
qm> q
root@ProLiant-Gen8-pve:/#

I'm running out of options... :-(
 
Now I tried blacklisting the kernel driver of the RAID module (hpsa), following this:
https://forum.proxmox.com/threads/cant-blacklist-mpt2sas-module.26950/
and ran
# update-initramfs -k all -u
after updating blacklist.conf by adding "blacklist hpsa".
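(Condensed, that step was, assuming the stock /etc/modprobe.d/blacklist.conf location:)

Code:
echo "blacklist hpsa" >> /etc/modprobe.d/blacklist.conf
update-initramfs -k all -u
reboot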

No success.

Then I added

Code:
echo "options vfio-pci ids=10de:1381,10de:0fbc" > /etc/modprobe.d/vfio.conf
from the Proxmox wiki for passthrough.
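(Those ids are the NVIDIA examples from the wiki; for the P420 it would presumably have to be the controller's own vendor:device pair, i.e. the same 103c:323b used with pci-stub above, along these lines:)

Code:
lspci -nn -s 07:00.0            # shows the [vendor:device] pair, here [103c:323b]
echo "options vfio-pci ids=103c:323b" > /etc/modprobe.d/vfio.conf
update-initramfs -k all -u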

Then the fans went ape when the machine booted, but booting the VM still failed.

I always end up with

kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to set iommu for container: Operation not permitted

I even tried disabling the P420 in the server's BIOS; well, then it was gone entirely... no success either.




Is there any way to get to the logs? *tada* I just ran dmesg and there is one thing:

Code:
[   55.477885] vfio-pci 0000:07:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
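(The RMRR regions the firmware declares can also be seen in the boot log; a quick way to list them:)

Code:
dmesg | grep -i -e DMAR -e RMRR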

Now it's time to hit the sack. Tomorrow is another day.
 
For your info, dmesg only gave this away after I did all the modifications mentioned above. I've been on this for about 3 days now.

And now: f*cking s*ite, I've found it!

It's a F*CKING HP problem.

It's related to an address conflict between PCIe passthrough and RMRR. RMRR is used by HP to communicate health data about the hardware inside their systems.

WTF?

They never mentioned this when selling their stuff.

I swear, every time I run into trouble with one of my two HP servers, it's because f*cking HP did something juicy with their hardware or the firmware on that hardware.

And guess what, in all my searches this document never showed itself!

http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c04781229&sp4ts.oid=5249566

HP is very effective at hiding such papers from Google search.

(I found it from here: http://community.hpe.com/t5/ProLian...MU-domain-attach-due-to-platform/td-p/6751904)

HP should ship a lifetime supply of tranquilizers with each machine they sell, just to calm the admins using their buggy hardware. I'm so sick of this company.

You know what, I've got technical support with my SAAPs2 licence, so I will do as dmesg advises and contact my platform vendor. Since both the machine and the controller are from HP, they can't say it's somebody else's problem. And I'll sure make them work until I get a solution for this. I have one year to annoy them, one year of support I paid for, a feature I never wanted to buy but was required to in order to get the SAAPs licence.

Anyway, I've got another SmartCache problem as well, so we'll see how they like eating their own s*ite. I have 2x 512 GB SSDs and SmartCache only accepts 785 GB of cache size; the rest can't be assigned, neither to the first 10 TB logical drive nor to the second 250 GB one.

A very tired and frustrated HP customer
Manne
 
I also tried to fix this issue with the link above, but I had no success. Can you point out what you did to get it working?
 
Sure: I switched operating systems. Away from Proxmox; I use OMV now.

OMV is a native file storage system and also provides a virtualisation environment with VirtualBox and many other add-ons.

Since the file storage is now native on the machine's OS, I don't need to pass the RAID controller through to any VM.

Problem solved.

I agree it's not as nice as Proxmox, but it has worked well and reliably for months now on my ProLiant MicroServer Gen8. I even went to the trouble of porting the OMV installation from the Gen8 to my Gen5 DL380; now I have 20% of the electricity use and 200% of the backup.

The only thing I really miss from Proxmox is the cluster feature. I could use it now with two machines, but since only one is running, I don't really need it.

Cheers
Manne
 
Anyway, next time I'll stick with standard hardware. I could build a micro server as efficient as the HP from any standard mini-ITX hardware. Only HP's iLO is a nice option, but I'm sure there are other options out there too; I just never checked them out.
 
Sure, but it does not offer the nice point-and-click (GUI) option, and I'm sick of starting PuTTY and SSHing into a server every time just to change some file access rules. A web GUI is a nice solution for such stuff, and OMV got the best ratings among NAS storage OSs. So I wanted to run OMV as a VM, but alas, the basic problem was the PCIe passthrough. So I just dumped Proxmox, used OMV as the host OS, and virtualise with VirtualBox.

No further need for PCIe passthrough for me.

Anyway, PCIe passthrough should work fine on any other system, except HP ProLiant servers. Unfortunately, I was stuck with a brand new one.
 
Sure, you could; both are based on Debian as the base OS. But then who would want to maintain two intermixed Debian derivatives and make sure they don't kill each other? More importantly, without opening up holes for crackers to hijack my machine.

I started with Proxmox (because of the Proxmox kernel, while OMV uses the Debian kernel as far as I recall) and put OMV on top of it. I followed OMV's "install on stock Debian" routine.

Let me tell you, those two GUIs f*ck each other up right away. It starts with the IP of the machine. :) etc. I couldn't even fix this on the command line or by putting the same IP in all places... I had lots of fun, then decided that I don't need that bulls*ite, since there is VirtualBox on OMV. Now I just run the native update routine and everything is fine. AND since the main developer of OMV is German, I could call him and talk in my native language, maybe even hire him to help me with a vital problem, whatever. Whereas when I had these problems with Proxmox, nobody even gave me a hint. Just read through this very thread: it's only me posting over several days, and I'm surely not the first to have this very problem, but none of the developers or experts cared at all.

Well, in some respects I still miss Proxmox, e.g. the user management and its ability to grant only certain privileges to a user. But on the other hand, I like the simplicity of VirtualBox... so, as long as I have my VM host AND my file server in one machine AND there is a virtualisation environment available for OMV, I will stay with OMV.

Keep in mind, I don't run high-load machines: an Asterisk box, a Win7 for printing (because one client is connected via VPN, and with Windows, VPN print jobs go haywire if the VPN connection or the printer goes away for some reason), and a Debian Wheezy with MySQL 4 for my small office software. That's all I virtualize, nothing fancy. I had most of it running on a QNAP Atom system, the TS-659. Back then it was an XP with MySQL on Windows and no Asterisk. That's also a native NAS OS plus VirtualBox on top of it as a custom add-on from fathermande out of France. It worked for about 6 years with no problems at all; now it's just the file backup system.

And I'm not a console freak; on DOS yes, but no clue about Linux. I did learn a bit along the way, but I still need to look up every command, and that's clearly too much time to invest to solve things like OMV on the Proxmox kernel or similar, if not even the Proxmox experts and developers have anything to say about it.

Cheers
 
Thank you very much for your detailed answer. Changing the operating system is of course a possibility, but for me it is not really a solution at the moment; I do not want to give up Proxmox.

I tried the HP fix but had no luck at all. This is what I have done:
http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c04781229&sp4ts.oid=5249566

1. Update the system to the latest firmware.
2. Install the HP Health and HP Scripting Tools RPMs. The RPMs can be obtained from the HP Support Center or the HP Software Delivery Repository, for example:
https://downloads.linux.hp.com/SDR/repo/
Depending on the requirements for health monitoring, these packages can be removed again after the BIOS configuration has been done.
3. Download this file, which contains the conrep configuration information for managing individual PCI slots:
ftp://ftp.hp.com/pub/softlib2/softwa...nrep_rmrds.xml
4. Create a file (called exclude.dat in these instructions) with the following line:
<Conrep> <Section name="RMRDS_SlotX" helptext=".">Endpoints_Excluded</Section> </Conrep>
Replace the "X" in "SlotX" with the number of the PCI slot that holds the NIC. The Linux command "lspci" can be used to identify the physical PCI slot; the information can also be found on the "Device Inventory" tab of the iLO System Information screen.
5. Enter the following command to disable the conflict for that slot (enabling the PCI device passthrough function):
# conrep -l -x conrep_rmrds.xml -f exclude.dat

Depending on the server model, output like this or similar should be displayed:

conrep 4.1.2.0 - HP Scripting Toolkit Configuration Replication Program
Copyright (c) 2007-2014 Hewlett-Packard Development Company, L.P.

System Type: ProLiant BL460c Gen8
ROM Date: 02/10/2014
ROM Family: I31
Processor Manufacturer: Intel

XML System Configuration: conrep_rmrds.xml
Hardware Configuration: exclude.dat
Global Restriction: [3.40 ] OK

Loading configuration data from exclude.dat

Conrep Return Code: 0

6. Repeat step 5 for every PCI slot for which the PCI device passthrough function should be enabled.
7. Confirm the BIOS settings by using the conrep command to report the current configuration:
# conrep -s -x conrep_rmrds.xml -f verify.dat

Output like this or similar should be displayed:

conrep 4.1.2.0 - HP Scripting Toolkit Configuration Replication Program
Copyright (c) 2007-2014 Hewlett-Packard Development Company, L.P.

System Type: ProLiant BL460c Gen8
ROM Date: 02/10/2014
ROM Family: I31
Processor Manufacturer: Intel

XML System Configuration: conrep_rmrds.xml
Hardware Configuration: verify.dat
Global Restriction: [3.40 ] OK

Saving configuration data to verify.dat

Conrep Return Code: 0

Confirm that only the slots configured in the previous step are listed in verify.dat with "Endpoints_Excluded".
8. If you use one of the HP NICs listed above, do NOT install the "Intel Active Health System Agent for HP ProLiant Network Adapters for Linux x86_64" contained in the hp-ocsbbd RPM. This agent expects to use the RMRR region to transfer sensor and health data and should NOT be running while the RMRR region is disabled; if it is installed, it must be removed.
9. Restart the server.

I downloaded the Scripting Toolkit:
https://www.hpe.com/de/de/product-c...ng-toolkit-for-windows-and-linux.5219389.html

or direct:
http://downloads.hpe.com/pub/softli...6/hpe-scripting-toolkit-linux-10.50-41.tar.gz

I unpacked this file. Into the "scripts" folder I downloaded the conrep_rmrds.xml:
ftp://ftp.hp.com/pub/softlib2/software1/pubsw-linux/p1472592088/v95853/conrep_rmrds.xml
and created the exclude.dat with:

Code:
<Conrep> <Section name="RMRDS_Slot1" helptext=".">Endpoints_Excluded</Section> </Conrep>

Slot1 in my case, because my HP P410 RAID controller is in slot 1.
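(The physical slot number can also be read straight from the verbose lspci output, which in the earlier post showed "Physical Slot: 1":)

Code:
lspci -vv -s 07:00.0 | grep "Physical Slot"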

Then I created a bootable ISO with:
$ ./mkiso.sh

Then I booted the server from this ISO and ran these steps to apply the fix for PCI Slot1:

$ conrep -l -x conrep_rmrds.xml -f exclude.dat

and

$ conrep -s -x conrep_rmrds.xml -f verify.dat

The output was like in the tutorial.

After this I restarted the server, but nothing changed. I still get the same error message when passing through the PCI device:

Code:
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to set iommu for container: Operation not permitted
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to setup container for group 1
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to get group 1
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: Device initialization failed

Code:
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=5da287d8-6d3c-4b25-9dfd-f7f1a0b5b2ed' -name NAS -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga cirrus -vnc unix:/var/run/qemu-server/101.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 8192 -k de -readconfig /usr/share/qemu-server/pve-q35.cfg -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:801b4b2e85f' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=100' -drive 'file=/var/lib/vz/images/101/vm-101-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=200' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=12:1F:B8:71:8E:16,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -machine 'type=q35'' failed: exit code 1
 
Hello,
I know this is an old topic, but I also ran into the same problem on my MicroServer Gen8, with GPU passthrough.
I tried all the steps, but still get the same message:
Code:
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to set iommu for container: Operation not permitted
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to setup container for group 1
kvm: -device vfio-pci,host=07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to get group 1

Can you maybe help me solve this problem?
Would really appreciate that.
 
