Can’t pass through P840

fisty

I am trying to pass my Smart Array P840 controller through to a TrueNAS VM. The server is a DL380 Gen9. I have looked at:
https://pve.proxmox.com/wiki/PCI_Passthrough
https://forum.proxmox.com/threads/pci-passthrough-problems.65485/page-2
which led me to:
https://community.hpe.com/t5/prolia...d-memory-features-on-microserver/td-p/7105623
but it just won't budge...
Proxmox Virtual Environment 8.2.7, Virtual Machine 100 (truenas) on node 'pve'. The start task fails with:
Code:
kvm: -device vfio-pci,host=0000:08:00.0,id=hostpci0,bus=pci.0,addr=0x10: vfio 0000:08:00.0: failed to setup container for group 59: Failed to set group container: Invalid argument
TASK ERROR: start failed: QEMU exited with code 1
What am I missing?
I also followed https://www.youtube.com/watch?v=_hOBAGKLQkI to no avail, creating the TrueNAS VM with UEFI.
An Nvidia GPU I am able to pass through with no issue.
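For the record, the failing group can be inspected from the host; a quick sketch, assuming the controller is still at 0000:08:00.0 as in the error above:

Bash:
# list every device that shares the controller's IOMMU group (group 59 in the log)
$ for d in /sys/bus/pci/devices/0000:08:00.0/iommu_group/devices/*; do lspci -nns "${d##*/}"; done
# RMRR messages are common on HPE Gen9 boxes and worth checking too
$ dmesg | grep -i rmrr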
 
Hey there... I have a DL360 Gen9 with the P440ar. After a ton of digging, I finally got passthrough working. Here's what I did:

1. On your Proxmox host, edit your /etc/default/grub file as follows (a note on applying the change follows this list):

Code:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet"   <- comment out this line, then add the line below
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on,relax_rmrr iommu=pt intremap=no_x2apic_optout"


2. At this point, I could assign the PCI device to my UEFI VM and get past the "QEMU exited with code 1" failure at boot. However, the moment the VM started, my fans would race, and eventually I would get a "storage failure" alert in iLO. The fans would continue to race until the entire server was rebooted.

3. The final setting that enabled passthrough without issue: go into the BIOS, select the RAID controller, and disable the shared memory feature. Once this was done, everything worked as expected.
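
To make the change from step 1 take effect on a legacy (BIOS/GRUB) boot, the config has to be regenerated; a minimal sketch, assuming a stock Proxmox install (on a UEFI/systemd-boot install, see the /etc/kernel/cmdline steps in the next post instead):

Bash:
# regenerate /boot/grub/grub.cfg from /etc/default/grub, then reboot
$ update-grub
$ reboot
# after the reboot, confirm the kernel picked up the new flags
$ cat /proc/cmdline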

I hope this helps!
 

I created an account on the forum just to say thank you! I feel lucky that I'm working on this at the same moment that you wrote it! :)

In my case, since my Proxmox boots with EFI, the steps are slightly different. I'm posting an excerpt from my documentation. This was tested on Proxmox `8.3.4` running on an HPE DL380 Gen9 on 06/03/2025:
1. First, press F9 during POST to enter System Utilities. Then, navigate to `System Configuration` -> `Virtualization Options`. Ensure that all features (`Virtualization Technology`, `Intel(R) VT-d`, and `SR-IOV`) are enabled.
2. Navigate to `System Configuration` -> `Embedded RAID 1: Smart Array P840ar Controller`. Disable the `HPE Shared Memory features` option. Save and exit.
3. Enter the Proxmox node's shell. Edit the file at `/etc/kernel/cmdline`. To the end of the first line (not as a new line), add the following:
Code:
intel_iommu=on,relax_rmrr iommu=pt intremap=no_x2apic_optout
4. Edit the file at `/etc/modules`. Add the following to the file's contents:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
5. Run the following two commands:
Bash:
$ proxmox-boot-tool refresh
$ update-initramfs -u -k all
6. Reboot the system. Then, verify IOMMU enablement by running the following command. If IOMMU was enabled correctly, a message similar to `DMAR: IOMMU enabled` should be visible:
Bash:
$ dmesg | grep -i iommu
7. Create your VM. Make sure you select the `q35` machine type and `OVMF (UEFI)` BIOS type (if your target OS supports it).
8. Navigate to `Hardware` under your VM and click on `Add` -> `PCI Device`.
9. Select the following options and click on `Add`:

| Parameter | Value |
| --- | --- |
| Mapped Device | unchecked |
| Raw Device | checked |
| Device | select "Smart Array Gen9 Controllers" |
| All Functions | checked |
| Primary GPU | unchecked |
| ROM-Bar | checked |
| Vendor ID | blank |
| Device ID | blank |
| PCI-Express | checked |
| Sub-Vendor ID | blank |
| Sub-Device ID | blank |


10. Start the VM.
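
For reference, step 9 can also be done from the shell; a sketch, assuming VM ID 100 and the controller address 0000:08:00.0 from the first post (with `All Functions` checked, the function suffix is dropped):

Bash:
# add the raw device to the VM: all functions, PCI-Express enabled
$ qm set 100 -hostpci0 0000:08:00,pcie=1
# which is stored in /etc/pve/qemu-server/100.conf as:
#   hostpci0: 0000:08:00,pcie=1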

Hope this helps!
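
One last cheap sanity check: once the VM is up, the host should show the controller handed over to vfio. A sketch, again assuming the controller sits at 0000:08:00.0:

Bash:
# "Kernel driver in use" should read vfio-pci while the VM holds the device
$ lspci -nnk -s 0000:08:00.0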