[SOLVED] QEMU exited with code 1 - PCIe passthrough not working

djkay2637

New Member
Apr 4, 2024
Good afternoon All,

I have been frantically attempting to resolve an issue that, I have to admit, has been keeping me awake at night. Like lots of people, I have been migrating away from ESXi and over to Proxmox VE. So far so good with my homelab server.

I have a fresh install of the latest 8.2.2 on a ZFS SSD, have enabled IOMMU, and have followed every tutorial I can find, but when I attempt to pass my HBA card through to a VM I get the error below. It worked fine in ESXi, so I know the BIOS is set correctly.


This is the error:
Code:
kvm: -device vfio-pci,host=0000:07:00.0,id=hostpci0,bus=pci.0,addr=0x10: vfio 0000:07:00.0: failed to setup container for group 50: Failed to set group container: Invalid argument
TASK ERROR: start failed: QEMU exited with code 1

There are no other devices in group 50. I have blacklisted the mpt3sas driver; however, when Proxmox VE boots, the disks from the SAS card are still visible on the host (not sure if this is relevant). When the VM starts, they do disappear.
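
For anyone checking the same things, the members of an IOMMU group can be listed from sysfs, and driver blacklisting is normally done with a modprobe.d file; a minimal sketch (the group number and file name are examples, not necessarily what was used here):
Code:
# list every device in IOMMU group 50
ls /sys/kernel/iommu_groups/50/devices/

# /etc/modprobe.d/blacklist-mpt3sas.conf
blacklist mpt3sas

After changing anything under modprobe.d, the initramfs usually needs regenerating (update-initramfs -u -k all) plus a reboot for the blacklist to take effect early in boot.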

The server is an HP DL380e G8. PCIe device:
Code:
07:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
        DeviceName: Storage Controller
        Subsystem: Broadcom / LSI 9211-8i
        Kernel driver in use: vfio-pci
        Kernel modules: mpt3sas

The VM is a q35 machine type with RAM ballooning off, and I am using the same settings as I did for my Supermicro server, which worked as expected.
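
For comparison, a passthrough entry in the VM config (/etc/pve/qemu-server/<vmid>.conf) generally looks something like the lines below; these values are illustrative rather than the exact config in use here:
Code:
machine: q35
balloon: 0
hostpci0: 0000:07:00.0,pcie=1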

Does anybody have any suggestions I could try?

Thank you in advance,
Kind regards,
Dan.
 
Is there anything relevant visible in the journal when you tried to start the VM?

Also, can you post the complete output of 'dmesg'?
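
Something along these lines usually captures the relevant output (the grep filter is only a suggestion):
Code:
journalctl -b > journal.txt
dmesg > dmesg.txt
dmesg | grep -i -e vfio -e iommu -e dmar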
 
This is the relevant line from dmesg:
Code:
[ 51.191472] vfio-pci 0000:07:00.0: Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.

Thanks for your reply.
 
Never mind, it actually worked!

Line from the guide:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on,relax_rmrr iommu=pt intremap=no_x2apic_optout"

What I used, and what worked for me:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on,relax_rmrr vfio_iommu_type1.allow_unsafe_interrupts=1 iommu=pt intremap=no_x2apic_optout"

Note that the comma in "intel_iommu=on,relax_rmrr" matters; I had used a space instead, and that did not work.
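
For anyone else applying this on a GRUB-booted install, the usual follow-up after editing /etc/default/grub is roughly:
Code:
update-grub
reboot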

Thanks for your reply though! Have a good one. :)
 
Thank you as well, this also worked for me. The steps are slightly different since I am running on ZFS, so the kernel command line is set in /etc/kernel/cmdline rather than /etc/default/grub.


I updated /etc/kernel/cmdline to contain this line instead of what was there:
Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on,relax_rmrr vfio_iommu_type1.allow_unsafe_interrupts=1 iommu=pt intremap=no_x2apic_optout

After saving, I ran:
Code:
proxmox-boot-tool refresh
update-initramfs -u -k all

Then rebooted.
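
As a quick sanity check after the reboot (assuming a standard setup), you can confirm the new parameters made it onto the running kernel's command line and that the IOMMU came up:
Code:
cat /proc/cmdline
dmesg | grep -i -e dmar -e iommu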

Worth mentioning that I had already followed the PCIe passthrough guide. I'm passing through a Dell PERC controller (detected as an LSI SAS2008 with Dell firmware) in a PowerEdge T640, running in JBOD mode.