Stopped: Start failed: QEMU exited with code 1 [Win 10 - Wi-Fi PCIe adapter passthrough]

krish

New Member
Aug 14, 2021
Hello, I recently added a Wi-Fi adapter (Asus PCE-AC68) to my system. I first installed the drivers on Windows 10 VM and then added the "PCI Device" from "Hardware" settings.

The adapter shows up when I run lspci:

Code:
01:00.0 Network controller: Broadcom Limited BCM4360 802.11ac Wireless Network Adapter (rev 03)

However, when I start the VM after adding the PCIe adapter, I get an error - Stopped: Start failed: QEMU exited with code 1.

Here is the output:

Code:
kvm: -device vfio-pci,host=0000:01:00.0,id=hostpci0,bus=pci.0,addr=0x10: vfio 0000:01:00.0: failed to open /dev/vfio/1: Device or resource busy
TASK ERROR: start failed: QEMU exited with code 1

I have successfully enabled PCI(e) passthrough by following the guidance, and the SATA controller passthrough is running seamlessly. I couldn't find any drivers for this adapter for Linux or Proxmox - are they even required on the host, given that I have already installed the driver inside the Win 10 VM?

I could not find any relevant discussions on this forum. Can you please help troubleshoot this issue? Please note I don't have much experience, and this is my first setup. If you need any further details, please let me know.
 
Please post the output of lspci -nnk. I suspect what is happening here is that the host kernel has a built-in driver claiming the device, thus blocking the VM from using it. Follow the guide on GPU passthrough (substitute "GPU" with "WiFi controller") to force-bind vfio-pci to the device at boot time.
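For reference, force-binding usually comes down to handing the device's vendor:device ID to vfio-pci in a modprobe config and rebuilding the initramfs. A minimal sketch - the 14e4:43a0 ID is what BCM4360 cards typically report, so verify yours with lspci -nn first:

Code:
# /etc/modprobe.d/vfio.conf - claim the WiFi card for vfio-pci at boot
# (vendor:device ID assumed for a typical BCM4360; check with: lspci -nn -s 01:00.0)
options vfio-pci ids=14e4:43a0

# rebuild the initramfs so the option takes effect, then reboot
update-initramfs -u -k all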

Also check that your device is in an isolated IOMMU group. Please post the output of the command described here.
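If the link isn't handy, a generic shell loop like this one (just a sketch, not Proxmox-specific) prints each PCI device together with its IOMMU group:

Code:
# list every PCI device together with the IOMMU group it belongs to
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=$(basename "$(dirname "$(dirname "$d")")")
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "$(basename "$d")"
done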
 
Would you please explain a little bit how to substitute "GPU" with "WiFi Controller"? Should I follow everything from the section "First, find the device and vendor id of your vga card"? Also, when you say substitute, do you mean the word or some ID?
Sorry, that was unclear on my part. I didn't mean literally substitute, just "follow the instructions for your WiFi card, even though it says GPU". But the lspci output you posted shows the vfio-pci driver already bound, so forget that part - that shouldn't be the problem.

Looking at the IOMMU grouping, it would seem that the overlapping groups are what cause the error. IOMMU groups and VMs have a 1:1 mapping; that means that when you assign a PCI device to a VM, *all* the devices in that IOMMU group (group 1 for your WiFi controller) are assigned to that VM (even if not visible in the guest) and cannot be assigned to a different one. Are you assigning your 0000:02:00.0 SAS device to a VM too? The same one? Please also post your VM config (qm config <vmid>).
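For context, passthrough shows up as hostpci lines in the VM configs; a hypothetical excerpt of what the two configs might look like (VM IDs assumed for illustration):

Code:
# qm config 100  (Windows 10 VM - hypothetical ID)
hostpci0: 0000:01:00.0

# qm config 101  (NAS VM - hypothetical ID)
hostpci0: 0000:02:00.0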

At your own risk, you can try adding pcie_acs_override=downstream to your kernel command line. This will force separate IOMMU groups, at the expense of potential stability and security issues.
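A sketch of where that option goes, assuming a GRUB-booted Proxmox host (systemd-boot installs use /etc/kernel/cmdline and proxmox-boot-tool refresh instead):

Code:
# /etc/default/grub - append the override to the options you already have
# (intel_iommu=on shown only as an example of an existing option)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"

# regenerate the GRUB config, then reboot
update-grub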
 
Thanks, Stefan. I am able to follow you now :-). I don't have access to the system right now, but based on your response, it looks like I have not assigned or manually configured the mapping. You are right - hostpci 02, the SAS controller, is dedicated to another VM running a NAS. I followed the guide I mentioned earlier to pass through the SAS controller, and it is working perfectly. I think both devices, 02 (SAS controller) and 01 (WiFi controller), are sitting in the same group. Adding the hardware through the UI probably failed because I didn't make any changes to the configuration file? I have no idea. However, since I have already enabled IOMMU and the other prerequisites, are there any specific steps I need to take to map the WiFi controller to a specific VM ID (through mapping or the config)?
 
The GUI doesn't check for IOMMU groups; that is your responsibility. Meaning, when you assigned the WiFi card to a different VM, it most likely *did* save and get applied, but when you then try to start that VM, it sees that IOMMU group 1 is already connected to your NAS VM (since you pass through the SAS controller, which is in the same group), and thus fails to use the WiFi card - the same group *cannot* be connected to two VMs.

The only solution is to move the WiFi card and the SAS controller to separate IOMMU groups. This can be done in two ways:
  • try a different physical PCIe slot for either the SAS or the WiFi card
  • use the acs_override option I mentioned above - this forces every device into a separate group, but breaks a security boundary (your NAS and WiFi VMs could then potentially share data without the host knowing) and may introduce instability
 
Thank you, Stefan. This is a great explanation. I will try using another slot to see if it makes any difference. Also, I just read somewhere that a processor with the ACS feature would automatically put each device in a separate group - clearly, my processor doesn't seem to have that capability. Thanks for the suggestions.
 