[SOLVED] USB passthrough list empty after enabling IOMMU

Pheggas

New Member
May 8, 2022
Hey, y'all. I'd like to introduce my setup first and then describe the issue. I have a mini PC as my workstation / server that now hosts Proxmox 8. Until now I've been running a TrueNAS VM with my HDD box connected over USB. Those two HDDs are connected to the server via USB, and I need IOMMU turned on in order to pass through the PCIe interface that hosts them.

The issue:
I first blamed Proxmox 7.1, and later 7.4, for USB devices not showing up in the passthrough USB list (VM -> Hardware -> USB Device -> Use USB Vendor/Device ID). But after a clean upgrade to Proxmox 8, I noticed the TrueNAS VM refuses to boot because IOMMU is not enabled in the GRUB / kernel settings (/etc/default/grub). And now I've realised that I could see all the devices in the USB list right up until I enabled IOMMU.
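For anyone following along: enabling IOMMU boils down to adding the flags to the kernel command line and regenerating GRUB. A rough sketch of what I mean, assuming an Intel CPU and the default GRUB boot (AMD boxes use amd_iommu=on instead, and systemd-boot setups use a different file):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# apply the change and reboot
update-grub
reboot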

I need to see the USB devices in order to know which one to pass through to my Home Assistant VM (it's a Zigbee controller, in case anyone asks).

My question now is: what should I do about it? It's obviously a bad situation to be able to have either the NAS or the home automation, but not both. There must be a better way to do this. Any suggestions?

PS: I would be very thankful if anyone replied with any suggestion.
 
I do not entirely understand your problem: you've got a PVE host with a TrueNAS guest (VM) and USB passthrough is working (?), but once you configure PCI passthrough you can't see your USB passthrough devices anymore?

First, have you mapped your devices before passing them through?
Second, please check that you've configured USB passthrough properly [1] and also PCI(e) passthrough [2] (see the CLI sketch after the links).



[1] https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines
[2] https://pve.proxmox.com/wiki/PCI(e)_Passthrough
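For the vendor/device-ID style described in [1], the CLI form is just a couple of commands; a minimal sketch (the VM ID 100 and the ID pair 0123:4567 are placeholders, lsusb shows the real values):

lsusb
qm set 100 -usb0 host=0123:4567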
 
Did you maybe already pass through a PCIe device? If the USB controller was in the same IOMMU group, that USB controller would have been passed through into the VM too. So the PVE host wouldn't be able to use USB anymore and therefore wouldn't be able to list any USB devices.

So I would recommend checking the IOMMU groups: pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
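If pvesh is awkward, the same grouping can be read straight from sysfs on the host; a rough equivalent (assumes lspci from pciutils is installed):

for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done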
 
I do not entirely understand your problem: you've got a PVE host with a TrueNAS guest (VM) and USB passthrough is working (?), but once you configure PCI passthrough you can't see your USB passthrough devices anymore?
Yup, basically. I now suspect it's a Proxmox issue / bug, because there's no reason I shouldn't see USB devices in Proxmox after enabling IOMMU. Sometimes it shows both PCIe devices and USB devices correctly, but only sometimes. See below.
Did you maybe already pass through a PCIe device? If the USB controller was in the same IOMMU group, that USB controller would have been passed through into the VM too. So the PVE host wouldn't be able to use USB anymore and therefore wouldn't be able to list any USB devices.

So I would recommend checking the IOMMU groups: pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
Thank you for the suggestion. See below:

The discovery

It turned out the USB devices are recognised and displayed only if none of the VMs use the USB host controller, which is listed among the other PCIe devices. If it is in use by any VM, the USB devices are no longer recognised or visible to any other VM. This is a major problem for me, as I use the HDD box to connect my HDDs to Proxmox and then to the TrueNAS VM. I've been trying to find a solution that doesn't require changing the hardware side of my setup.
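For the record, a quick way to confirm which VM has grabbed the controller, and whether the host still owns it, is to look at the VM configs and the kernel driver binding; a sketch (00:14.0 is a placeholder for whatever address lspci shows for the USB controller):

grep -H hostpci /etc/pve/qemu-server/*.conf
lspci -k -s 00:14.0   # "Kernel driver in use: vfio-pci" means a VM owns it; xhci_hcd means the host does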

My decision

If I pass the USB PCIe host controller to one VM, I can't see any USB devices from any other VM anymore. This is a major issue, as I want to run TrueNAS and Home Assistant side by side. In the end I think the real source of the problem is the HDD box itself, because if I pass through only the USB devices that correspond to the individual HDDs in the box, TrueNAS throws errors that it can't read the data, headers etc. So at this point I can only choose between the two: to have a NAS or to have home automation. I'm choosing both:

As TrueNAS is also capable of hosting VMs, I decided to use it as my host system instead of Proxmox. With this change, the whole USB host controller automatically belongs to TrueNAS as the host system, and I will just configure it to pass the Zigbee controller stick to the Home Assistant VM.

So the only change would be in the host OS. Just one more question: could I create a system image using Clonezilla? I have my main Ubuntu Server VM that I need to back up and restore on the new host OS. The plan would be to back up a system image of the main VM, create a VM in TrueNAS, and restore it (again using Clonezilla). Do you think this would work?
 
Hi,

I now suspect it's a Proxmox issue / bug, because there's no reason I shouldn't see USB devices in Proxmox after enabling IOMMU.
That's more a hardware problem than a software one, as mentioned above. If the whole USB-controller PCIe device is passed through (due to IOMMU grouping), the host OS can of course no longer see the USB controller. Consumer hardware (which you have, since you say mini PC) mostly has shoddy firmware and/or IOMMU groups set up by the vendor. Just search around; unfortunately there are quite a few people with similar problems. Just to clarify.

So the only change would be in the host OS. Just one more question: could I create a system image using Clonezilla? I have my main Ubuntu Server VM that I need to back up and restore on the new host OS. The plan would be to back up a system image of the main VM, create a VM in TrueNAS, and restore it (again using Clonezilla). Do you think this would work?
The TrueNAS documentation mentions that this should be doable here. According to that, you can also export the raw disk image and dd it directly to the zvol created by TrueNAS for the VM. But Clonezilla should work too.
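Roughly, the dd route would look like this (pool, zvol and file names are placeholders; if the source disk sits on LVM-thin it is already raw and can be read directly from the logical volume):

# on the Proxmox side: export the VM disk as a raw image
qemu-img convert -O raw /var/lib/vz/images/101/vm-101-disk-0.qcow2 ubuntu-server.raw

# on the TrueNAS side, after creating a zvol for the new VM
dd if=ubuntu-server.raw of=/dev/zvol/tank/ubuntu-server bs=1M status=progress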
 
Hi Pheggas,
So the only change would be in the host OS
Out of interest, what stops you from using containers instead of VMs? With TrueNAS being available in a Linux flavour as well, the kernel should not be a limiting factor.

I'd say that resource sharing is more flexible in a container than with VMs, and by ditching the OS chores, it leaves more resources for 'useful' tasks in your various guests.
 
Just search around; unfortunately there are quite a few people with similar problems. Just to clarify.
That's really sad, but we got what we needed. I'm happy with the hardware I chose, but it's also really annoying that I don't have any physical PCIe slot.

According to that, you can also export the raw disk image and dd it directly to the zvol created by TrueNAS for the VM. But Clonezilla should work too.
Really good catch! Thank you for linking the documentation. I will do some tests before switching for good.

Out of interest, what stops you from using containers instead of VMs? With TrueNAS being available in a Linux flavour as well, the kernel should not be a limiting factor.
Honestly, I'm not sure what the difference is between an LXC container and a VM, but I assume LXC containers aren't as robust as VMs. I'm also not sure about the resource sharing. If I could share the PCIe USB host controller between two containers, I would consider staying on Proxmox, but I doubt it's possible since it isn't between VMs.
 
not sure what the difference is between an LXC container and a VM
For me the question is: "Does the guest system like to run on a Linux kernel?"

If "Yes!", then I spin up a container in seconds, with hardly any resource usage (cheating a bit, because I downloaded the template earlier via local storage --> CT Templates --> "Templates" button --> download from catalog)
If "No :-(", I transfer a fitting ISO to Proxmox, allocate resources and start installing the OS.

I don't run a great many systems, and one container provides quite a few services. Over the years, I created three VMs (one of which is still running) and at least ten times as many containers (also with about a third still active).

An important benefit of containers, as you might know, is that they require fewer resources, since some of the functionality is shared (only one kernel needs to be running for five containers, for example, while five VMs need at least the memory for five OS installations, plus the resources to pretend to be a piece of hardware).

If I could share the PCIe USB host controller between two containers, I would consider staying on Proxmox, but I doubt it's possible since it isn't between VMs.
It is not possible in VMs, because you tell the host OS to give up control of the resource, and pass it on to one other OS.

With containers, they all share one kernel, with all of its features. Some features are not exposed to guest containers by default. I have not yet had the need to share PCIe devices with containers.

Searching for "LXC PCIe passthrough", I found a post that explains how to do this for graphics cards, but since those are PCIe devices just like your USB controller, I'd expect it to be of service in your case as well. To cite from that page:
When talking about GPU passthrough, it’s generally about PCIe passthrough: Passing a PCIe device from the host into the guest VM, such that the guest takes full control of the device. This however limits us to passing the device through only to a single VM. The host loses all access to the GPU once passed through. This not only means that the host can’t use the device, but also that it can’t be passed to other VMs. When passing through to LXC containers, the host OS is what handles the device communication, so several LXC containers can have access to the GPU.
Give it a try!
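For the Zigbee stick specifically, the classic way to hand a USB serial device to a container is a couple of lines in the container's config; a sketch assuming the stick shows up as /dev/ttyUSB0 (the container ID 210 and the device path are placeholders; 188 is the usual major number for USB serial devices):

# /etc/pve/lxc/210.conf
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file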
 
