Possible to passthrough Thunderbolt controller to VM if not in isolated IOMMU group?

soupdiver

Member
Feb 24, 2021
I have a WRX 80 Creator mainboard with an onboard Thunderbolt 4 controller.

I'm trying to pass it through to a Windows VM to make connectivity easier.

I assigned the Thunderbolt NHI and USB controller to the VM and installed the Thunderbolt drivers for my mainboard. The controller does show up in Windows, but connected devices do not show up.

I have already read a bunch of other threads on the topic, and it was mentioned that this only works when the controller is in its own IOMMU group.

When I do

```
ls /sys/kernel/iommu_groups/*/devices

/sys/kernel/iommu_groups/0/devices:
0000:60:01.0  0000:61:00.0  0000:62:01.0  0000:62:03.0  0000:64:00.0  0000:65:01.0  0000:66:00.0  0000:97:00.0
0000:60:01.1  0000:62:00.0  0000:62:02.0  0000:63:00.0  0000:65:00.0  0000:65:04.0  0000:67:00.0
```

I can see that multiple devices are in this group. The 60 and 63 devices are the Thunderbolt controllers, but the other devices are LAN and other stuff. Is this the reason why it does not work?

Can I somehow put the Thunderbolt devices in their own group?

When I connect a USB device to the Thunderbolt hub, I can pass that device through by itself, but not the whole controller.

Any ideas?
 
You cannot share devices in the same IOMMU group between VMs and/or the Proxmox host. It is fine to pass through only one device of an IOMMU group; it's just that the Proxmox host loses access to all the other devices in the same group. It should not interfere with the working of the device. You might as well pass the other devices to the same VM, because they cannot be used by anything else. Your motherboard (BIOS and physical PCIe layout) determines the IOMMU groups.
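
If you want to double-check which devices share a group, together with their names, a loop like this works (just a convenience sketch):

```
# Print every IOMMU group with the lspci description of its members
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
    done
done
```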

Your TB controller might not reset properly (or not work with passthrough at all). Try early binding it to vfio-pci to make sure Proxmox does not touch the device before the VM starts. Check with lspci -k to make sure vfio-pci is the driver in use before starting the VM (and after a reboot of the Proxmox host). Maybe you can get it to work once that way.
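
Early binding usually looks something like this (a rough sketch; the 8086:xxxx IDs below are placeholders, use the actual vendor:device IDs that lspci -nn prints for the Thunderbolt NHI and USB controller):

```
# 1) Look up the numeric [vendor:device] IDs of the Thunderbolt devices
lspci -nn | grep -i thunderbolt

# 2) Bind those IDs to vfio-pci at boot (placeholder IDs, replace with yours)
echo "options vfio-pci ids=8086:xxxx,8086:yyyy" > /etc/modprobe.d/vfio.conf

# 3) Rebuild the initramfs and reboot, then verify with lspci -k
update-initramfs -u -k all
```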
 
> You cannot share devices in the same IOMMU group between VMs and/or the Proxmox host.
I see, but is it normal that the controller shows up in the VM but then just does not work properly?

> Your TB controller might not reset properly (or not work with passthrough at all). Try early binding it to vfio-pci to make sure Proxmox does not touch the device before the VM starts.

Code:
60:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
    Subsystem: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
60:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Device 164f (rev 01)
    Subsystem: Advanced Micro Devices, Inc. [AMD] Device 164f
60:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    Kernel driver in use: vfio-pci
60:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    Kernel driver in use: pcieport
60:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    Kernel driver in use: vfio-pci
60:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    Kernel driver in use: vfio-pci
60:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    Kernel driver in use: pcieport
60:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    Kernel driver in use: pcieport
60:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    Kernel driver in use: pcieport
60:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    Kernel driver in use: vfio-pci
60:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    Kernel driver in use: vfio-pci
60:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    Kernel driver in use: vfio-pci
60:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    Kernel driver in use: pcieport
60:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    Kernel driver in use: vfio-pci
60:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    Kernel driver in use: pcieport
61:00.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
    Kernel driver in use: pcieport
62:00.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
    Kernel driver in use: pcieport
62:01.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
    Kernel driver in use: pcieport
62:02.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
    Kernel driver in use: pcieport
62:03.0 PCI bridge: Intel Corporation Thunderbolt 4 Bridge [Maple Ridge 4C 2020] (rev 02)
    Kernel driver in use: pcieport
63:00.0 USB controller: Intel Corporation Thunderbolt 4 NHI [Maple Ridge 4C 2020]
    Subsystem: Intel Corporation Device 0000
    Kernel driver in use: vfio-pci
    Kernel modules: thunderbolt

I also tried adding the thunderbolt module to /etc/modprobe.d/blacklist.conf, but it still shows up in the output.
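
For reference, the entry I added is just this single line (as far as I understand, the initramfs should also be rebuilt with update-initramfs -u -k all afterwards so it applies at boot):

```
# appended to /etc/modprobe.d/blacklist.conf
blacklist thunderbolt
```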



Output of find /sys/kernel/iommu_groups/ -type l:
Code:
/sys/kernel/iommu_groups/0/devices/0000:62:02.0
/sys/kernel/iommu_groups/0/devices/0000:60:01.1
/sys/kernel/iommu_groups/0/devices/0000:62:01.0
/sys/kernel/iommu_groups/0/devices/0000:63:00.0
/sys/kernel/iommu_groups/0/devices/0000:62:00.0
/sys/kernel/iommu_groups/0/devices/0000:60:01.0
/sys/kernel/iommu_groups/0/devices/0000:97:00.0
/sys/kernel/iommu_groups/0/devices/0000:62:03.0
/sys/kernel/iommu_groups/0/devices/0000:61:00.0
/sys/kernel/iommu_groups/19/devices/0000:40:08.1
/sys/kernel/iommu_groups/47/devices/0000:01:00.0
/sys/kernel/iommu_groups/37/devices/0000:00:02.0
/sys/kernel/iommu_groups/9/devices/0000:d5:00.0
/sys/kernel/iommu_groups/27/devices/0000:20:07.0
 
> 63:00.0 USB controller: Intel Corporation Thunderbolt 4 NHI [Maple Ridge 4C 2020]
>     Subsystem: Intel Corporation Device 0000
>     Kernel driver in use: vfio-pci
>     Kernel modules: thunderbolt
This looks fine, if it's after a reboot of the host and before starting the VM, because the driver in use is vfio-pci. That the actual driver for the device is thunderbolt is not important.

EDIT: I can't really help with this device or Windows, sorry. I just wanted to let you know that it's not because of other devices in the same IOMMU group.
 

> This looks fine, if it's after a reboot of the host and before starting the VM, because the driver in use is vfio-pci. That the actual driver for the device is thunderbolt is not important.
Yes, that's how it looks after a reboot, but devices don't show up in Windows.

Another thing I wonder about:
After a reboot I can see the USB controller and network interface of the Thunderbolt hub in lspci:

Code:
66:00.0 USB controller: Fresco Logic FL1100 USB 3.0 Host Controller (rev 10)
    Subsystem: Belkin FL1100 USB 3.0 Host Controller
    Kernel driver in use: xhci_hcd
    Kernel modules: xhci_pci
67:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
    Subsystem: Belkin I210 Gigabit Network Connection
    Kernel driver in use: igb
    Kernel modules: igb

Are those expected to show up even when I blacklist the thunderbolt module?

Edit: After starting the VM those devices disappear from the lspci output. I guess that is expected when the VM takes over control of the controller.
But still... no connectivity or errors in the VM itself :/

Edit 2: As a next step I tried to pass through the Belkin USB controller, and that actually seems to work.
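
In case it helps anyone, on the CLI this kind of passthrough would look roughly like this (a sketch; the VM ID 100 is just an example, and pcie=1 assumes a q35 machine type):

```
# pass the Belkin/Fresco Logic USB controller (66:00.0 from lspci above) to the guest
qm set 100 --hostpci0 0000:66:00.0,pcie=1
```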

However, I still have one last question: my idea is to also run my display over this dock, so I only have one cable for all external connections to that VM.
I assume video is not handled through the USB controller, but nothing that seems to belong to graphics shows up anywhere. That's why I thought passing through the whole controller would be best, and then it maybe "just" works. But now I'm not sure where to continue.
 
Thunderbolt is meant to extend the PCIe lanes out to a device, so you can't pass that internal system handshake to a VM. And since Proxmox is not Windows, you cannot plug or re-plug devices. Some docks can have a function passed to a VM, but most parts do not work. Very specific Proxmox versions worked fine with a GPU in a TB dock passed directly to a Windows VM, but the majority of Proxmox versions do not; 7.1.something and 6.4 were quite good when I used them. But having AMD to begin with is not a good start for TB.
 
> Have you solved the problem?
Not entirely, and not in the way I imagined it.
As @Docop2 pointed out, it does not work by passing through the entire Thunderbolt stack. It still... kinda works... but not ideally.
I passed through the PCIe device that seems to be my Thunderbolt controller, and when I plug things into my dock they actually show up in the VM.
I even connected my GPU with 2 DisplayPort cables to the DP-in of the Thunderbolt controller on the board and plugged an HDMI cable into the dock. I actually got a video signal, but a distorted one. I did not investigate this further...
I still have the setup running, but it has quirks... every now and then the USB-C cable of the hub becomes disconnected, and that brings down the whole VM, because you take away an active PCIe device. This also puts the whole system in a shaky state sometimes... and it needs a reboot.

All in all... I would not recommend it. It seems much more stable to connect things to the dock and pass devices through one by one. My initial idea was to have video and USB and everything with just one passthrough device, but we're not quite there yet, it seems.
 
> Not entirely, and not in the way I imagined it.
> [...]
> All in all... I would not recommend it. It seems much more stable to connect things to the dock and pass devices through one by one.

It would have been interesting to test your configuration with the pcie_acs_override=downstream option enabled in GRUB. It should make it possible to pass through devices that share an IOMMU group. Note that it comes with important security and stability implications.
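
On a GRUB-booted host that would look roughly like this (a sketch; hosts installed on ZFS/UEFI use systemd-boot instead, where the option goes into /etc/kernel/cmdline followed by proxmox-boot-tool refresh):

```
# /etc/default/grub -- append the override to the existing kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_acs_override=downstream"
```

Then run update-grub, reboot, and check the groups again with find /sys/kernel/iommu_groups/ -type l.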
 
