IOMMU groups are rearranged when I connect an eGPU dock (OCuLink)

edjuh75

New Member
Jul 17, 2025
I have a Minisforum MS-A2 running Proxmox 9, freshly installed. I have 3 VMs running on it and so far everything has been working great.
Yesterday I plugged in an OCuLink eGPU to pass through to a Windows 11 VM. That worked flawlessly. Yet the SSD that I pass through to another VM didn't work anymore, because the eGPU rearranged IOMMU ID 0000:09:00 to 0000:08:00. I changed the passthrough entry and that solved the problem, and for the rest of the day and night it ran beautifully.

Today I wanted to connect the eGPU again. So I did and booted up the system. No such luck: the eGPU probably rearranged the IDs again, and now the SSD my system boots from (ZFS tank) won't start anymore. I tried to solve it by running a live rescue disk to rescue the pool, but with no luck.

My question: if I reinstall Proxmox, is there a way to prevent this same behaviour in the future? I would very much like to connect and disconnect an eGPU dock for my Windows 11 VM.
 
It's not the IOMMU group numbers but the PCI IDs. Yes, adding or removing PCI(e) devices can cause other/later IDs to shift. This is not something Proxmox (or Linux) can control; it is determined by the motherboard, the BIOS/UEFI, and the physical PCI(e) lane/chip/multiplexer layout.
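To make the distinction concrete: both pieces of information are visible in sysfs. This little loop (harmless to run; it prints nothing if the IOMMU is disabled or you're in a container) lists every PCI device's address next to the IOMMU group the kernel placed it in:

```shell
# Print "group N: device 0000:BB:DD.F" for every PCI device that the
# kernel has assigned to an IOMMU group. When IOMMU is off, the
# /sys/kernel/iommu_groups directory is empty and nothing is printed.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue
    group=$(basename "$(dirname "$(dirname "$d")")")
    printf 'group %s: device %s\n' "$group" "$(basename "$d")"
done
```

The device part of each line (e.g. `0000:09:00.0`) is the PCI ID that shifts when the firmware re-enumerates the bus; the group number is what IOMMU isolation actually cares about.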
 
Well, I don't mind that it shifts something; a consistent shift to a PCI ID that is still unused would do it. I can't seem to influence the choices in the BIOS, so I hoped that in Linux/Proxmox I could tweak something so it would always make the same choice. :)
So you are saying: with an eGPU, no Proxmox?
 
So you are saying: with an eGPU, no Proxmox?
No, but as a search of this forum will show, PCI(e) passthrough and hotplug of PCI(e) devices (like Thunderbolt) are problematic. I just wanted to correct the difference between IOMMU groups and PCI IDs, and to point out that PCI IDs are determined by the motherboard. Please contact your motherboard manufacturer (who also supplies the BIOS/UEFI) if the PCI IDs are causing your problems.
If you are not using PCI(e) hotplug, then just connect the eGPU and correct the PCI IDs for each VM that uses PCI(e) passthrough (and correct /etc/network/interfaces if the network device names changed, since they encode the PCI path). I can only warn you that PCI IDs change when adding (or removing) PCI(e) devices, and explain why and how they change. It's your motherboard (manufacturer) that's causing the problems.
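For the "correct the PCI IDs" step: each VM's passthrough entries are `hostpciN` lines in its config under `/etc/pve/qemu-server/`. Here is a hypothetical sketch of the edit (VM ID 101 and the 09→08 shift are taken from this thread; the `/tmp` stand-in file is only so the example is self-contained):

```shell
# Stand-in for /etc/pve/qemu-server/101.conf (hypothetical VM 101):
cat > /tmp/101.conf <<'EOF'
hostpci0: 0000:09:00.0,pcie=1
EOF

# After the eGPU shifts the device from 09:00 to 08:00, point the
# passthrough entry at the new address. On a real node you could run
# `qm set 101 -hostpci0 0000:08:00.0,pcie=1` instead of editing:
sed -i 's/0000:09:00/0000:08:00/' /tmp/101.conf
cat /tmp/101.conf
```

Run `lspci` before and after connecting the eGPU to see which addresses actually moved, then repeat the fix for every VM config that references an old ID.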
 
I don't hotplug the eGPU. The SSD whose ID is rearranged is not passed through to a VM; it is the boot SSD with the ZFS tank. If I could pin that one PCI ID I would be happy. (The Minisforum MS-A2 only has one PCIe x16 slot for expansion.)
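One thing that may make the boot pool survive ID shifts: ZFS doesn't care about PCI addresses if the pool's vdevs are recorded by stable `/dev/disk/by-id` paths, which an import with `-d /dev/disk/by-id` establishes. A rescue-environment sketch (the pool name `rpool` is an assumption; check the output of the first `zpool import` for the real name):

```shell
#!/bin/sh
# Re-import the pool using stable disk IDs so the recorded vdev paths
# no longer depend on PCI enumeration order. Exits quietly where the
# ZFS tools aren't installed (e.g. outside a rescue environment).
command -v zpool >/dev/null 2>&1 || { echo "zpool not found, skipping"; exit 0; }

zpool import -d /dev/disk/by-id           # scan: list importable pools
zpool import -d /dev/disk/by-id -f rpool  # import using by-id paths
zpool status rpool                        # vdevs should now show by-id names
```

After that, `zpool status` should list devices like `nvme-Samsung_SSD_..._SERIAL` instead of enumeration-order names, and a shifted PCI address should no longer prevent the import at boot.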