Hoping someone in the community can help with this. I'm unable to get the xe driver to bind to my Intel Arc Pro B50 on Proxmox VE 9.1.4 running the 6.17.2-2-pve kernel (my system won't boot on the 6.17.2-4-pve kernel, so I've pinned to this one for now).
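In case it's relevant, this is how I pinned the kernel (standard proxmox-boot-tool procedure; the version string is from my system):

Code:
# pin the known-good kernel so the host doesn't boot into 6.17.2-4-pve
proxmox-boot-tool kernel pin 6.17.2-2-pve
proxmox-boot-tool refresh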
For starters, here is my hardware:
Motherboard: Asrock Rack GENOAD8X-2T/BCM
CPU: AMD Epyc 9355P
I have enabled SR-IOV and Resizable BAR support in the BIOS; all other virtualization functions are enabled by default on this motherboard. I have a few other devices passed through to other VMs without issue, and I'm even able to pass the B50 through to Windows and Ubuntu VMs, where it works; for example, I can transcode in Emby using the card in an Ubuntu VM. The B50 has been updated to the Q3 2025 firmware (the Q4 firmware causes the number of VFs to drop to 2).
The problem is that on the Proxmox host itself, the xe driver doesn't seem to bind. I have set what I believe to be the correct kernel parameters in /etc/kernel/cmdline: "intel_iommu=on amd_iommu=on iommu=pt mem_encrypt=on kvm_amd.sev=1"
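For completeness, this is how I applied the change (assuming, as on my system, that the host boots through proxmox-boot-tool, which is what reads /etc/kernel/cmdline):

Code:
# apply the edited /etc/kernel/cmdline to the boot entries
proxmox-boot-tool refresh
# after a reboot, confirm the parameters actually took effect
cat /proc/cmdline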
However, when I run lspci -n -s e3:00 -v, this is what I get:
Code:
e3:00.0 0300: 8086:e212 (prog-if 00 [VGA controller])
Subsystem: 8086:1114
Flags: fast devsel, NUMA node 0, IOMMU group 13
Memory at 8de0c000000 (64-bit, prefetchable) [size=16M]
Memory at 8d400000000 (64-bit, prefetchable) [size=16G]
Expansion ROM at f0000000 [disabled] [size=2M]
Capabilities: [40] Vendor Specific Information: Len=0c <?>
Capabilities: [70] Express Endpoint, IntMsgNum 0
Capabilities: [ac] MSI: Enable- Count=1/1 Maskable+ 64bit+
Capabilities: [d0] Power Management version 3
Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
Capabilities: [110] Null
Capabilities: [200] Address Translation Service (ATS)
Capabilities: [420] Physical Resizable BAR
Capabilities: [220] Virtual Resizable BAR
Capabilities: [320] Single Root I/O Virtualization (SR-IOV)
Capabilities: [400] Latency Tolerance Reporting
Kernel modules: xe
As you can see, there is no "Kernel driver in use" line on the PVE host, even though the xe module is listed. I would like to use the card's SR-IOV function to share it with a few VMs, but without the driver binding to the physical function, I'm stuck at this step. I've spent about a week on this with no luck and am really hoping someone can point me in the right direction; the manual-bind approach I've been looking at is below. Thanks in advance.
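From what I've read, this is the usual way to force-probe and manually bind the xe driver, then create VFs (a sketch only, not verified on my card; the e212 force_probe ID and the 0000:e3:00.0 address come from my lspci output above, and the VF count of 4 is just an example):

Code:
# load xe, allowing it to probe device IDs it doesn't claim by default
modprobe xe force_probe=e212

# point the device at the xe driver and attempt a manual bind
echo xe > /sys/bus/pci/devices/0000:e3:00.0/driver_override
echo 0000:e3:00.0 > /sys/bus/pci/drivers/xe/bind

# check the kernel log for why the bind fails, if it does
dmesg | tail -n 50

# once the PF is bound, VFs should be creatable through sysfs
echo 4 > /sys/bus/pci/devices/0000:e3:00.0/sriov_numvfs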