Hello everyone,
I’m trying to understand the correct expectations and support boundaries for GPU passthrough on DGX / HGX B300 platforms when using Proxmox (KVM/QEMU).
Environment:
- Proxmox VE 9.1.4
- Machine type: q35
- BIOS: OVMF (UEFI)
- IOMMU enabled, vfio-pci used
- Host platform: NVIDIA DGX / HGX B300 (Blackwell)
- Guest OS: Ubuntu 24.04
- NVIDIA proprietary driver installed in guest
Test scenario:
- Assign a single B300 GPU to a VM using vfio-pci
- VM boots normally
- GPU is visible inside the VM via lspci
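For reference, the passthrough is configured with the standard Proxmox options (a sketch of the relevant VM config lines; the VMID and PCI address are illustrative, adjust to your host):

Code:
# /etc/pve/qemu-server/<vmid>.conf (excerpt)
machine: q35
bios: ovmf
hostpci0: 0000:01:00.0,pcie=1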
Inside the VM:
Code:
lspci -vv -s 01:00.0
01:00.0 3D controller: NVIDIA Corporation Device 3182 (rev a1)
Control: I/O- Mem+ BusMaster-
Region 0: Memory at 2000000000 (64-bit, prefetchable) [size=64M]
Region 4: Memory at 2004000000 (64-bit, prefetchable) [size=32M]
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
Code:
dmesg | grep -i nvidia
NVRM: This PCI I/O region assigned to your NVIDIA device is invalid
nvidia: probe of 0000:01:00.0 failed with error -1
NVRM: None of the NVIDIA devices were initialized
Code:
nvidia-smi
→ NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver
Observations:
- The GPU is enumerated correctly as a PCI device
- BARs are present (not 0M @ 0x0), so this is NOT a simple MMIO sizing issue
- However, BusMaster is disabled in the guest
- NVIDIA driver refuses to bind and initialize the device
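The BusMaster state can also be confirmed directly from the PCI command register (bit 2 is Bus Master Enable). A small sketch, assuming the GPU sits at 01:00.0 as in the lspci output above and that you run it as root inside the VM:

```shell
# Prints "enabled" or "disabled" for a hex PCI command-register value
# (bit 2 / 0x4 is Bus Master Enable).
bus_master_set() {
  if [ $(( 0x$1 & 0x4 )) -ne 0 ]; then
    echo enabled
  else
    echo disabled
  fi
}

# Inside the VM (requires root; 01:00.0 is the GPU address from lspci above):
# bus_master_set "$(setpci -s 01:00.0 COMMAND)"
```

In my case this reports "disabled", consistent with the `BusMaster-` flag in the lspci Control line.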
For comparison:
- The same passthrough approach works on Proxmox with an NVIDIA L40S once PCI MMIO (OVMF X-PciMmio64Mb) is increased
- On B300, increasing MMIO (even to 256 GB) does not change the behavior
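For completeness, the MMIO increase was applied the usual way, via OVMF's fw_cfg knob in the VM's args line (value in MiB; 262144 MiB = 256 GB, the largest value I tested):

Code:
# /etc/pve/qemu-server/<vmid>.conf (excerpt)
args: -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=262144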
Questions:
- Is single-GPU passthrough on DGX / HGX B300 platforms expected to work with Proxmox/KVM?
- Is full-board passthrough (entire HGX) the only supported model on these platforms?
- Is the disabled BusMaster state a known limitation for partial passthrough on fabric-based GPUs (NVSwitch / NVLink)?
- Has anyone successfully initialized an NVIDIA B300 GPU inside a KVM VM using vfio-pci?
At this point I’m not looking for a workaround, but for clarification:
- Is this a Proxmox/QEMU limitation?
- Or an NVIDIA platform/driver limitation by design?
Thanks in advance.