Hi everyone,

I'm working on a setup with H200 SXM GPUs (HGX platform) on a Proxmox VE host, and I'd like to clarify the correct way to assign GPUs to multiple VMs.

My goal:

I want to configure the system so that:

- Multiple VMs run on Proxmox
- Each VM is assigned dedicated full GPUs (not shared)
  (for example: VM1 = 2 GPUs, VM2 = 2 GPUs; see the sketch just below)
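
Concretely, the mapping I have in mind looks roughly like the following, using Proxmox's qm tool. The VM IDs (101, 102) and PCI addresses here are placeholders, not my real ones:

```bash
# Hypothetical VM IDs and PCI addresses; substitute the ones lspci shows.
# VM 101 gets two dedicated GPUs:
qm set 101 --hostpci0 0000:17:00.0,pcie=1
qm set 101 --hostpci1 0000:2a:00.0,pcie=1

# VM 102 gets two other dedicated GPUs:
qm set 102 --hostpci0 0000:3d:00.0,pcie=1
qm set 102 --hostpci1 0000:50:00.0,pcie=1
```

(pcie=1 assumes the VMs use the q35 machine type.)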

What I have tried:

- PCI passthrough works correctly when assigning GPUs to a single VM (the usual host-side prep is sketched after this list)
- I successfully passed GPUs to Debian and Ubuntu VMs
- I am not trying to share a single GPU between VMs (no MIG / no vGPU)
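
For reference, the host-side prep is the standard VFIO setup. This is only a sketch: it assumes a GRUB-booted Intel host (systemd-boot installs use /etc/kernel/cmdline instead, and AMD hosts use amd_iommu), and 10de:xxxx is a placeholder for whatever lspci -nn actually reports:

```bash
# Enable the IOMMU via the kernel command line (in /etc/default/grub):
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# then run: update-grub

# Find the vendor:device IDs of the GPUs:
lspci -nn | grep -i nvidia

# Bind the GPUs to vfio-pci at boot so the host driver never claims them.
# 10de:xxxx is a placeholder; use the ID printed by lspci -nn.
echo "options vfio-pci ids=10de:xxxx" > /etc/modprobe.d/vfio.conf
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u -k all
```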

My question:

- Is it fully supported to assign multiple full GPUs to different VMs on Proxmox when using H200 SXM?
- Are there any limitations related to:
  - IOMMU groups (a quick grouping check is sketched after this list)
  - HGX / SXM architecture
  - NVSwitch / NVLink topology
- Is there any recommended configuration to ensure stability when splitting GPUs across VMs?
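
On the IOMMU point: each GPU has to sit in its own IOMMU group, or share one only with devices that move to the same VM. The standard way to inspect the grouping on the host is a sysfs walk like this:

```bash
# List every PCI device with its IOMMU group. Devices that share a group
# cannot be split across different VMs.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}
    g=${g%%/*}
    printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
done | sort -V
```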

What I’m looking for:

- Best practices for assigning multiple GPUs per VM
- Any known issues with H200 SXM on Proxmox in this setup
- Confirmation that this architecture is valid for production

Any guidance or real-world experience would be greatly appreciated.

Thanks!
 
I don't have access to such systems, but I think you could either do vGPU via AI Enterprise or pass everything through; see this note in the NVIDIA docs:

> HGX platforms only support VMs configured in full PCIe passthrough, that is, assigning the entire HGX board to a single VM on supported hypervisors. Partial-GPU passthrough isn’t supported. vGPU C-Series VMs with 1, 2, 4, or 8 GPUs per VM are only supported on VMware vSphere.
https://docs.nvidia.com/ai-enterprise/release-6/6.2/support/support-matrix.html#supported-platforms
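
To make the "pass everything through" route concrete: per that note, the whole board (all eight GPUs plus the NVSwitch devices) would go to a single VM, with NVIDIA Fabric Manager running inside the guest so the NVLink fabric comes up. A rough sketch, all addresses hypothetical and unverified on real hardware:

```bash
# Hypothetical: the entire HGX board to a single VM (ID 110).
qm set 110 --hostpci0 0000:17:00.0,pcie=1
qm set 110 --hostpci1 0000:2a:00.0,pcie=1
# ... repeat for the remaining six GPUs ...
qm set 110 --hostpci8 0000:c0:00.0,pcie=1   # an NVSwitch (placeholder address)
# ... and the remaining NVSwitch devices ...
# In the guest: install the driver plus nvidia-fabricmanager, then verify
# the NVLink topology with `nvidia-smi topo -m`.
```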