Max GPU passthrough limit?

Is there any limit to the number of GPUs that can be passed through to a single VM?
I've got 12 GPUs set up for passthrough, all recognised and correctly grouped by IOMMU (quick check below).
I can pass 10 GPUs plus 1 default display to a single VM, but I cannot add the 2 others.
I've tried SeaBIOS and UEFI, same result.

If I add even one more, I get the message: "guest has not initialised display yet".

If I create another VM, the last 2 GPUs can be passed through with no problem.

Proxmox 7
VM guest: Ubuntu
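
For reference, the IOMMU grouping can be double-checked from the host shell with something like this (plain sysfs plus lspci, nothing Proxmox-specific):

# print each IOMMU group and the PCI devices it contains
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done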
 
The maximum passthrough limit is 16 PCI devices according to $PVE::QemuServer::PCI::MAX_HOSTPCI_DEVICES, so it is not obvious to me where the limit you are hitting comes from.
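
If you want to double-check that on your own node, something like this should show where the constant is defined (the module path is my assumption for a PVE 7.x install and may differ between versions):

# look up the backend passthrough limit in the installed Perl module
grep -n MAX_HOSTPCI_DEVICES /usr/share/perl5/PVE/QemuServer/PCI.pm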

What board is this? I've seen 8 PCIe slots, but 12????
 
I know there are these M.2 GPUs and PCIe x16 to 4× M.2 (x4 each) adapter cards, but I guess that's not what's used here ^^
I'd also like to hear how you got 12 GPUs working in a single system.
 
So I will test Ubuntu on bare metal to see if the problem comes from this specific machine.

PCIe passthrough works for all 12 GPUs, which is within $PVE::QemuServer::PCI::MAX_HOSTPCI_DEVICES, but not to a single VM. I need 2 VMs to use all of them, at a maximum of 10 GPUs per VM, so 2 licenses.
Is there any known limit per single VM?

Does this parameter include other PCI devices, such as network adapters?

You can have up to 20 PCIe slots on this machine. Is there any solution to bypass the 16 PCIe passthrough device limit in Proxmox?

I'm trying to virtualise frame servers for rendering:
use the machine for design first, then shut those VMs off and start them again as frame servers.

Any help welcome
 
Why do you want to have the additional virtualization layer if you pass everything through to one single VM? What are the benefits there?
 
Why do you want to have the additional virtualization layer if you pass everything through to one single VM? What are the benefits there?
It's not the question, but:
you can design all day and render all night on the same bare-metal machine without buying a new one. Backups are easier, and snapshots are fast in case of instability problems.
 
you can design all day and render all night on the same bare-metal machine without buying a new one. Backups are easier, and snapshots are fast in case of instability problems.
All that can also be achieved by non-KVM-based virtualization. ZFS also works outside of PVE (so snapshots and send/receive as backups). You could e.g. use Docker for different workloads (there are special Docker enhancements for GPGPU from NVIDIA), use different users, or even dual-boot, etc.
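
As a rough sketch of what that NVIDIA/Docker route looks like (this assumes the NVIDIA Container Toolkit is set up on the host; the image tag is just an example):

# expose all host GPUs to a CUDA container and list them
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
# or pin the container to specific GPUs
docker run --rm --gpus '"device=0,1"' nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi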

But back to the original question:

I can pass 10 GPUs plus 1 default display to a single VM, but I cannot add the 2 others.
Can't you add more via the GUI, or are the others simply not recognized if you add them manually in your vm.conf?
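
For comparison, adding the missing devices from the CLI would look roughly like this (VM ID and PCI addresses are placeholders; each command ends up as a hostpciN line in /etc/pve/qemu-server/<vmid>.conf):

# add an 11th and 12th GPU as hostpci10 / hostpci11 (pcie=1 needs the q35 machine type)
qm set 100 --hostpci10 0000:c1:00.0,pcie=1
qm set 100 --hostpci11 0000:c2:00.0,pcie=1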
 
I am looking to build a remote graphics server for my engineering department. I have an HP Z840 workstation and I am planning on putting (3) HP multi-MXM cards in. This would allow me to pass through (12) Quadro P4000 MXMs to the virtual desktops; the multi-MXM cards are x4/x4/x4/x4 PCIe lanes, so each Quadro will have 4 PCIe lanes. Each CPU has 40 PCIe lanes, so I should have 80 PCIe lanes available to the system. I will have to put the system GPU in x4 as well, so graphics cards will use 52 PCIe lanes, leaving 28 for the rest of the system. 16 will go to the NVMe storage. The only other card needed is a 10 GbE networking card. Each engineer will get a dedicated workstation in this config. I would be glad to hear any thoughts or comments about trying this, thanks.
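
A quick tally of that lane budget, just restating the numbers above:

2 CPUs x 40 lanes                 = 80 available
12 MXM GPUs x 4 lanes             = 48
system GPU at x4                  =  4
NVMe storage                      = 16
left for the 10 GbE NIC and rest  = 12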
 

What's new in Proxmox VE 7.2

  • Allow assigning up to 16 PCI(e) devices to a VM via the web interface. The previous limit of 5 had already been raised in the backend.
 
I am looking to build a remote graphics server for my engineering department. I have an HP Z840 workstation and I am planning on putting (3) HP multi-MXM cards in. This would allow me to pass through (12) Quadro P4000 MXMs to the virtual desktops; the multi-MXM cards are x4/x4/x4/x4 PCIe lanes, so each Quadro will have 4 PCIe lanes. Each CPU has 40 PCIe lanes, so I should have 80 PCIe lanes available to the system. I will have to put the system GPU in x4 as well, so graphics cards will use 52 PCIe lanes, leaving 28 for the rest of the system. 16 will go to the NVMe storage. The only other card needed is a 10 GbE networking card. Each engineer will get a dedicated workstation in this config. I would be glad to hear any thoughts or comments about trying this, thanks.
Normally, you have at least one mux chip in between that routes your traffic as needed. It's best to go with AMD for PCIe-lane-intensive applications. All Intel-based NVMe systems with up to 48 slots are a total scam.
 
Normally, you have at least one mux chip in between that routes your traffic as needed. It's best to go with AMD for PCIe-lane-intensive applications. All Intel-based NVMe systems with up to 48 slots are a total scam.
Thank you, I am going to try with Intel since I already have the workstation, cards, and such; I would have to buy an AMD-based machine otherwise. But I will keep that in mind if I have trouble, thanks.
 
I don't need much storage, so one Samsung 4 TB NVMe PCIe card will provide all the storage, or I can just go SAS. There are no mux chips on the Z840's primary PCIe slots, so I can get all the MXM graphics cards, the system graphics, and the Mellanox 10 GbE Ethernet on direct CPU PCIe if I go SAS SSD. I don't know if it will work or not, but we are going to try it!


(attachment: z840_arch.JPG)
 
Nice diagram, and it looks good for most things. I thought you were putting in at least two NVMe drives.
Not sure yet, I might. I have an SSD boot drive and one PCIe NVMe in the Hyper-V machine I am looking to replace, and it has worked well. No redundancy, but it's really not needed; nothing is saved on the engineers' desktops.
 
