16 PCI Device limitation

bpinjc

New Member
Sep 5, 2024
Hello,

I have installed Proxmox on a 24-bay NVMe U.2 system and have configured TrueNAS as a VM with passthrough for the NVMe drives. The challenge is that I have 21 drives, and Proxmox supports only 16 PCI devices attached to a VM in passthrough mode. I need to see all 21 drives. Right now I am attaching each drive individually to the VM as a PCI device.
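For reference, each passed-through drive becomes its own hostpci entry in /etc/pve/qemu-server/<vmid>.conf, roughly like this (the PCI addresses below are placeholders, not my actual layout):

hostpci0: 0000:c1:00.0,pcie=1
hostpci1: 0000:c2:00.0,pcie=1
...
hostpci15: 0000:d0:00.0,pcie=1

and the configuration stops at hostpci15, i.e. 16 devices.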

I played around with the "qm set" approach; however, when I pulled a drive to emulate a failure, the whole VM froze. I did read something about a parameter that may prevent this from happening, but I have to dig more into that. Ultimately, this is not my preferred method.
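(By the "qm set" approach I mean the documented physical disk passthrough, where each drive is handed to the VM as a virtual disk via its stable by-id path; the VMID and disk ID here are placeholders, not my actual values:

qm set 100 --scsi1 /dev/disk/by-id/nvme-<model>_<serial>

That keeps the drives off the PCI device count, but as mentioned, pulling a drive froze the VM for me.)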

My question is: can I use an HBA that supports multiple NVMe U.2 drives, so that by adding a single HBA PCI device I can recognize an additional 6-8 drives using only one PCI device attachment in the VM hardware configuration?

Better yet, is there any way to bypass the 16 PCI device limit?

Appreciate any responses.
 
My question is: can I use an HBA that supports multiple NVMe U.2 drives, so that by adding a single HBA PCI device I can recognize an additional 6-8 drives using only one PCI device attachment in the VM hardware configuration?
I don't think that will make a difference, since the drives will still show up as PCI devices and the HBA is "just" a multiplexer... (though I could be wrong, of course)
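(A quick way to check on the host: if each U.2 drive already shows up as its own PCIe function, a switch/retimer card in front of them won't reduce the number of passthrough entries. Something like

lspci -nn | grep -i "non-volatile"

should print one line per NVMe controller.)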

Better yet, is there any way to bypass the 16 PCI device limit?
No, currently there isn't, short of modifying the code.

You can open a feature request here: https://bugzilla.proxmox.com to increase the number of PCI slots, though. If you do, maybe also describe the use case a little bit. Increasing it shouldn't be hard, but it needs modifications in a few places in the back- and frontend.
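(If you want to see where the 16 comes from: it is a hard-coded maximum in the PVE backend. If memory serves, the constant is named something like MAX_HOSTPCI_DEVICES, and a search on a node such as

grep -rn "MAX_HOSTPCI" /usr/share/perl5/PVE/

should turn up the exact spot; the name and path are from memory and may differ slightly.)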
 
Dominik, thank you for your response. I will file a report. Ultimately, there is a fairly common use case for running Proxmox with TrueNAS as a VM with passthrough drives. TrueNAS offers advanced tuning of ZFS parameters and functions, along with a solid user interface and reporting.

In the past, 8 or more SATA/SAS drives would be recognized by merely adding a single PCI HBA. With the new NVMe U.2 / E1.S SSDs, each drive is recognized directly as an individual PCI device. Given the 128 PCIe lanes that modern CPUs support, and if reasonably possible, my suggestion would be to support up to 30 PCI devices. The logical breakdown is: 24 PCI storage + 4 PCI GPU + 2 PCI network.

Thanks again for the follow up to my questions.

Babak
 
@bpinjc did you file it? I just hit this limit too because of PCIe disks. My understanding is that QEMU supports 32, so I think the feature request would be to match in the UI what QEMU can do.
 
Why don't you run TrueNAS directly on the hardware?
Because TrueNAS doesn't support the NVIDIA GRID drivers, so I have two options:
1. Use Proxmox as the host and have a couple of VMs for my vGPU things, plus the TrueNAS VM.
2. Pass the GPU through to TrueNAS, then pass it through again to a VM where I can install the vGPU drivers, and then run sub-VMs inside that VM to use the GPU.

I chose #1 because Proxmox is the superior virtualization solution. If TrueNAS ever supports the GRID drivers, then yes, I would move TrueNAS to bare metal.