Running multiple VMs on one NVMe via an NVMe-to-PCIe 3.0 adapter (question)

KBlast

Jan 18, 2021
Hello, I am new to Proxmox and VMs.

I have 2x 480 GB M.2 NVMe SSDs (Corsair Force MP510), 1x 1 TB NVMe SSD (Samsung 970 EVO Plus), and 3x M.2-to-PCIe 3.0 adapters. There is no M.2 slot on my motherboard.

For VMs that I want to run concurrently, can I run multiple VMs per NVMe drive? If so, is that accomplished via partitioning and/or through Proxmox? If I can run multiple VMs from the same NVMe, I may try to get adapters that can hold two or even three M.2 drives at once. On PCIe 3.0, what would be the upper limit of VMs that could run through one slot while accessing the NVMe drives?

Or am I limited to passing through one PCIe slot per VM, so that each NVMe can only be accessed by whichever VM it is passed through to?

Thanks!
 
A follow-on question: I'll have FreeNAS virtualized and I'll pass through an HBA PCIe card (LSI SAS9207-8i), so all of those HDDs will be for FreeNAS, but I will also have some regular SSDs connected to the motherboard via normal SATA.

Could those normal SSDs service multiple VMs? I'm also curious whether I need to partition them myself, or whether Proxmox simply allocates the space. Any idea how many VMs can run at once on a single typical SSD?
 
KBlast said:
Can I run multiple VMs per NVMe? If so, is that accomplished via partitioning and/or through Proxmox? ... On PCIe 3.0, what would be the upper limit of VMs that could run through one slot while accessing the NVMe drives?
Keep in mind how these PCIe adapter cards work. An NVMe SSD needs four PCIe lanes. There are cards with a built-in controller (a PCIe switch) that can do the lane splitting on the card itself. If you get such an expensive card with a controller, you can add it to a PCIe x8 or x16 slot and it will split the 16 lanes into 4x x4, so you can use four NVMe SSDs. But there are also cards without a controller that just route the lanes from the PCIe slot to the M.2 slots. With such a card, your mainboard needs to support bifurcation; otherwise you will only be able to use one of the two or four M.2 slots, even if the card is sitting in an electrically connected x16 slot.
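If you want to sanity-check what each drive actually negotiated once everything is installed, something like this works on the Proxmox host (plain lspci, run as root; nothing assumed beyond the drives showing up as standard NVMe controllers):

Code:
# For each NVMe controller: LnkCap = what the device/slot can do,
# LnkSta = what was actually negotiated. Each drive should show Width x4.
for dev in $(lspci | awk '/Non-Volatile memory/ {print $1}'); do
    echo "== $dev =="
    lspci -vv -s "$dev" | grep -E 'LnkCap:|LnkSta:'
done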
You can run several VMs from one NVMe SSD, as long as you are fine with virtualized disks and don't want to PCI passthrough the NVMe SSD.
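As a rough sketch of that route (the device path /dev/nvme0n1 and the names below are placeholders, check yours with lsblk first): put an LVM thin pool on the whole drive and register it as a storage, and every VM disk you create there is just a thin volume on the same SSD.

Code:
# WARNING: this wipes the drive. Device path and names are examples only.
sgdisk --zap-all /dev/nvme0n1            # clear any old partition table
pvcreate /dev/nvme0n1                    # whole disk as LVM physical volume
vgcreate nvme480a /dev/nvme0n1           # volume group on it
lvcreate -l 95%FREE -T nvme480a/data     # thin pool, leave a little headroom
pvesm add lvmthin nvme480a-thin --vgname nvme480a --thinpool data

The same can be done from the GUI under the node's Disks > LVM-Thin; afterwards you just pick that storage when creating each VM's disk.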
KBlast said:
Or am I limited to passing through one PCIe slot per VM, so that each NVMe can only be accessed by whichever VM it is passed through to?
If you want to pass through an NVMe, you will need one NVMe per VM.
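For reference, passing a whole NVMe controller to one VM looks roughly like this (IOMMU must be enabled, the VM should use the q35 machine type for pcie=1, and the PCI address and VMID here are made-up examples):

Code:
# Find the PCI address of the NVMe controller you want to hand over
lspci -nn | grep -i 'non-volatile'
# Give the whole controller to VM 100; the host and all other VMs lose access to it
qm set 100 -hostpci0 0000:01:00.0,pcie=1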
KBlast said:
Could those normal SSDs service multiple VMs? ... Any idea how many VMs can run at once on a single typical SSD?
Yes, you can use the SSDs for multiple VMs, and Proxmox can partition them for you. There is no fixed number of VMs per SSD; it comes down to capacity and how much IO your VMs generate.
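One common setup, just as a sketch (the by-id paths are placeholders for your actual SSDs): mirror the two SATA SSDs with ZFS and add the pool as a storage, then any number of VMs can allocate disks from it. The GUI does the same under the node's Disks > ZFS.

Code:
# Mirrored pool over two SATA SSDs (by-id paths survive reboots, /dev/sdX may not)
zpool create ssdpool mirror \
    /dev/disk/by-id/ata-EXAMPLE_SSD_1 /dev/disk/by-id/ata-EXAMPLE_SSD_2
# Register it with Proxmox for VM disks and container volumes
pvesm add zfspool ssdpool --pool ssdpool --content images,rootdir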
 
My adapters are 1:1 right now; they only hold one NVMe each, but to save PCIe 3.0 slots I am thinking of getting a card that holds more. Are you saying that an adapter holding two or more NVMe drives either needs its own controller, or my motherboard must support bifurcation?

My motherboard is an ASRock EP2C612 WS. I am not 100% sure it supports bifurcation, and I am still waiting for my RAM to arrive, so I can't boot into the BIOS to check.

I did locate the manual and found a screenshot of one BIOS page (see attached). I can't see the options for each selection; the manual doesn't list them, it only says "The default is [Auto]".

Hoping I can confirm this. If it would work, I am thinking of sending back these single-slot adapters and picking up an ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card V2 (supports four NVMe M.2 drives). Any suggestions on cards that hold 2-4 NVMe M.2 drives?
 

Attachments

  • bios_pcie.JPG
KBlast said:
Are you saying that an adapter holding two or more NVMe drives either needs its own controller, or my motherboard must support bifurcation?
Yes.
KBlast said:
My motherboard is an ASRock EP2C612 WS. I am not 100% sure it supports bifurcation. ... Any suggestions on cards that hold 2-4 NVMe M.2 drives?
If it supports bifurcation, you should be able to select a single slot in the BIOS and switch it between values like x16, x8x4x4, x4x4x4x4 and so on.
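Once the card is installed and the slot is set to x4x4x4x4, an easy way to confirm it worked is to check that every drive actually enumerates (nvme list needs the nvme-cli package):

Code:
lspci | grep -i 'non-volatile'   # expect one line per NVMe controller
nvme list                        # expect one row per drive

If a drive is missing here, the slot is most likely not bifurcating.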
 
