Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

UK SPEED

Member
Dec 23, 2023
I am planning a deployment on a very large bare-metal server (around 2 TB RAM) to host 50+ VDS, where each VDS runs its own independent Proxmox VE instance.


The key requirement is to manage this from the main host with minimal overhead, no nested-within-nested setups, and stable performance without excessive load.


My main question is:


What is the best virtualization platform on the main server to manage and isolate many Proxmox-based VDS efficiently, without performance penalties or unstable nested virtualization?

From a best-practice perspective, would you recommend:


  • Plain KVM/libvirt
  • Proxmox VE as the main host
  • XCP-ng
  • VMware ESXi
  • Or another approach?

The environment is production-focused, prioritizing predictable performance, strong isolation, and long-term stability.


Any guidance or real-world experience would be greatly appreciated.


Thank you.
 
Is Proxmox VE a recommended and stable approach for hosting dozens of Proxmox-based VDS instances on a single large host?
The target system doesn't really matter, so let it be Proxmox, Debian, whatever. We're successfully running hundreds of VMs on single nodes (which are in fact part of a cluster).
From a design perspective, is this model (many isolated Proxmox instances per VDS) preferable to running all workloads directly under a single Proxmox installation?
You'll definitely have a considerable amount of performance loss due to nested virtualisation.
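If you do go ahead anyway, nested virtualisation at least has to be enabled on the host and the host CPU type exposed to the guests. A minimal sketch for an Intel box (the VMID is a placeholder; AMD hosts use kvm-amd with nested=1 instead):

Code:
# check whether nested virtualisation is already enabled ("Y" or "1" means yes)
cat /sys/module/kvm_intel/parameters/nested

# enable it persistently, then reboot (or reload kvm_intel with all VMs stopped)
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf

# expose the host CPU (incl. VMX/SVM) to a guest that will itself run Proxmox
qm set 101 --cpu host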
What are the key performance, stability, and resource-management considerations at this scale (2 TB RAM, many VDS), especially regarding:
  • CPU scheduling
  • Memory management and overcommit
  • ZFS ARC tuning (if ZFS is used)
That highly depends on the target workload, but overall you'd need and want some in-depth tuning and configuration.
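Just to give an idea of the kind of knobs involved, a sketch (the values are placeholders, not recommendations):

Code:
# cap the ZFS ARC so it doesn't compete with guest RAM (example: 64 GiB)
echo "options zfs zfs_arc_max=68719476736" > /etc/modprobe.d/zfs.conf
update-initramfs -u

# avoid overcommit surprises: disable ballooning so each VDS keeps its
# full allocation (100 is an example VMID)
qm set 100 --balloon 0

# trade KSM's RAM savings for more predictable latency, if that suits the workload
systemctl disable --now ksmtuned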
For the Proxmox instances inside the VDS:
  • Is it generally recommended to rely mainly on LXC containers instead of nested KVM VMs?
  • Are there known limitations or risks with nested virtualization at this scale?
No. Nested virtualisation increases the attack surface and the load on the host system in many directions, but I'd suggest researching that on the wider internet, as it's not a Proxmox-specific topic but a more general technological one.
From real-world experience, would you recommend:
  • One very large Proxmox host, or
  • Splitting this capacity across multiple Proxmox nodes or clusters for better isolation and fault tolerance?
Multiple for sure.

For this specific use case (commercial VDS hosting with Proxmox per VDS), is Proxmox VE considered a better choice compared to alternatives such as plain KVM/libvirt, XCP-ng, or VMware ESXi?
Depends on your requirements. There is no way to answer that without knowing your business model and exact target audience and backends.
 
@fstrankowski
Thank you for the information.

What do you think about running the Proxmox instances (the VDSs) under a main Proxmox installation on the node?
My use case is a hosting company that needs to sell VDSs with Proxmox inside them.
 
There is no difference between running Proxmox inside Proxmox and running Debian inside Proxmox. It's just Linux after all, with some additional toolchains.
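From the host's point of view it is just another Linux VM. A sketch of carving one out (VMID, RAM, cores, disk size and the ISO filename are only examples):

Code:
qm create 201 --name customer-pve01 \
  --memory 65536 --cores 8 --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-zfs:100 \
  --cdrom local:iso/proxmox-ve_8.2-1.iso \
  --ostype l26 --onboot 1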
 
For your use case, which as far as I understood is that you want to offer Proxmox servers to your customers: sure, you have to host these somewhere, and there is no problem whatsoever in using Proxmox for that as well.

E.g.:

Code:
Physical Host
│
└─ Proxmox VE (Host)
   │
   ├─ VM 1: Proxmox VE
   │  ├─ VM 1.1
   │  ├─ VM 1.2
   │  └─ VM 1.3
   │
   ├─ VM 2: Proxmox VE
   │  ├─ VM 2.1
   │  └─ VM 2.2
   │
   ├─ VM 3: Proxmox VE
   │  ├─ VM 3.1
   ...
 
Yes, that’s exactly what I’m already doing on one of my servers.


They suggested that it might be better to run the main server on a different, lighter hypervisor instead of Proxmox.


Also, if I have a server with 24 NVMe drives, do you think passing some of the NVMe drives through directly to the VDS (customer Proxmox) is a good idea?
 
Also, if I have a server with 24 NVMe drives, do you think passing some of the NVMe drives through directly to the VDS (customer Proxmox) is a good idea?
Your questions are highly unspecific and theoretical and thus cannot be answered in a general way :)
 
Hello,


To clarify my previous question, here is the exact scenario we are considering:


We have a single Proxmox host with 24 individual NVMe drives (each drive visible as a separate PCIe/NVMe device, no RAID on the host).


Our idea is the following:


  • Allocate 2× dedicated NVMe drives per customer
  • Pass those NVMe drives directly via PCIe/NVMe passthrough to a single VDS (VM) running customer-managed Proxmox (rough host-side sketch after this list)
  • The NVMe drives would not be part of host ZFS/LVM/Ceph
  • Each customer would fully own and manage their two NVMe drives inside their Proxmox instance (e.g. ZFS mirror, stripe, Ceph OSDs, etc.)
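Concretely, on the host side we imagine something like this (a sketch only; the PCI addresses and VMID are placeholders, and it assumes an Intel host booting via GRUB):

Code:
# enable the IOMMU (AMD hosts use amd_iommu=on) and load the VFIO modules
# /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub
echo -e "vfio\nvfio_iommu_type1\nvfio_pci" >> /etc/modules

# hand two whole NVMe controllers to the customer VM
# (addresses come from lspci; pcie=1 needs the q35 machine type)
qm set 301 --machine q35 --hostpci0 0000:41:00.0,pcie=1
qm set 301 --hostpci1 0000:42:00.0,pcie=1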

The goal is:


  • Maximum storage performance (raw NVMe access)
  • Full storage isolation per customer
  • No overselling or shared storage contention

In this setup:


  • Live migration and host-level snapshots are not required
  • Backups would be handled inside the customer Proxmox, not at host level

From your experience:


  1. Do you see any technical or stability concerns with this approach on Proxmox?
  2. Are there any known limitations or caveats with long-term NVMe passthrough usage in this model?
  3. Would you consider this a reasonable architecture for high-performance, single-tenant VDS offerings?

Looking forward to your technical opinion.
 
I highly suspect your questions come from ChatGPT or something of the sort. This is becoming more like consulting, so I'd call it a day at this point.
 
Do you see any technical or stability concerns with this approach on Proxmox?

Mapping individual disks to VMs is almost never the correct approach. You can and should present a dedicated vdisk on a highly performant, highly available storage option instead of bifurcating it and offering neither. Beyond performance, having a modern CoW filesystem under the vdisks provides fault tolerance, CoW features, survivability and business continuity, but you know the customers and the use case, so your approach may be correct for it. What you describe is doable, but it leaves most of what makes virtualization so useful on the table.
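E.g. simply handing each customer VM its own vdisk out of the host pool (storage name, VMID and size are examples):

Code:
# a dedicated 2000 GiB virtual disk from the host's ZFS (or Ceph) pool
qm set 301 --scsi1 local-zfs:2000,discard=on,ssd=1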

There is one caveat to consider when dealing with NVMe drives specifically: PCIe lanes are pinned to a CPU socket. With the described configuration you'll need to exercise care mapping VM CPU pinning to the socket that owns each passed-through NVMe device.
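A quick way to check and pin this (the PCI address, VMID and core list are examples; --affinity needs a reasonably recent PVE):

Code:
# which NUMA node owns this NVMe controller? (-1 = single node / not reported)
cat /sys/bus/pci/devices/0000:41:00.0/numa_node

# which host cores belong to which NUMA node
lscpu | grep -i numa

# keep the guest's vCPUs on the matching socket
qm set 301 --numa 1 --affinity 0-15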
 
No.

There is almost NEVER a use case for nested hypervisors except for development/lab use. Even if we assume there is no CPU/RAM performance degradation with modern VT extensions (hint: there is), the consequences of cascading memory-space governors and another level of write amplification would destroy any semblance of "performance."
 
It is an okay offering, and I see there is a financial reason for it. Unfortunately nested Proxmox is the only option for it, but I cannot tell you how much performance loss to expect.
Maybe install the OS on a RAID somewhere and add 2× NVMe as passthrough for the VM data.
 
I’m running small Proxmox VPSs inside a larger infrastructure based on 3 physical nodes.


Each Proxmox VPS is placed on a different physical node, and every three VPSs are linked together as a cluster using replication (no Ceph).
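For reference, the replication between the virtual nodes is plain PVE storage replication on ZFS, along these lines (guest ID, target node and schedule are examples):

Code:
# replicate guest 100 to node "pve2" every 15 minutes (ZFS-backed storage on both sides)
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# check configured jobs and their last run
pvesr list
pvesr status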


These are virtual Proxmox environments, not physical servers.