First build for the enterprise

afinan

Jul 1, 2025
Hi,

I work at a place that makes use of VMware on HPE and Dell hardware. I'm pushing for the adoption of Proxmox, and with the recent VMware changes ($$), the company is open to suggestions.

Many of the sysadmins here use Proxmox at home and welcomed the idea; however, to convince corporate I need to present a solid overall solution, starting with the hardware.

Corporate is willing to allocate a budget to purchase a test server, and I'm thinking of this Supermicro GrandTwin:
https://www.supermicro.com/en/products/system/grandtwin/2u/as-2116gt-hntf

It's a 2U chassis with 4 individual nodes, each with 4 x 2.5" bays. I would spec it with:
- AMD Turin (EPYC 9005)
- 1 TB RAM per node (4 TB for the whole chassis)
- 4 x 7.68 TB NVMe per node
- 25G or 100G NICs (multiple ports per NIC, so specific ports can be dedicated to Ceph)

I plan to put Proxmox on this and set up Ceph (open to suggestions, but I need to demonstrate live VM migration between the 4 nodes, so whatever storage we use has to support that).
We have SANs, but I learned that snapshots are not supported with iSCSI. So for now this system will be independent, with both compute and storage on the same hardware (a hyperconverged sort of setup).
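For the live-migration demo itself, `qm migrate` is the standard Proxmox CLI command; a minimal sketch (the VM ID 100 and the target node name "pve2" are made-up placeholders for this example):

```shell
# Live-migrate VM 100 to node "pve2" while it keeps running (hypothetical IDs).
# With shared storage such as Ceph RBD, only the RAM/device state is transferred.
qm migrate 100 pve2 --online
```

The same operation is also available from the web UI, which is usually the more convincing demo for corporate.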

I'm posting this to get feedback from the community; if there are other hardware platforms that fit the bill, I'd gladly take suggestions. I'm eyeing Supermicro because they are generally standards-based, which goes well with the open-source nature of Proxmox.

TIA
 
The following thread has some insights:

https://forum.proxmox.com/threads/ceph-pve-hyperconverged-networking.166261/



You might also want to read the relevant parts of the manual:

https://pve.proxmox.com/wiki/Deploy...r#_recommendations_for_a_healthy_ceph_cluster

https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server

And last but not least, Udo's writeup on small clusters:
 
We run Proxmox for 911 call centers!
We have been running it on Dell R620s and newer, Dell FX2, Dell VRTX, HPE DL20, DL320, and DL380, and HPE Synergy blades/chassis.
Proxmox has lots of nice features for live migrations. We have even migrated running VMs from one Proxmox cluster to a totally different Proxmox cluster across the country!

Ceph as your storage plane is a great choice, but remember: as a best practice, especially for an enterprise mission-critical system, you should account for the following:
OS: 2 dedicated drives in RAID-1
Ceph: min. 3 dedicated SSD or NVMe drives per node for OSDs
Ceph network: a dedicated, isolated network, 10 Gb minimum
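On the Proxmox side, the dedicated Ceph network is something you declare when initializing Ceph with the `pveceph` tool. A minimal sketch (the 10.10.10.0/24 subnet and the NVMe device names are made-up placeholders for this example):

```shell
# Pin Ceph to the isolated storage subnet at init time (hypothetical subnet).
pveceph init --network 10.10.10.0/24

# Then, on each node: create a monitor and one OSD per dedicated NVMe drive.
pveceph mon create
pveceph osd create /dev/nvme1n1
pveceph osd create /dev/nvme2n1
```

Keeping OSD traffic off the VM/management networks is what makes the "isolated network" recommendation above actually hold in practice.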

With Ceph's default 3x replication, only ~33% of your raw capacity is usable, so plan accordingly!