First build for the enterprise

afinan

New Member
Jul 1, 2025
Hi,

I work at a place that runs VMware on HPE and Dell hardware. I'm pushing for Proxmox adoption, and with the recent VMware licensing changes ($$), the company is open to suggestions.

Many of the sysadmins here use Proxmox at home and welcomed the idea; however, to convince corporate I need to show a solid overall solution, starting with the hardware.

Corporate is willing to allocate a budget to purchase a test server, and I'm thinking of this Supermicro GrandTwin:
https://www.supermicro.com/en/products/system/grandtwin/2u/as-2116gt-hntf

It's a 2U chassis with 4 individual nodes, each with 4 x 2.5" bays. I would spec it with:
- AMD Turin (9005)
- 1TB RAM for each node (4TB for whole chassis)
- 4 x 7.68TB NVMe per node.
- 25G or 100G NICs (multiple ports per NIC to dedicate specific ports for Ceph).

I plan to put Proxmox on this and set up Ceph (open to suggestions, but I need to demonstrate live VM migrations between the 4 nodes, so any storage backend needs to support that).
We have SANs, but I learned that snapshots are not supported on iSCSI-backed storage. So for now this system will be independent (both compute and storage on the same hardware, a hyperconverged sort of setup).
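
For the demo itself I'm expecting something along these lines - just a rough sketch, assuming PVE 8.x with the no-subscription repo, a dedicated Ceph network (10.10.10.0/24 used as a placeholder), and placeholder device names / VMID:

Code:
# on every node: install the Ceph packages (no-subscription repo assumed)
pveceph install --repository no-subscription

# on the first node: initialise Ceph with the dedicated cluster network
pveceph init --network 10.10.10.0/24

# on each node: create a monitor and one OSD per NVMe drive
pveceph mon create
pveceph osd create /dev/nvme0n1
pveceph osd create /dev/nvme1n1

# create a replicated pool and register it as PVE storage
pveceph pool create vm-pool --add_storages

# finally, live-migrate a running VM (ID 100 here) to another node
qm migrate 100 node2 --online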

I'm posting this to get feedback from the community. If there are other hardware platforms that fit the bill, I'd gladly take suggestions. I'm eyeing Supermicro because they are generally standards-based, which goes well with the open-source nature of Proxmox.

TIA
 
The following thread has some insights:

https://forum.proxmox.com/threads/ceph-pve-hyperconverged-networking.166261/



You might also want to read the relevant parts of the manual:

https://pve.proxmox.com/wiki/Deploy...r#_recommendations_for_a_healthy_ceph_cluster

https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server

And last but not least, Udo's write-up on small clusters:
 
We run Proxmox for 911 call centers!
We have been running it on Dell R620s and newer, Dell FX2, Dell VRTXs, HPE DL20, DL320, DL380, and Synergy blades/chassis.
Proxmox has lots of nice features for "live migrations". We have even migrated VMs live from one Proxmox cluster to a totally different Proxmox cluster across the country!
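
If you want to show that off too, the command behind it is qm remote-migrate (still marked experimental, so check the qm man page for your PVE version). A rough sketch, where the VMIDs, API token, fingerprint, bridge and storage names are all placeholders:

Code:
# live-migrate VM 100 to a node in a completely different cluster
qm remote-migrate 100 100 \
  'host=remote-node.example.com,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<cert-fingerprint>' \
  --target-bridge vmbr0 --target-storage vm-pool --online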

Ceph as your storage plane is a great choice, but remember that best practice, especially for an enterprise, mission-critical system, is to account for the following:
OS: 2 dedicated drives in RAID-1
Ceph OSDs: min. 3 dedicated SSD or NVMe drives per node
Ceph network: dedicated, ISOLATED network, 10Gb minimum

With Ceph's default 3-way replication, only ~33% of your raw capacity ends up usable, so plan accordingly!
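For example, with the spec from the first post (and assuming the default 3/2 replicated pool plus the usual advice to keep Ceph below ~80% full): 4 nodes x 4 x 7.68 TB ≈ 123 TB raw, divided by 3 for replication ≈ 41 TB, times 0.8 for headroom ≈ 33 TB of comfortably usable space across the whole chassis.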
 
I work at a place that runs VMware on HPE and Dell hardware. I'm pushing for Proxmox adoption, and with the recent VMware licensing changes ($$), the company is open to suggestions.
I have similar hardware in our environment, and similar experience and expectations of Proxmox - IBM Flex and HPE C7000/Synergy blade servers, with IBM V7000/FlashSystem 900/9xxx on the FC storage side. We are looking for a reliable alternative to VMware for running hundreds of VMs, obviously with all the nice features (VM live migration, HA, DRS, etc.). We don't use iSCSI (only FC-connected storage) and don't use any distributed storage for VMs (we have ~2 PB of Hadoop, but that is a different story).
We have created a few Proxmox clusters (for "stage/dev class" VMs) so far, all installed on blade servers, with up to 3 hosts per cluster. Most of the hosts boot from SAN with no local drives, and all clusters use a dedicated shared LVM storage setup. Regarding network: 2x10Gb LAN in the servers (bandwidth is carved up in the blade profiles; we split each 25Gb physical interface into e.g. 16Gb FC + 9Gb LAN, which works fine in all cases) and 2x40Gb uplinks from the blade chassis. I don't believe multiple 25Gb interfaces in the servers are a must-have (many topics on this forum describe such setups) unless you push heavy "disk traffic" over TCP/IP/Ethernet (which is obviously not even close to FC/SAN in terms of latency, reliability, flexibility, etc.).
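
In case it helps anyone, the shared LVM part is only a couple of commands once multipathd presents the LUN (the mpatha alias, VG name and storage ID below are just examples):

Code:
# on one node: create a PV/VG on the multipath-backed FC LUN
pvcreate /dev/mapper/mpatha
vgcreate vmdata /dev/mapper/mpatha

# register it cluster-wide as shared LVM storage for VM disks
pvesm add lvm fc-lvm --vgname vmdata --shared 1 --content images,rootdir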

I can also say we are about to run a few production-class VMs on small dedicated clusters next month, but I would be glad to read any success stories from larger installations here :-)

By the way, do you guys agree that the Proxmox installer is simply not adequate for installations on enterprise hardware? I can't believe enterprise is not a target for Proxmox, so why is there no bond/VLAN setup as there is in any current RHEL installer? The same goes for multipath, which is a must-have in our case.
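
What we end up adding by hand after every install is roughly this (ifupdown2 syntax; interface names, VLAN ID, addresses and the multipath settings are only examples for our kind of setup):

Code:
# /etc/network/interfaces - LACP bond with a tagged management VLAN
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto bond0.100
iface bond0.100 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0.100
    bridge-stp off
    bridge-fd 0

Code:
# /etc/multipath.conf - minimal starting point, then:
#   apt install multipath-tools && systemctl enable --now multipathd
defaults {
    user_friendly_names yes
    find_multipaths     yes
}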