Big Proxmox installations

Fathi

Renowned Member
May 13, 2016
Tunis, Tunisia
Hello,
We are in the process of migrating our legacy vendor-locked virtualization platform and would like to know if anyone is using Proxmox VE at a large scale? Number of nodes and VMs? Hardware configs? Also, has anyone made a comparison of how much it would cost to host all nodes currently on Proxmox VE on other virtualisation platforms like VMware?

TIA
 
The costs depend on what deal you get with PVE and the other vendors.
Even a small company might be able to get a 50% discount, so it's best you do the math yourself.

I can't say much about big installations though.
 
Hello,
We are in the process of migrating our legacy vendor-locked virtualization platform and would like to know if anyone is using Proxmox VE at a large scale? Number of nodes and VMs? Hardware configs? Also, has anyone made a comparison of how much it would cost to host all nodes currently on Proxmox VE on other virtualisation platforms like VMware?

TIA

Not sure if we qualify as "big", but we run 315 CentOS 7 VMs across 4 DL380 Gen9 front ends. This is a pretty tough question, as workloads can vary so drastically between environments.
 
Not sure if we qualify as "big", but we run 315 CentOS 7 VMs across 4 DL380 Gen9 front ends. This is a pretty tough question, as workloads can vary so drastically between environments.
Thanks for the information,

Could you tell me the specs of these servers? Especially the ratio of VMs per CPU? Shared storage? Multipath links, if shared storage is used? Interconnect bandwidth? Cost of the whole solution?

TIA.
 
Hello,
We are in the process of migrating our legacy vendor-locked virtualization platform and would like to know if anyone is using Proxmox VE at a large scale?
Hi,
at what point does "large scale" begin?

Is a 10-node cluster with dual CPUs and 128 GB RAM (or more) per node large scale?

The ratio of VMs to nodes/RAM is very workload-specific and must be sized to your environment. You can have VMs which need nearly the full power of the host, and then you can have 30 VMs which run without trouble on a small node (like one with 32 GB RAM).
The bottleneck with virtualisation is normally a) I/O (especially disk), b) RAM, and after that CPU.
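This ordering can be spot-checked directly on a node. A minimal, Linux-only sketch using only the standard library and the `/proc` filesystem (the field positions follow the documented `/proc/stat` and `/proc/meminfo` formats; treat this as an illustration, not official tooling):

```python
# Rough check of the usual virtualization bottlenecks on a Linux host:
# a) disk I/O (approximated here via CPU iowait), b) RAM, c) CPU.

def meminfo_kb():
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # first token after ':' is kB
    return info

def cpu_iowait_fraction():
    """Fraction of total CPU time spent waiting on I/O since boot."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]  # aggregate "cpu" line
    times = [int(x) for x in fields]
    return times[4] / sum(times)  # 5th field is iowait

mem = meminfo_kb()
avail_ratio = mem["MemAvailable"] / mem["MemTotal"]
print(f"RAM available: {avail_ratio:.0%}, CPU iowait: {cpu_iowait_fraction():.1%}")
```

A persistently high iowait with plenty of free RAM points at storage as the limit; a low `MemAvailable` ratio points at RAM, matching the a/b/c ordering above.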

Udo
 
Yes, I deploy DC/OS/Mesos on VMs. The advantage is that you can deploy multiple DC/OS/Mesos clusters on the same Proxmox VE cluster, for different departments, without overlapping, and use the security of an IaaS (Proxmox VE, OpenStack), which is not the case if you deploy it bare-metal. DC/OS/Mesos itself does not provide that security, like managing users on an LDAP/FreeIPA cluster...
 
It would be interesting (for all nerds reading this forum, I think :-D ) to know the hardware size and specs of these huge installs.
 
For hardware: ASUS RS700-E8 servers with 2 Xeon CPUs (12 vCPUs) and ASUS ESC4000 G3S (to add GPUs; DC/OS/Mesos already manages GPUs, soon PVE?) + 5 SSDs + 64 GB RAM + 2 × 10 Gbit/s network.
 
For hardware: ASUS RS700-E8 servers with 2 Xeon CPUs (12 vCPUs) and ASUS ESC4000 G3S (to add GPUs; DC/OS/Mesos already manages GPUs, soon PVE?) + 5 SSDs + 64 GB RAM + 2 × 10 Gbit/s network.

Interesting. Which storage are you using?
 
Soon an external cluster with Ceph or Gluster 4.0; the oVirt web GUI (just the web interface for Gluster, not for virtualization) can automate the deployment...
 
 
Soon an external cluster with Ceph or Gluster 4.0; the oVirt web GUI (just the web interface for Gluster, not for virtualization) can automate the deployment...

No issues with Gluster?
I was a strong advocate of Gluster, but development seems to be out of control and with no quality checks.
There was a corruption bug for years, fixed more or less two years after discovery, and now there seems to be another corruption bug.

That's a shame; Gluster is very interesting.
 
oVirt and Red Hat use Gluster (me too).
There is a new standard, the CSI (Container Storage Interface), that is used by other CaaS projects, like the CNI (Container Network Interface) for the network. We will see more. Our friend spirit (Alexandre Derumier, of the Proxmox VE team) uses Ceph a lot.
 
Running around 4000 VMs here.

I'm doing multiple clusters of 16-20 nodes, 370 GB memory, 2x12 cores, and between 70-100 VMs per node.
Storage is on Ceph/RBD (SSD and NVMe), on dedicated nodes (not Proxmox). No lag, no problems in 3 years (including major version upgrades).

(My personal opinion about GlusterFS: it sucks. I know a lot of users with recovery failures. Don't know about version 4.0.)
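A quick back-of-the-envelope calculation shows what these figures imply per VM (assuming the 2x12 cores are physical cores, ignoring SMT; the numbers are taken straight from the post above):

```python
# Per-VM ratios implied by: 370 GB RAM and 2x12 cores per node,
# running between 70 and 100 VMs per node.

ram_gb_per_node = 370
cores_per_node = 2 * 12          # physical cores, ignoring SMT

for vms in (70, 100):
    ram_per_vm = ram_gb_per_node / vms
    cores_per_vm = cores_per_node / vms
    print(f"{vms} VMs/node -> {ram_per_vm:.1f} GB RAM, "
          f"{cores_per_vm:.2f} cores per VM")
```

So roughly 3.7-5.3 GB of RAM and a quarter to a third of a core per VM, which echoes Udo's point earlier in the thread: these ratios only work if the workloads are small and I/O is handled by fast storage.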
 
Storage is on Ceph/RBD (SSD and NVMe), on dedicated nodes (not Proxmox). No lag, no problems in 3 years (including major version upgrades).

Could you describe your Ceph environments? How many servers, how many switches, and so on. 10 GbE?

(My personal opinion about GlusterFS: it sucks. I know a lot of users with recovery failures. Don't know about version 4.0.)

That's not exactly right. Gluster works, but from what I can see on the dev mailing list there isn't a real roadmap to follow; every release adds tons of features, most of the time with tons of bugs. Months ago I asked them to focus on fixes and stability, adding fewer untested features, with no success.
The corruption bug (always triggered and well known; just rebalance a sharded volume to lose data) took years to be fixed, and during those years tons of other features were added. This is nonsense, IMO.
 