Largest Proxmox Install?

ijcd

Just wondering what the largest number of host machines is that a Proxmox cluster has reached so far? Just trying to get an idea of how it scales up and when the cross-node communication starts to become slower/impractical.
 
There is no limitation, but maybe other users can just post their setup? As far as I know there are already some using up to 10 servers.
 
We are using 6 servers in a production environment without any issues.
I tested it with more than 20 servers (Supermicro) just as a trial before switching to Proxmox (I was using OpenVZ on CentOS), and it worked very well, but that setup had 20 main servers in the cluster with only one VPS on each of them.
 
4 servers here, avg of 6 VPSs on each server
no issues at all
Thanks Proxmox! ;)
 
Fewer Servers

My goal is to have fewer servers :) We currently have 4 racks of servers and it's hard to maintain all of them.

Right now I have 4 low-end (Dell 850) servers with Proxmox and I'm experimenting with the configurations that work best. My goal is to reduce our footprint. I'd like to reduce our servers down to one nice & dense rack. I think I can easily get 8-10 of our current servers into a single, nice 1U server. I would also like to see some sort of auto-migration based on load, and a way to auto-move VMs on failure like VMware has.

Supermicro has these awesome 1U boxes with power supply & motherboard, 4 SATA bays (I put in 4 x 1.5TB), 8 RAM slots (64GB RAM total), dual processors (I put in 2 quad-core 2.6GHz CPUs), and dual 700W power supplies. I added an Adaptec 5805 RAID controller. Total cost was just $3600.

Supermicro Case:
http://www.supermicro.com/products/system/1U/6015/SYS-6015W-NTR.cfm

I haven't found a better 1U case for doing this type of stuff. The case runs about $1250 with the dual power supplies and motherboard.
 
still counting

9 real servers, 31 VPSs in production. I have 3 "images", each for a specific service. Can't wait for the template tool, because right now I just vzdump and restore to a new VEID.
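In case it's useful to anyone doing the same dance, here's roughly what that boils down to as a little script. The --suspend/--dumpdir flags and the /var/lib/vz/dump default are what my boxes use, so double-check them against your Proxmox/vzdump version before trusting it:

Code:
#!/usr/bin/env python
# Rough helper to "clone" a container by dumping it with vzdump and restoring
# the archive under a new VEID. Flag names and the default dump directory are
# assumptions from a Proxmox 1.x / OpenVZ setup - check them on your version.
import glob
import os
import subprocess
import sys

def clone_container(src_veid, new_veid, dump_dir="/var/lib/vz/dump"):
    # 1. Dump the source container (suspend mode keeps downtime short).
    subprocess.check_call(["vzdump", "--suspend", "--dumpdir", dump_dir, str(src_veid)])

    # 2. Grab the newest archive vzdump produced for that VEID.
    archives = sorted(glob.glob(os.path.join(dump_dir, "vzdump*%d*" % src_veid)),
                      key=os.path.getmtime)
    if not archives:
        raise RuntimeError("no vzdump archive found for VEID %d" % src_veid)

    # 3. Restore the archive as a brand-new container.
    subprocess.check_call(["vzrestore", archives[-1], str(new_veid)])

if __name__ == "__main__":
    clone_container(int(sys.argv[1]), int(sys.argv[2]))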

By the time I'm done I should have about 30 real servers.
 
I'm also curious what kind of VPSs you all run and on what hardware.
If you run 9 VPSs, are these fully loaded webservers or just a single Joomla install, for instance?

@Marco114, I use Supermicro servers but run into severe I/O limitations, so make sure you test that before buying a heap of hardware.
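For what it's worth, even a throwaway sequential-write check like the sketch below (the path and sizes are just placeholders) is enough to spot a really weak disk subsystem before you commit to an order; bonnie++ or iozone will give you much better numbers, though:

Code:
#!/usr/bin/env python
# Quick-and-dirty sequential write check: writes a large file, syncs it to
# disk and reports MB/s. Only one access pattern - use bonnie++/iozone for
# anything serious. Path and sizes below are arbitrary placeholders.
import os
import time

def seq_write_mb_per_s(path="/var/tmp/io_test.bin", total_mb=1024, block_kb=1024):
    block = b"\0" * (block_kb * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # make sure the data really hits the disk
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print("sequential write: %.1f MB/s" % seq_write_mb_per_s())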
 
Sorry, I meant to say that those are 9 hardware nodes, each with approximately 3-5 VPSs on it. Our environment is a bit unique in that we have 3 service templates (web, mail, and DNS), and each service VPS serves the exact same content. The hardware is DL145s with two dual-core Opterons. Each hardware node has 6 GB RAM.
 
I have 2 servers with Proxmox. On the first server there is a VPS with a small webserver, and 1 VPS with CRM software on it.
On the second we are running a VPS with mailing list software.

The second VPS is more like a hot spare in case something happens with server 1.

Both servers have a Core 2 Duo CPU and 8GB RAM.

I am now busy converting our main webserver to a virtual server. On that server I also want to run Proxmox. That server has 2 dual-core Opterons, 2 SAS disks in RAID 1, and 4GB RAM.
If we put some extra RAM in it, I can also run the mailing list VPS on it and we can remove the second Proxmox server from the list :).
 
First post!

I have a 7-server setup as a test bed before launch:
2 Masters rsynced (Intel Atom 330 - 2GB DDRII - 750GB SATAII)
- Master 1 - 2 VMs rdns1/ns1 & ISPConfig 3 Master
- Master 2 - 2 VMs rdns2/ns2 (w/ script to change IPs and host name) & copy of ISPConfig 3 Master (stopped state)
-- Master 2 monitors (pings) Master 1; if there is no response for 15 seconds, it takes over the rdns1/ns1 IPs and host names, updates the DNS entries, and starts the ISPConfig 3 Master copy (there's a rough sketch of this watchdog at the end of this post). DNS servers run MyDNSConfig.

2 Slaves each with 20 running VMs (Supermicro - Intel i7 920 - 12GB DDR3 - 2x 1.5TB SATAII Raid-1)
- All ISPConfig 3 Slaves
-- More of these and some in different DCs to come

2 Off Site DNS & HAProxy (Intel Atom 330 - 2GB DDRII - 250GB SATAII)
- Off Site 1 ns3 rsynced from master 1 - LB1 (HAproxy)
- Off Site 2 ns3 rsynced from master 2 - LB2 (HAproxy) rsynced from LB1 and scripted

1 Backup Server - (Supermicro - Intel Atom 330 - 2GB DDRII - 4x 1.5TB SATAII)
- NFS mount /backup on all servers

More Slaves will be added after launch. This is a setup for an HA hosting service. We are really just waiting on v2 of Proxmox before we launch. We would really like a user interface and/or API for Proxmox before launch so we can more easily offer VM hosting as well. I think an API would be best, as it would be easier for the devs to implement and we could keep our clients out of our management systems.

Besides the above, Proxmox has been running like a champ through multiple tests and is the best so far out of all the solutions we have tested.
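For anyone curious how the Master 2 takeover mentioned above is wired up, the sketch below shows the general shape of that kind of watchdog. The IPs, interface, and VEID are placeholders, and the real script additionally changes the host name and updates the MyDNSConfig records, which I've left out here:

Code:
#!/usr/bin/env python
# Sketch of a Master 2 watchdog: ping Master 1, and if it stays silent for
# 15 seconds, take over its service IP and start the standby container.
# Addresses, interface and VEID below are placeholders for illustration.
import subprocess
import time

MASTER1_IP   = "192.0.2.10"   # placeholder address of Master 1
SERVICE_IP   = "192.0.2.20"   # rdns1/ns1 service IP to take over
INTERFACE    = "eth0"
STANDBY_VEID = "101"          # stopped copy of the ISPConfig 3 master
TIMEOUT      = 15             # seconds without a reply before failover

def master1_alive():
    # One ICMP echo with a 2-second deadline; True on a reply.
    return subprocess.call(["ping", "-c", "1", "-W", "2", MASTER1_IP]) == 0

def take_over():
    # Add the service IP as an alias and start the standby container.
    subprocess.check_call(["ip", "addr", "add", SERVICE_IP + "/24", "dev", INTERFACE])
    subprocess.check_call(["vzctl", "start", STANDBY_VEID])
    # The production script also renames the host and updates DNS here.

def main():
    last_seen = time.time()
    while True:
        if master1_alive():
            last_seen = time.time()
        elif time.time() - last_seen > TIMEOUT:
            take_over()
            break
        time.sleep(5)

if __name__ == "__main__":
    main()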
 
@Clusterhq: How are those Atom 330 servers running? Thinking about a backup server with an Atom 330.
 
Wonderful! They are best for simple task/service servers. Just make sure what you are doing won't be too harsh resource-wise and they work great. Both masters don't see much more than 10-15% CPU load, with plenty of RAM to spare. The HAProxy servers, on the other hand, vary with our tests. We have seen the pair handle about 60,000 requests per second at about 90% load with the RAM maxed out.

We have over 300GB of scheduled backups spread throughout the day and the CPU never goes above 5-10%. Supermicro now offers a 1U version of the Atom. You can stuff 4 large SATA disks in there with a good PCI controller (there are only 2 SATA headers available on board), load up Openfiler on an IDE flash module, and it's the backup system of your dreams. We will use one of these for every 10 of the Slave i7 servers, as we will charge our clients with VMs extra for remote backup service and include remote backup with the HA Cluster Hosting services.

As they say, RAID is good, but a remote backup solution is key.

We have tested many hardware configurations, including large enterprise-class systems, and narrowed it down to these configurations for the best cost-to-performance ratio for the intended tasks.

We also took into account manageability and reliability versus cost, as we have a one-year hardware replacement schedule. We do this because of the ~60% increased likelihood of hardware and hard disk failure after the first year in these types of extremely demanding service deployments.
 
@clusterhq! I'm new to this kind of setup, so could you kindly elaborate a bit on the technicalities of your setup and how you achieved it? I mean, where is ISPConfig 3 installed, on the HN or in VEs? If in VEs, what did you use as a reverse proxy to redirect to the sites in shared hosting? I would appreciate it if you could elaborate on how you achieved it, including the technicalities (I am planning to have a similar setup at home for a mini datacenter). Thanks in advance!
 
Successfully installed for testing purposes on:
2x Dell 2950, 2 dual-core CPUs, 8GB + iSCSI
1x HP DL580 G5, 4 quad-core CPUs, 64GB + HP MSA2012fc SAN
For now, the Dells hold 7 guests each. The HP holds 3 guests, with more coming.

No noticeable issues.
 
Well, we certainly aren't the largest, but we are running 4 hosts with approximately 50 VMs total - all live, all production.

Our main machine is a Dell 1950 with 2 x quad-core processors, 16GB RAM, and 4 x 15K RPM SAS drives in a RAID-5 setup.
The other 3 machines are Quad-Core Q9550's with 8GB RAM and 2 x 500GB hard drives.

We're pushing on average about 8Mbit/sec through this setup, 24/7.

The biggest issue I am having is that the hosts are spending a rather large amount of time in I/O wait (mainly because there is no RAID setup in those machines); I'm working on resolving this issue as we speak. ;)
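In case anyone else wants to put a number on it, this little sketch samples /proc/stat twice and reports the share of CPU time spent in iowait over the interval; vmstat or iostat will show you the same thing with less typing:

Code:
#!/usr/bin/env python
# Rough check of how much CPU time a host spends in I/O wait, read twice
# from /proc/stat and reported as a percentage of the sampling interval.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # aggregate "cpu" line, label dropped
    return [int(x) for x in fields]

def iowait_percent(interval=5):
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [a - b for a, b in zip(after, before)]
    return 100.0 * deltas[4] / sum(deltas)  # 5th field is iowait

if __name__ == "__main__":
    print("iowait over the last 5s: %.1f%%" % iowait_percent())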
 
We have 3 production nodes: one with 6 VMs running mail and other communication services,
one node containing nearly 90 VMs, mostly web services and database/CMS.
Our third node is intended as a spare and for backups.

Greets
Julius
 
