Load balancing on Proxmox

Hi Dietmar,
I think it's a good thing in any case where more resources are needed than exist on one physical server.
In that case, you can spread the load between different nodes, using Proxmox High Availability for data sync, or any other tool / sync option.

Community friends, any references?

Best Regards,
Star Network.
 
It's basically impossible to have a single VM with more resources than are available on a single hypervisor server (until quantum computing becomes a viable reality).

Load balancing itself doesn't need anything special configured in Proxmox; just configure your VM infrastructure as appropriate, i.e. create a load-balancer VM (or use a hardware one), then create multiple VMs that actually serve your app/site, and handle all the replication/sync requirements your app needs.
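As a sketch of that setup, one load-balancer VM in front of two app VMs, a minimal haproxy config might look like this (the names and IP addresses are made up for illustration):

```
# /etc/haproxy/haproxy.cfg (fragment) -- illustrative addresses only
frontend www
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    # health-check each backend VM; replace with your real VM IPs
    server app1 192.168.10.11:80 check
    server app2 192.168.10.12:80 check
```

Replication/sync between the app VMs still has to be handled by the application itself, as noted above.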
 

Hi Anthony,
1. Agreed, and that is exactly why we need load balancing :)
2. Yes, it's possible to do that with hardware or with a "manager VM",
but if the manager runs at the physical layer rather than as a VM, I believe it will be better in terms of resource utilization.

Best Regards,
Star Network.
 
There are a lot of discussions on forums where having a 'load balancer' causes a LOT more issues than manual monitoring. Remember that moving a VM actually causes MORE IO for a time, so on a well-packed cluster you could get into a situation where one VM is moved, causing IO; the increased IO then causes another VM to be moved, causing MORE IO; and you end up with a whole load of VMs being moved, or with the IO on a server going far higher than necessary.

Plus, if the cluster is performing 100% perfectly and you have everything running smoothly ALL the time, why would anyone still need you to manage the system? ;)

Don't do yourself out of a job by automating everything :D
 
The key thing is to understand your application and its performance impact, and design your system around the app. I've never had issues with "soft" load balancers like haproxy; in many cases you get better control out of soft load balancers than hardware ones, and better failover capabilities in a VM setup.
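One common way to get that failover in a VM setup is to run two load-balancer VMs and float a virtual IP between them with keepalived. A minimal, illustrative fragment (the interface name, router id, priority, and VIP are all assumptions):

```
# /etc/keepalived/keepalived.conf (fragment) -- illustrative values
vrrp_instance VI_1 {
    state MASTER            # the peer VM uses state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100            # the peer uses a lower priority, e.g. 90
    virtual_ipaddress {
        192.168.10.10       # clients point at this floating VIP
    }
}
```

If the MASTER VM dies, the BACKUP takes over the VIP within a few seconds, which is hard to replicate as cheaply with a single hardware appliance.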

In 90% of the systems I have seen, the bottleneck was never the load balancer; it was always the network or the app-server resources.
 
An LB is needed to:
  1. Select the least-loaded host when HA migrates a VM.
  2. Move an overloaded VM (CPU, IO) to a different host.
  3. Shut down unneeded hosts and start them up again.
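Point 1 can be sketched in a few lines, assuming per-host CPU and memory figures have already been fetched (e.g. from the Proxmox API); the host names and load numbers below are made up:

```python
# Pick the least-loaded host for placing an HA-restarted VM.
# Hosts are (name, cpu_fraction, mem_fraction) tuples; all values hypothetical.

def least_loaded(hosts, exclude=()):
    """Return the name of the host with the lowest combined CPU+memory load."""
    candidates = [h for h in hosts if h[0] not in exclude]
    if not candidates:
        raise RuntimeError("no eligible host")
    # Weight CPU and memory equally; a real balancer would also consider IO.
    return min(candidates, key=lambda h: h[1] + h[2])[0]

hosts = [
    ("host1", 0.90, 0.70),  # busy
    ("host2", 0.20, 0.40),
    ("host3", 0.35, 0.30),
]
print(least_loaded(hosts))                       # host2
print(least_loaded(hosts, exclude=("host2",)))   # host3
```

The `exclude` parameter covers point 2 as well: when moving an overloaded VM, you exclude its current host from the candidates.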
1. An example about HA.
Configuration like this:
Site #1: Host #1, Host #2, Host #3, Host #4
Site #2: Host #5, Host #6, Host #7, Host #8
Site #1 and Site #2 are in different locations.
Storage VMs run on Host #1 and Host #5, with DRBD sync between the sites. All hosts and VMs boot from these storage VMs.
Tasks:
VMs 1x run in Site #1, VMs 2x in Site #2:
  • VM11 and VM21 must run on any host in their own site. If a site goes down, they need to restart on the other site; when the site comes back up, they must migrate back to their own site.
  • VM12 and VM22 can run on any host at any site.
  • VM13 and VM14 can't run on the same host.
  • VM23 can run on any host in its site except Host #1, because it is a heavily loaded VM.
  • VM24 can run on any host at any site, as now.
  • VM15 must run simultaneously on any two hosts at each site... Using: http://wiki.qemu.org/Features/MicroCheckpointing
And so on... I see it as host priorities per VM, which the LB could adjust in some way.
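The "host priority per VM" idea above, combined with the anti-affinity rule for VM13 and VM14, can be sketched like this (all VM names, host names, and priorities here are illustrative, not an existing Proxmox feature):

```python
# Sketch: per-VM host priority lists plus an anti-affinity rule,
# as in the scenario above (VM13 and VM14 must not share a host).

PRIORITY = {
    # vm: hosts in preferred order (a balancer could reorder these by load)
    "VM13": ["host2", "host3", "host4"],
    "VM14": ["host2", "host3", "host4"],
    "VM23": ["host2", "host3", "host4"],   # never host1: it is heavily loaded
}
ANTI_AFFINITY = [{"VM13", "VM14"}]

def place(vm, current):
    """Pick the highest-priority host that violates no anti-affinity rule.
    current maps already-placed VMs to their hosts."""
    for host in PRIORITY[vm]:
        conflict = any(
            vm in group and current.get(other) == host
            for group in ANTI_AFFINITY
            for other in group if other != vm
        )
        if not conflict:
            return host
    raise RuntimeError(f"no host satisfies constraints for {vm}")

placement = {}
for vm in ["VM13", "VM14"]:
    placement[vm] = place(vm, placement)
print(placement)  # VM13 -> host2, VM14 -> host3
```

A real balancer would also feed live load figures back into the priority lists, which is the "LB can change priority" part of the idea.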

2. When a VM is overloaded, all the other VMs on the same host stop working properly, and that's not good. Migrating the overloaded VM to another host solves the problem. But this is again a matter of priorities for hosts and VMs. Look at http://www.linux-kvm.org/wiki/images/f/f7/2012-forum-postcopy.pdf

3. The last item is not about being green at all: it saves money, and it saves battery power if the mains power fails.

Currently we have Citrix and bare KVM, and we are moving to another solution. Your competitors already have this feature.

P.S. In our company, sysadmins get paid when they do nothing and everything works. If they are working, then something is wrong.
 
In my opinion, if you need more resources than one physical server can provide for an app or service, then virtualisation is not a good option, as it means some overhead (and not just performance-wise). In that case you have to design your infrastructure with an HA load-balancer tier and an HA real-server (backend) tier. That would satisfy your needs for both performance and HA. Virtualisation is for sharing physical resources, not for load balancing (although you can run soft load balancers virtualised if that fits your needs).
 
1. You forgot the IMHO.
2. I don't need a load balancer in any solution to balance load for an app.
3. As I wrote before, the load balancer is needed to find a host with enough resources to migrate to. Read about RHEV (oVirt) HA or VMware HA, or about migrating an overloaded VM to a less loaded host.
 
Precisely. You can virtualise load balancers just fine (that's why LBaaS exists), but running an application server that needs load balancing in a VM is contradictory (due to the mentioned overhead of virtualisation).
 
There is no application. We are an ISP. For example, we need 2 DNS servers, 2 SIP servers, 2 mail servers, and so on. Each pair must run on different hosts. There is no need to balance any application; we just need to place VMs on a less loaded server after a failure. But sometimes there is trouble: during a yum autoupdate the CPU goes to 100% and the other VMs on the server respond slowly.
I repeat: load-balance ONLY VMs and hosts, not apps.
 
In this context I really don't understand what you mean by load balancing. What you've described is more like resource balancing: you migrate your VMs to balance the load on your hosts. But 'load balancing' means distributing requests between more than one real server. Anyway, these are just terms, and we use them differently; without a proper description we can get confused. :)
 
VMs 1x in Site #1, VMs 2x in Site #2
  • VM11 and VM21 must run on any host in their own site. If a site goes down, they need to restart on the other site; when the site comes back up, they must migrate back to their own site.
  • VM12 and VM22 can run on any host at any site.
  • VM13 and VM14 can't run on the same host.
All of that can be done with failoverdomains currently. The only slight issue is that there is no GUI to manage failoverdomains.
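For reference, a failoverdomain is configured by hand in cluster.conf (Proxmox VE 3.x with rgmanager); a rough sketch, with made-up node names and VM id, might look like this:

```xml
<!-- fragment of /etc/pve/cluster.conf; node names and vmid are illustrative -->
<rm>
  <failoverdomains>
    <!-- restricted: the VM may only run on the listed nodes;
         ordered: lower priority numbers are preferred -->
    <failoverdomain name="site1" restricted="1" ordered="1">
      <failoverdomainnode name="host1" priority="1"/>
      <failoverdomainnode name="host2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <pvevm autostart="1" vmid="101" domain="site1"/>
</rm>
```

Check the Proxmox VE 3.x HA documentation for the exact attribute names before using anything like this in production.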

  • VM23 can run on any host in its site except Host #1, because it is a heavily loaded VM.
This would be nice to set up for specific VMs, but I'd be concerned about a migration storm if it were set up for ALL VMs in a cluster.
This could also be implemented externally to Proxmox using monitoring tools that can run commands based on certain conditions, like Zabbix.

Proxmox should probably have some sort of resource-balancing feature some day, but I feel there are more important things that time should be invested in right now.
For example, stop running KVM as root.


This sounds like a great idea on the surface, but the more I think about it, the more I wonder if it's a solution looking for a problem.
If you need this sort of uptime, you should design your application so it has no single point of failure.
Example:
Let's say I have a central DB server that must be running for the application to function.
So I set up this VM to run simultaneously on two hosts, which only protects me from hardware failure (it prevents only a few minutes of downtime if the hardware ever fails).
But that never happens; instead, the DB server software crashes, and my application is now down. (The simultaneous running did nothing to prevent this problem.)

If one had taken the time to ensure there was no SPOF in the design, then it would not matter whether a software or hardware issue took down one of the DB cluster VMs; the app would still be working.

My opinion is simple:
If you need that sort of uptime, design it into your application.
 
Thanks for the reply, but our application works fine with HA, an HA DB, and so on. Some specific apps, like billing, ERP, and NMS, don't have this functionality and won't have it in the foreseeable future. We have now solved every problem except hardware failure. I'm not just posting links to future technologies; this is already here, not the future.
 
How do they help me with proprietary protocols and closed source code? They don't. HA inside the OS is used everywhere, but software without built-in HA can only be made fault tolerant with VM HA.
 
I understand that using the checkpointing feature could help reduce downtime for applications that don't have built-in HA features, such as the proprietary billing software you used as an example.

You only eliminate downtime from hardware failures with such a feature. Currently Proxmox has the HA VM feature: if the server running the VM fails, that VM is started on another node. The disruption to the service provided by that VM lasts a few minutes (however long it takes that VM to boot up).

So the checkpointing feature might prevent ten minutes of downtime a year, assuming 2-3 hardware failures a year. That is an insignificant amount of downtime for an application that is inherently not HA.
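The back-of-the-envelope numbers above are easy to check (the failure count and restart time are the assumed figures from this discussion, not measurements):

```python
# Rough yearly downtime from HA restarts, using the assumed figures above.
failures_per_year = 3
restart_minutes = 3.5          # time for the VM to boot on another node

downtime = failures_per_year * restart_minutes        # minutes per year
minutes_per_year = 365 * 24 * 60
availability = 1 - downtime / minutes_per_year

print(f"{downtime:.1f} min/year lost")       # ~10.5 min
print(f"{availability:.5%} availability")    # ~99.998%
```

Even with pessimistic assumptions, plain HA restart already gives roughly four nines for such a VM.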

The benefits to implementing this feature do not seem to outweigh the cost of the investment.

This feature does not even appear to be production-ready anyway.
 
