thoughts about HA / clustering...

r4pt0x

Member
Jan 5, 2012
I'm quite stuck on working out a solution for a network/server setup I've been supporting for a few months...

We have 3 branches connected via VPN (branch A: DSL 6000, branches B&C: DSL 3000). The connection has quite high latency, so I'd like to minimize client application traffic over this link.

Before I took over support for this network, they consolidated from several servers (one small server each in branches B&C, three in branch A: Win 2000, Win 2k3 and one old SUSE machine) down to only three in branch A: one heavily overspecced server running Win 2k8 32-bit (but stuffed with 16 GB RAM...), one old Win 2k machine for a legacy server application, and the SUSE machine.
The 2k server is now gone, the 2k8 server has been virtualized along with the SUSE machine, and I set up a Debian VM for several network services and network shares (Samba). A 2k8 64-bit VM is also running and will replace the 32-bit VM within the next ~2 weeks.

So the current setup is one server at branch A -> a SPOF and painfully slow services for branches B&C.
If the DSL connection of branch A goes down, branches B&C are sent back to the stone age. That happened once; it wasn't much fun...


Our major problem is the database-backed software used for all processes within the 3 branches (warehousing, accounting, time recording etc.) - www.werbas.de
It's a single-server configuration based on MSSQL 2008. Every branch has its own database plus one main control database. Setting up 3 separate servers would be possible, but the interconnection would no longer work (the control database is needed by all branch databases), and it would triple the costs since it would be 3 separate setups. More cost + less functionality = no go.
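I don't know the werbas schema internally, but the dependency looks roughly like this to me: every branch database joins against the central control database, which only works as long as both sit on the same MSSQL instance. Purely as an illustration (server, database and table names below are made up):

Code:
REM hypothetical example - a branch query that also touches the control database;
REM three-part names like this only resolve while both DBs are on the same instance
sqlcmd -S SRV-BRANCH-A -E -Q "SELECT o.OrderNo, c.CustomerName FROM WerbasBranchB.dbo.Orders AS o JOIN WerbasControl.dbo.Customers AS c ON c.CustomerId = o.CustomerId;"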

The speed of this service for the clients at branches B&C is acceptable, but far from optimal.

The other service that "needs" a single-server setup has - to put it mildly - an utterly horrible design based on painfully old code. It started as DOS-based software adapted to Win95 and was then only "fixed" to work with newer Windows versions and as a network application via Windows shares. It relies on text-based databases and huge catalogues of uncompressed bitmap images, and even for clients at branch A it isn't exactly fast.
At branches B&C this application runs painfully slowly, even running into timeouts that cause the client software to crash/freeze (as I said - it's horribly designed...).
Unfortunately it's still the de-facto standard and has to be used...



MSSQL supports clustering/replication - if we ran 3 separate servers accessing the same clustered database, there shouldn't be a risk of data corruption, as the MSSQL server handles the I/O and all servers have access to all the data -> the same interoperability as now.
But it would also triple the licence costs for Windows, MSSQL and the werbas application, and if one server goes down, the clients at that branch would need to be reconfigured to access one of the other servers (a sketch of what that means is below)...
It also won't solve the problem with the second application.
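Just to illustrate that reconfiguration pain (assuming the werbas clients use a system ODBC DSN, which I'd still have to verify - the DSN and server names below are made up): if the local server died, every client at that branch would need something along these lines to be repointed at another branch's server:

Code:
:: hypothetical - repoint the system DSN "werbas" from the dead local server
:: to the server at branch A (DSN name and server name are made up)
reg add "HKLM\SOFTWARE\ODBC\ODBC.INI\werbas" /v Server /t REG_SZ /d SRV-BRANCH-A /f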

As I haven't used HA/cluster systems yet, I set up some Proxmox configurations in a small testing environment and tried several setups with shared/cloud storage etc. (the basic cluster setup is sketched after this paragraph).
As I understand it, there is no way to run the same VM on 3 servers "simultaneously" and present it as one machine to the clients.
If I run the VM on node A and a client from branch B (where node B would be located) accesses it, all traffic still goes to node A, ending up with the same issues we already have due to the slow VPN connection.
The same problem applies to shared storage across the nodes - the traffic would still go to one VM, leaving the data sitting within the client's own network unused and bringing no advantage in connection speed.
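For reference, building the test cluster itself was the easy part - from memory it was roughly this on the 2.0 beta, so the exact commands may differ slightly (the node IP is made up):

Code:
# on the first node: create the cluster
pvecm create testcluster
# on each additional node: join, pointing at the first node's IP
pvecm add 192.168.10.11
# check membership/quorum
pvecm status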

I have already thought through various configurations, but none of them solves the core problem: decentralizing the traffic and handling client requests directly on the node within the client's own network.
The VPN would then only be used for synchronization between the nodes.
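The closest I can picture for the second application is keeping a local, mostly read-only copy of the big static bitmap catalogues on a node in each branch and only pushing changes over the VPN, e.g. something like this (paths and host name are made up):

Code:
# hypothetical - nightly one-way sync of the (mostly static) catalogue data
# from the main node at branch A to a local node at branch B, over the VPN
rsync -az --delete /srv/catalogue/ node-b.branch-b.lan:/srv/catalogue/

Whether the application would tolerate something like that is another question entirely...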

As I'm quite stuck on this (or can't see the forest for the trees anymore), I thought I'd just write it down here. Maybe someone has an idea or hint, maybe someone has had a similar problem (or still has one) and some other thoughts on it...

I have already written down lots and lots of notes while configuring the current setup, testing and trying to find a solution to the problem - if this all leads to a functional setup, I'd be happy to share that information.
 
Maybe the solution is not to try to run a complicated cluster environment over pretty bad interconnects (the internet).

Instead, why not colocate the main VM server in a data center and connect the branches via their separate DSL uplinks?

Every branch gets the maximum speed possible over its DSL line, and if one DSL link dies, the other branches can continue working.
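Just as an illustration of the idea (host name and certificate files are made up): each branch brings up its own tunnel directly to the colocated server instead of everything funnelling through branch A, e.g. with OpenVPN:

Code:
# hypothetical - run on each branch router/gateway, one tunnel per branch
openvpn --client --dev tun --proto udp \
        --remote colo-server.example.net 1194 \
        --ca ca.crt --cert branch-b.crt --key branch-b.key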

If you wanted to add clustering later, you could add another server in the datacenter and/or shared storage on a third node for quick migrations/failover.
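With shared storage in place, moving or recovering a VM between the datacenter nodes is then basically a one-liner, roughly (VM ID and node name are made up):

Code:
# hypothetical - live-migrate VM 101 to the second datacenter node
qm migrate 101 node2 --online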


Christian


r4pt0x
 
I also thought about running the VM host in a datacenter - I already have several VPSes at Hosteurope and used one for a VPN test with another test machine in branch B (which is the slowest) to measure VPN speed in several directions. Using the VPS in the datacenter as VPN server, I managed to reduce latency from branch B to branch A by ~30% compared to the current VPN tunnel (which is managed by an external company until the contract period ends at the beginning of 2013). But I'm not that confident these 30% will be enough to justify the cost of a dedicated server in the datacenter and to get the "stone-age software" running at a workable speed in all 3 branches.
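In case anyone wants to reproduce such a comparison, it basically boils down to round-trip times over both paths, e.g. (host names are made up):

Code:
# hypothetical - compare RTT over the existing tunnel vs. the tunnel via the VPS
ping -c 20 srv-branch-a.old-vpn.lan     # current tunnel, branch B -> branch A
ping -c 20 srv-branch-a.via-dc.lan      # test tunnel through the Hosteurope VPS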

But I haven't fully dismissed a solution with a main server in a datacenter and maybe a node in each branch...
 
Terminal access would be my first choice, but due to bad experiences with terminal server applications ~15 years ago, my boss is completely closed off to anything that even sounds similar... I'm still working on this, as it would also end the endless fixing of user-related "nothing problems" ("I did nothing, and then nothing worked..."), but I fear it's still a long, hard way to go :(
 
