Can a VM bind a specific public IP in a cluster?

hugojf

New Member
Jul 24, 2019
Hi everyone, I want to lay down some background so my question makes a little more sense and it's clear why I'm asking it here.

First, I know this is a very basic question; since I don't really know my Linux very well, I wanted to find out whether this setup is possible before diving any further.

BACKGROUND: I currently have 3 dedicated servers (1 public IP each) with very different hardware: A has high single-thread performance, B has a high core count, C is a mid-tier cheap server. I use them to host a few community game servers (most of them CS:GO). I try to put high-slot-count servers on A, low-slot-count ones on C, and other games on B. Sometimes I have to migrate servers between C and A to load-balance them manually, which means changing their IPs, and that is not good (95% of the players connect directly via IP).

WHAT I WANT: Is there a way to cluster all 3 servers, deploy each game server into a VM/container, distribute the VMs evenly across the three IPs, and let them move around the cluster while keeping the same public IP and port?

Basically: 30 game servers, of which 10 use the public IP of server A, 10 the IP of server B, and 10 the IP of server C. This way I can balance bandwidth usage (CPU usage is way more important than bandwidth, but I can't put them all behind the same IP because I would go over the monthly traffic limit).

Is this possible with Proxmox? Also, if you guys could point me toward what I have to do to achieve this, I would be very happy.

Thanks in advance!
 
This is an interesting use case. I do not have a working solution for you, but I can throw in some thoughts and ideas for you to research further. I would also recommend building a test setup to see whether this can actually work.

  • Performance / Latency: For a PVE cluster to work reliably the physical nodes need to have a low latency between them (rule of thumb: <2ms).
  • The VMs / Containers need a separate internal network that can be reached from every PVE node
  • NAT the ports on which the servers are running to the internal network
There are some ways to get an internal network across the cluster. The simplest would most likely be to have it as a separate VLAN: either create a new bridge with the dedicated VLAN configured, or set the standard bridge to VLAN aware and configure the VLAN for each VM/CT. The big question is whether you are allowed to / can use VLANs between all your physical hosts.
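
To make the second variant a bit more concrete, here is a rough sketch of what a VLAN-aware standard bridge could look like in /etc/network/interfaces on each node. All names, addresses and the VLAN ID are placeholders, and the option spelling differs slightly between PVE versions, so treat this as an illustration rather than a recipe:

    # /etc/network/interfaces (sketch) - public bridge made VLAN aware
    auto vmbr0
    iface vmbr0 inet static
        address 203.0.113.10/24      # this node's public IP (placeholder)
        gateway 203.0.113.1
        bridge-ports eno1            # placeholder name of the uplink NIC
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes        # allow per-guest VLAN tags on this bridge
        bridge-vids 2-4094

Each VM/CT network device would then get the tag of the internal VLAN (for example 100) in its settings, so all guests end up on one tagged segment - provided the datacenter actually carries that VLAN between the three hosts.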

Another option might be openvswitch but I personally do not have experience with it.

What you also have to consider is added in-game latency: when a game server is officially reachable over server A but actually running on server B, the traffic needs to take more hops through the internal network.

If traffic usage is an issue you would need free traffic between the physical hosts for this to be useful.
 
Thanks for the reply!

Performance / Latency: For a PVE cluster to work reliably the physical nodes need to have a low latency between them (rule of thumb: <2ms).
Since they are at the same data-center, they can ping each other with <0.300ms.

What you also have to consider is added in-game latency: when a game server is officially reachable over server A but actually running on server B, the traffic needs to take more hops through the internal network.
Would it be higher than the latency between the physical machines? I would be okay with <2ms of added latency; my problem is complexity. While I do understand networking in general, when it comes to the practical stuff, nothing makes sense haha.

If traffic usage is an issue you would need free traffic between the physical hosts for this to be useful.
My problem is that if all 30 servers were reachable over the same IP, the traffic would go beyond the 20TB per machine.

After posting this question, I researched a bit more and I think that would be way too complicated to handle, or at the very least require a lot of manual setup, which is also bad. Maybe if I had more public IPs I could route them to the correct host/VM somehow (this sounds like a "normal" use case)? But that would mean VMs/CTs having more game servers bundled together (which is not bad, but I lose some granularity).

The VMs / Containers need a separate internal network that can be reached from every PVE node
That means that for each VM/CT I need one individual network, right? So it wouldn't be a single one covering all VMs (sorry if that sounds dumb, I'm just trying to make sense of everything).

If I wanted this to work, I would need to route the traffic reaching each host to the right VM (and that would go through the internal sub-network, right?). And that would somehow be tied to a port? That would essentially be NAT but with multiple IPs, or multiple NATs, right? And the forwarding tables would have to be configured manually (or somehow automated).

Curious to know if someone has done something similar and how it worked out, but this is starting to seem out of my reach.

Thanks for the reply again!
 
Thanks for the reply!

Since they are at the same data-center, they can ping each other with <0.300ms.

Would it be higher than the latency between the physical machines? I would be okay with <2ms of added latency; my problem is complexity. While I do understand networking in general, when it comes to the practical stuff, nothing makes sense haha.

That is something that would need to be tested. I have never implemented anything like this and never did anything similar in a latency-sensitive environment.

My problem is that if all 30 servers were reachable over the same IP, the traffic would go beyond the 20TB per machine.

After posting this question, I researched a bit more and I think that would be way too complicated to handle, or at the very least require a lot of manual setup, which is also bad. Maybe if I had more public IPs I could route them to the correct host/VM somehow (this sounds like a "normal" use case)? But that would mean VMs/CTs having more game servers bundled together (which is not bad, but I lose some granularity).
Having more VMs with public IPs on which you bundle game servers would be a more regular setup, but obviously you would lose granularity. Read the relevant section in the manual to see what is possible and what needs to be considered when doing this in a datacenter.

That means that for each VM/CT I need one individual network, right? So it wouldn't be a single one covering all VMs (sorry if that sounds dumb, I'm just trying to make sense of everything).
No, what I had in mind was to have one "internal" network just within the cluster.

I hope this explanation helps to get the idea across. Instead of VLANs, think of having a second network card in each of the 3 physical servers. The datacenter configures its network so that only your servers can see each other on these NICs, like connecting them to their own switch. Now you can run a different network there with its own IP subnet (say 192.168.0.0/24). The VMs only have access to this network, while the PVE nodes have access to both the public internet and the internal network. Now it should be possible to have firewall rules on the PVE nodes that forward the incoming traffic to the internal network based on the port.
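
As a sketch of that mental model (everything here is a placeholder: eno2 as the name of the second NIC and the 192.168.0.0/24 subnet from above), the node side could look roughly like this:

    # /etc/network/interfaces (sketch) - internal-only bridge on each node
    auto vmbr1
    iface vmbr1 inet static
        address 192.168.0.1/24   # .1/.2/.3 on nodes A/B/C, internal addresses only
        bridge-ports eno2        # the dedicated internal NIC (or a VLAN/tunnel interface)
        bridge-stp off
        bridge-fd 0

The guests would then only get a NIC on vmbr1 (say 192.168.0.101, 192.168.0.102, ...) and keep that address no matter which node they are migrated to.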

How to accomplish that internal network is the interesting part and also depends on what the datacenter can offer. It could be dedicated NICs (and the datacenter handles the isolation), a VLAN on the existing NICs (needs to be supported by the datacenter) or some kind of VPN tunnel between the three hosts, all with their own caveats regarding load, bandwidth and latency.
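
Just to illustrate the VPN variant (purely an assumption on my side, using WireGuard as an example; keys, addresses and the port are placeholders), host A could carry a config like this, with B and C mirrored accordingly:

    # /etc/wireguard/wg0.conf on host A (sketch)
    [Interface]
    Address = 10.10.10.1/24          # host A's address inside the tunnel network
    ListenPort = 51820
    PrivateKey = <host A private key>

    [Peer]                           # host B
    PublicKey = <host B public key>
    Endpoint = 198.51.100.20:51820   # host B's public IP (placeholder)
    AllowedIPs = 10.10.10.2/32

    [Peer]                           # host C
    PublicKey = <host C public key>
    Endpoint = 198.51.100.30:51820   # host C's public IP (placeholder)
    AllowedIPs = 10.10.10.3/32

Note that this only connects the three hosts at layer 3; to put the guests themselves onto one shared internal subnet you would still need routing (or an additional layer-2 tunnel) on top, which is exactly the kind of caveat regarding load, bandwidth and latency mentioned above.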

Maybe openvswitch can help as well but I have no experience with it.

If I wanted this to work, I would need to route the traffic reaching each host to the right VM (and that would go through the internal sub-network, right?). And that would somehow be tied to a port? That would essentially be NAT but with multiple IPs, or multiple NATs, right? And the forwarding tables would have to be configured manually (or somehow automated).

Curious to know if someone has done something similar and how it worked out, but this is starting to seem out of my reach.

Thanks for the reply again!

I assume that right now each game server is reached on its own port, since you have multiple running on the same machine with one IP? So yes, the server that holds the IP under which a game server should be reachable needs a manually configured NAT rule for that port, forwarding the traffic to the VM/CT.
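
To give an idea of what such rules could look like (a sketch only; IPs, ports and interface names are made up, and CS:GO-style servers mainly use UDP), on the node that owns the public IP 203.0.113.10 it could be something like:

    # enable routing between the public and the internal network
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # game server 1: public UDP port 27015 -> guest at 192.168.0.101
    iptables -t nat -A PREROUTING -i vmbr0 -p udp -d 203.0.113.10 --dport 27015 \
             -j DNAT --to-destination 192.168.0.101:27015

    # game server 2: public UDP port 27016 -> guest at 192.168.0.102
    iptables -t nat -A PREROUTING -i vmbr0 -p udp -d 203.0.113.10 --dport 27016 \
             -j DNAT --to-destination 192.168.0.102:27016

    # let traffic from the internal network leave via this node's public IP
    iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o vmbr0 -j MASQUERADE

The rules stay on the node that owns the public IP, so the guest can be migrated anywhere in the cluster as long as it keeps its internal address; for the replies to leave through the same public IP, the guest's default gateway would have to point at the internal address of that node (or the node would also have to SNAT the forwarded traffic), and a restrictive firewall would need matching FORWARD rules. To make the rules survive a reboot you could, for example, hang them on the bridge definition with post-up/post-down lines in /etc/network/interfaces.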

All the best figuring out what will work best ;)
 
