Network clustering with public IP pool

fede843

Mar 13, 2023
Hello, I have been researching for a while and I am still not completely sure how to achieve a proper configuration for our environment.

To start with, we have 3 physical servers in a remote datacenter. We are given a public IP pool, let's say 65.65.65.0/29. Each server has 2 NICs: eth0 is connected to the main switch for public IP access, and eth1 to a different network, isolated from everything else at the moment.

Requirements/Plan:

- First, put all 3 servers in cluster mode. I will use the eth1 network to get them talking.
- Most of the VMs should live on the local LAN, but with internet access.
- A few VMs will use a public IP directly, since they will work as load balancers, ingress points, etc.
- I will create a container on each node to run Tailscale advertising the local network, so I can access the Proxmox Web UI and SSH through it, restricting remote access to only that (rough sketch after this list).
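This is roughly what I have in mind for the Tailscale container on each node (just a sketch, not a tested config; the sysctl file name and advertised route are example values):

Code:
# inside the LXC container on each node
# enable forwarding so the container can act as a Tailscale subnet router
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-tailscale.conf
sysctl -p /etc/sysctl.d/99-tailscale.conf
# advertise the internal LAN to the tailnet
tailscale up --advertise-routes=172.16.0.0/24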

My original approach was this (changing the IPs accordingly for each node):

Code:
/etc/network/interfaces

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 65.65.65.1/29
    gateway 65.65.65.6
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 172.16.0.1/24
    bridge-ports eth1
    bridge-stp off
    bridge-fd 0

    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '172.16.0.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '172.16.0.0/24' -o vmbr0 -j MASQUERADE

This works, but with some limitations:

- 1) If I mirror this on each server, each one will use up one public IP just to exist. I don't want that.
- 2) For the internal LAN I don't mind using extra IPs; in fact I will need those for accessing the PVE Web UI, SSH, etc. But when it comes to setting up a gateway for the VMs, I need a static IP, which can be any of the PVE ones; if that node goes down, I might lose public connectivity for all or some VMs, depending on how I configure them. I don't want that either.
- 3) The Corosync protocol will be using the same network as the internal LAN.

For the LAN issue 2), I can ask the colocation guys to add a rule and probably set up a default gateway IP that forwards traffic outside; that way I could just remove all the local postrouting rules I am doing now and delegate the "high availability" duty upstream. All VMs would use that single static IP as their default GW.

For 3), well, I will have to live with it for the moment. I am not sure whether putting both the public and local networks on the same NIC is viable here either. The quick solution would be adding another NIC, but to start with I will keep it as is and see how it behaves. It will depend on the latency under real usage.

Issue 1) is the one I cannot figure out. I do not want to assign a public IP to the PVE host itself: firstly it represents a threat, and secondly it is a waste of the small pool we got. But I somehow need to create a bridge or something else so that some key production VMs can connect and use those public IPs.

I hope all this makes sense to you, reader. Open to suggestions and looking for advice.
 
Answering myself.

For 1) it is fairly simple: just remove the IP and gateway from vmbr0. I had tried that before but could not get it working, because I am using the node as a "router" for the LAN; as soon as I removed the IP from vmbr0 I lost internet access for the entire local network. That, plus some weird issues when changing things on the fly: Proxmox was not happy about making several network changes on the fly with VMs running on the cluster. It did not complain, but the VMs' network stopped working and I had to restart the system to bring everything back healthy.
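For reference, once the LAN routing moves off the host, vmbr0 ends up as a plain bridge without an address, something like this (a sketch, not my exact final config):

Code:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0

# The few public-facing VMs then get their 65.65.65.x/29 address and the
# .6 gateway configured inside the guest, while the host only keeps its
# 172.16.0.x address on vmbr1.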

But sadly, for the supposedly minor issue 2), the colo came back saying it is not possible to do that, at least as things stand on their side. The options are to buy some extra HW (basically a router) or to implement some VMs with floating IPs. Since I already need those VMs to act as entry points from the public internet, I can do the same on the internal side with a local LAN floating IP. That floating IP will become my LAN gateway. Not ideal, but I think I can pull it off.

Still, I am open to suggestions about the network planning.
 
How about:
Create a Linux bridge on each node with a public IP (firewalled to control host access).
Create a second bridge for the non-public subnet, with the option to VLAN, and advertise additional public IPs using something like FRR (rough sketch below).
Control ingress by using a reverse proxy on a VM that forwards to the others on the private subnet.
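To illustrate the FRR idea, something along these lines with bgpd could announce the pool from a node or a router VM (only a sketch; it assumes the upstream is willing to run a BGP session with you, and the ASNs and peer address here are made up):

Code:
# /etc/frr/frr.conf (fragment)
router bgp 65010
 bgp router-id 172.16.0.1
 neighbor 65.65.65.6 remote-as 65000
 address-family ipv4 unicast
  network 65.65.65.0/29
 exit-address-family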
 
Hi, thanks for replying.
How would the LAN gateway be defined? Wouldn't it be node dependent?
 
Sorry if I misunderstood your question, but you could use VRRP to share it between your nodes.
OK, so you are suggesting using VRRP between the PVE nodes, right? I haven't used VRRP yet. That would mean they each need a public IP, right? In a way it is a similar approach to my VM routers, but using the nodes.

Could I just use VRRP on those VMs and get the best of both worlds? Keeping PVE behind NAT and having a floating IP (in this case via the VRRP protocol) for the LAN gateway.
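For the LAN side, I am picturing something like this keepalived config on each router VM (a rough sketch only; the interface, VRID, priority and floating IP are example values):

Code:
# /etc/keepalived/keepalived.conf on router VM 1
# (router VM 2 would use state BACKUP and a lower priority)
vrrp_instance LAN_GW {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        172.16.0.254/24    # floating LAN gateway IP the other VMs point at
    }
}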
 
