Hello, I have been researching for a while and I am still not completely sure how to achieve a proper configuration for our environment.
To start with, we have 3 physical servers in a remote datacenter. We are given a public IP pool, let's say 65.65.65.0/29. The servers have 2 NICs each: eth0 is connected to the main switch for public IP access, and eth1 to a separate network that is currently isolated from everything else.
Requirements/Plan:
- First, put all 3 servers into a cluster. I will use the eth1 network for the cluster traffic.
- Most VMs should live on the local LAN, but still have internet access.
- A few VMs will use a public IP directly, since they will act as load balancers, ingress points, etc.
- Create a container on each node running Tailscale and advertising the local network, so the Proxmox web UI and SSH are reachable only through it, restricting remote access to that path (a rough sketch follows this list).
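For that Tailscale container, this is roughly what I have in mind. Just a sketch, assuming the LAN stays 172.16.0.0/24 as in the config further down; the advertised route still has to be approved in the Tailscale admin console.
Code:
# inside the Tailscale container: enable forwarding so it can route for the LAN
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-tailscale.conf
sysctl -p /etc/sysctl.d/99-tailscale.conf

# advertise the local network so the PVE web UI / SSH are reachable over the tailnet
tailscale up --advertise-routes=172.16.0.0/24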
My original approach was this (changing the IPs accordingly on each node):
Code:
/etc/network/interfaces

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 65.65.65.1/29
        gateway 65.65.65.6
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 172.16.0.1/24
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '172.16.0.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '172.16.0.0/24' -o vmbr0 -j MASQUERADE
This works, but with some limitations:
- 1) If I mirror this on each server, every node consumes one public IP from the start. I don't want that.
- 2) For the internal LAN, I don't mind the hosts using LAN addresses; in fact I need them for reaching the PVE web UI, SSH, etc. But when it comes to setting the VMs' default gateway, I need a static IP, which can be any of the PVE hosts' LAN addresses. If that host goes down, I could lose public connectivity for all or some of the VMs, depending on how they are configured. I don't want that either.
- 3) Corosync will be using the same network as the internal LAN.
For LAN issue 2), I can ask the colocation guys to add a rule and set up a default gateway IP that forwards traffic outside; that way I could drop all the local postrouting rules I am doing now and delegate the "high availability" duty upstream. All VMs would then use that single static IP as their default gateway.
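If they can do that, each node's LAN bridge could lose the NAT rules entirely and the VMs would simply point at that upstream gateway. A rough sketch, using 172.16.0.254 as a made-up address for the gateway they would provide:
Code:
# node side: vmbr1 without the ip_forward / MASQUERADE post-up rules
auto vmbr1
iface vmbr1 inet static
        address 172.16.0.1/24
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0

# inside a VM (Debian-style /etc/network/interfaces), 172.16.0.254 being
# the hypothetical gateway the colocation provider would set up
auto eth0
iface eth0 inet static
        address 172.16.0.50/24
        gateway 172.16.0.254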
For 3), I will have to live with it for the moment. I am also not sure whether carrying both the public and local networks on the same NIC is viable here. The quick solution would be adding another NIC, but to start with I will keep this setup and see how it behaves; it will depend on the latency under real usage.
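For reference, this is how I would create the cluster over eth1 for now. Just a sketch: "mycluster" is a placeholder name, and the 172.16.0.1-3 addresses are the LAN IPs from the config above adjusted per node. A dedicated Corosync link could be added later, either with --link1 at creation time or by editing /etc/pve/corosync.conf once an extra NIC is in place.
Code:
# on the first node: create the cluster, binding Corosync link0 to its eth1/LAN address
pvecm create mycluster --link0 172.16.0.1

# on the other two nodes: join via the first node, each with its own link0 address
pvecm add 172.16.0.1 --link0 172.16.0.2
pvecm add 172.16.0.1 --link0 172.16.0.3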
Issue 1) is the one I cannot figure out. I do not want to assign a public IP to the PVE host itself: first, it is a security exposure, and second, it wastes addresses from the small pool we have. But I still need to create a bridge or something else so that a few key production VMs can connect to and use those public IPs.
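What I mean by "a bridge or something else" would be roughly this, if it is even a valid approach: leave vmbr0 connected to eth0 but without any address on the host, and let only the public-facing VMs attach to it and own their public IP inside the guest. A sketch:
Code:
# host side: bridge the public NIC, but the host itself gets no public address
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0

# inside a public-facing VM (e.g. the load balancer), the guest configures
# its own public address and gateway, e.g. 65.65.65.2/29 via 65.65.65.6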
I hope all this makes sense to you, reader. Open to suggestions and looking for advice.