Datacenter Clustering with Virtual Router on Master Node

mightyschwartz
Apr 20, 2016
I recently installed a Proxmox-powered server in a datacenter for my employer. I created a VM with pfSense and put all of the VMs behind it with local 10.0.0.x IP addresses.

They now want to add 2 more nodes. My primary node has 12 GbE NICs available, and my second and third nodes will have 4 GbE NICs each. I know I could add virtual adapters in pfSense and connect the new nodes with private IPs, or have them pull public IPs from the virtual IPs in pfSense.

My thought is that I would create 3 bridges across the 4 NICs behind pfSense. Bridge 1 would handle cluster traffic, bridge 2 would handle storage-sharing traffic, and bridge 3 would handle the WAN connections for all 3 servers.
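For illustration, those three bridges might look something like this in `/etc/network/interfaces` on a Proxmox host. This is only a sketch: the NIC names (eno1-eno3) and the 10.10.x.x addresses are assumptions, not values from this thread.

```
# /etc/network/interfaces fragment (sketch; names/addresses are placeholders)
auto vmbr1
iface vmbr1 inet static
    address 10.10.1.1
    netmask 255.255.255.0
    bridge_ports eno1        # bridge 1: cluster traffic
    bridge_stp off
    bridge_fd 0

auto vmbr2
iface vmbr2 inet static
    address 10.10.2.1
    netmask 255.255.255.0
    bridge_ports eno2        # bridge 2: storage-sharing traffic
    bridge_stp off
    bridge_fd 0

auto vmbr3
iface vmbr3 inet manual
    bridge_ports eno3        # bridge 3: WAN uplink; the pfSense VM attaches here
    bridge_stp off
    bridge_fd 0
```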

My question: could the first node be configured so that it, along with the second and third nodes, is assigned a private IP from the pfSense guest that autostarts on it, so that I could cluster without renting another 1U of rack space in the datacenter just for a physical switch?

I'm also totally open to suggestions on how to cluster and share a private network in a datacenter on the same VLAN.
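For reference, once the nodes can reach each other over a private network, the clustering itself is only a few commands (the cluster name and IP below are placeholders):

```
# On the first node:
pvecm create demo-cluster

# On each additional node, pointing at the first node's
# private (cluster-bridge) address:
pvecm add 10.10.1.1

# Check membership and quorum:
pvecm status
```

One caveat: Proxmox VE 4.x clustering relies on corosync multicast, so whatever virtual or physical switch carries the cluster network has to pass multicast traffic correctly.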
 
Just a remark: if you use your 1st server as a switch, then you should not use HA on the cluster (if the 1st server is down, the whole cluster will go down).
And another: if pfSense is not in HA, then if the pfSense VM is down, you will not be able to assign IPs to your servers.
In brief: buy and install a switch (MikroTik, for example), or expect big problems if the 1st server goes down :).
If you really do not want to buy a switch, then build redundant paths across your servers (at least 2 ports per traffic type on each server; 4 would be better).
 

This is for a test and demo environment, so these systems are not mission-critical. Additionally, they are in a datacenter 15 min from my home. We have 3 servers, 3 WAN connections, and 15 public IPs, so ideally we'd like to virtualize it if possible. Would it be possible for the node with the virtual router to also have a WAN IP?

That's the way it is now. The Proxmox host has <public ip #1> and the pfSense VM has <public ip #2>, with virtual IPs for the rest of our range available in pfSense, but all of the machines have local IPs. We then use NAT to attach ports on the public IPs to various machines.
 
This is for a test and demo environment, so these systems are not mission-critical. Additionally, they are in a datacenter 15 min from my home. We have 3 servers, 3 WAN connections, and 15 public IPs, so ideally we'd like to virtualize it if possible. Would it be possible for the node with the virtual router to also have a WAN IP?
Yes


That's the way it is now. The Proxmox host has <public ip #1> and the pfSense VM has <public ip #2>, with virtual IPs for the rest of our range available in pfSense, but all of the machines have local IPs. We then use NAT to attach ports on the public IPs to various machines.
You can use the same principle on every node (one pfSense per node), or use OVS (though I don't know whether it will work in your environment). But simpler: create a 4th LAN for the private VM networks (you could use VLAN tagging on it to create several security zones behind pfSense).
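That 4th LAN with VLAN tagging could look something like this on each node. Again a sketch: the NIC name is an assumption, and the `bridge_vlan_aware` option requires a reasonably recent Proxmox/kernel.

```
auto vmbr4
iface vmbr4 inet manual
    bridge_ports eno4
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes    # guest NICs can carry per-VM VLAN tags
```

Guest NICs attached to vmbr4 with different VLAN tags then land in different security zones, each one routed and firewalled by pfSense.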
 

In this one-pfSense-per-node situation, would they all be on the same LAN via the datacenter VLAN, with each pfSense having a public IP and a range of virtual IPs?

My desired outcome is proxmox clustering for the hosts and shared LAN across all guests in the cluster with NAT routing from any public IP to any guest OS in the cluster. Perhaps there's another way to do it?
 
With one pfSense per node, each can have a public IP on one bridge and a private IP on another.
With your servers #2 and #3 switched through server #1, you will be able to do what you want.
The question after that is how you obtain public IPs and how they are routed to your servers. Example with online.net: you have to set up specific routing when VMs get public IPs.
 
I may still try this at some point. For now, I bought a Supermicro 1U 14"-deep box with a decent quad-core Xeon, 8 GB RAM, and 8 gigabit Ethernet ports. I'll install it with 2 WAN ports at 1 Gbps each and enable multi-WAN. Then with the remaining 6 ports I'll add a management and a private LAN connection for each server. Either way I was going to have to add 1U of colo for either a switch or a router. I figured this was more in line with what we wanted to do, and it removes the possibility of the VM router going belly-up and cutting everything off.
 
I configured an HA cluster and also considered pfSense, as I was already familiar with it. But the moment you want an overlay network for your VMs so they can talk to each other, you either need a pfSense on each Proxmox node with a virtual IP (OVS doesn't work well with multicast), or the pfSense must know routes to each Proxmox node (not fun to set up).

I dropped this solution because I also needed public addresses assigned directly to my VMs and didn't want NAT.

My cluster has, on each Proxmox node, a bridge for the public addresses, one for the private addresses, and a NAT network. Each bridge is connected to the same bridge on the other Proxmox servers through a tinc VPN. Works like a charm, no problems!

As I have public addresses assigned to the VMs, I also implemented BGP with our hosting provider so they know how to reach the VMs on their public addresses, and I use ucarp to create a highly available default gateway for my VM network with public addresses.
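For anyone wanting to replicate the ucarp part, the gateway VIP boils down to an invocation like this on each node (all addresses, the password, and the script paths are placeholders):

```
# Run on every node; whichever node wins the CARP election
# holds the virtual gateway address 10.0.0.1
ucarp --interface=vmbr2 --srcip=10.0.0.2 --vhid=1 --pass=changeme \
      --addr=10.0.0.1 \
      --upscript=/etc/ucarp/vip-up.sh \
      --downscript=/etc/ucarp/vip-down.sh
```

The up/down scripts just add or remove the VIP with `ip addr add`/`ip addr del`; when a node dies, another node's ucarp takes over the address and the VMs keep their default gateway.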

In your case, you can create your overlay networks with tinc and put a pfSense on each Proxmox host, since NAT isn't a problem for you. These pfSenses can share a virtual IP (with CARP) so your LAN has one gateway.
 
If I am understanding the scenario correctly, what you are trying to achieve is certainly possible. With a virtualized pfSense cluster and Open vSwitch, this sort of setup is not a problem at all. I would implement VLANs all around. We have deployed a similar configuration to virtualize as much as possible in those 2 scenarios: 3 pfSense VMs work together to provide the net connection, or other services such as DHCP if you want, while Open vSwitch handles the rest of the networking between nodes.

You will also need a physical switch with VLAN capability.
 
@ghusson for interconnecting the servers and creating an overlay network I use tinc. It works like a charm!!!
I tried OVS and plain Linux networking with GRE tunnels, but it wasn't great, and Proxmox 4.4 can't handle OSPF due to a kernel panic, so no failover.

I implemented tinc: https://www.digitalocean.com/commun...tinc-vpn-to-secure-your-server-infrastructure
It needed a little tweaking, but nothing too complicated. It's bulletproof and creates a mesh, so no single point of failure. And in the startup script you can have it attach to a bridge, so on boot, vmbr0 on each Proxmox node can be connected to the others, for example.

Then you have your VM network fully virtualized, but it can only switch, not route; the routing you must do with pfSense or routing tables.
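For the record, a minimal tinc setup in switch mode that attaches to a Proxmox bridge on boot could look like this. The net name "vmlan", the node names, and the bridge are assumptions, and host-file/key exchange between nodes (e.g. `tincd -n vmlan -K` to generate keys) is omitted:

```
# /etc/tinc/vmlan/tinc.conf on node1
Name = node1
Mode = switch            # layer-2 mode, so the bridged segments form one LAN
ConnectTo = node2
ConnectTo = node3
```

```
#!/bin/sh
# /etc/tinc/vmlan/tinc-up (must be executable):
# bring the tunnel interface up and attach it to the VM bridge
ip link set "$INTERFACE" up
brctl addif vmbr0 "$INTERFACE"
```

With `Mode = switch`, tinc forwards Ethernet frames (including broadcast), so the bridges on all nodes behave like ports of one virtual switch.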
 