[SOLVED] pfSense in a VM with HA

lifeboy

Renowned Member
I'm trying to figure out how I could install a pfSense VM and make it HA. I have a Ceph cluster of 4 nodes (with more nodes to be added regularly), so I can give each node a public IP address on one of its ethernet ports. Creating a bridge and adding that to the pfSense VM would give me a WAN port that should work from any node (same bridge name on each node), depending on where the Proxmox VM is running. However, pfSense assigns the WAN port to a MAC address. Each node will have a different MAC address, so depending on which node is running the pfSense VM, the MAC address won't be found, so this is not going to work.

My question is: Will that actually work like I described above? Also, is there a better way to do this?

The only option I can think of is to run a pfSense VM on every node and use CARP to sync them. Then if one node goes down, CARP takes care of the failure and nothing is lost.

I'm sure many of you have thought about this or actually done this. What would you propose?
 
Create a bridge, add the NIC port from the public network, and add the public interface of the VM. Then assign the IP inside pfSense. This needs to be the same bridge + NIC port setup on all the nodes. E.g. eth0 -> vmbr0 -> VM | public IP.
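
For reference, a minimal sketch of what that looks like in /etc/network/interfaces on each node, assuming eth0 is the public-facing port (adjust names to your hardware; the node's own address could also live on this bridge):

Code:
# /etc/network/interfaces (sketch) - must be identical on every node
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0

The pfSense VM then gets a NIC on vmbr0, and the public IP is configured inside pfSense rather than on the host.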
 
Hi,

My pfSense runs as a KVM guest with a virtio NIC and a bridge attached to the NIC. I then assign different VLAN interfaces for WAN, LAN, DMZ and others inside pfSense. I use NFS for my disk storage. I needed to migrate it last night and it worked.
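
In case it helps anyone copying this, a sketch of how such a NIC can be attached on the Proxmox side, assuming VMID 100, a VLAN-aware vmbr0, and illustrative VLAN IDs; the VLAN interfaces themselves are then created inside pfSense on top of vtnet0:

Code:
# requires 'bridge-vlan-aware yes' on vmbr0 so tagged frames reach the guest
qm set 100 --net0 'virtio,bridge=vmbr0,trunks=10;20;30'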
 

All my VMs run on Ceph RBD. My objective is to use the Proxmox HA feature to automatically migrate the VM to the next designated node if the node it's running on fails. So I can't entertain having to manually change the WAN port parameters once the VM runs on the new node. It seems like this is a dead end.

A way forward would be to run an instance of pfSense on each node and use CARP as the mechanism to switch to a working node if the running node fails. It's wasteful, since I'll be running a pfSense instance on each node, but the waste is small.
 
pfSense assigns the WAN port based on the MAC address of the port (in this case the bridge). Surely I can't create a bridge on each node with the same MAC address?

OK, I set up a test machine and it seems that pfSense binds the WAN port to the name of the interface, not the MAC address. The MAC address is part of the KVM side of the bridge, so it stays the same regardless of which node pfSense is running on. So I now have two pfSense instances running, with CARP doing the failover between them. Furthermore, each instance is also running in an HA config, so if anything goes wrong with a node, pfSense is automatically migrated to another node. If anything goes wrong with pfSense itself, CARP switches to the peer running on another node, so now I have double redundancy for my firewall.
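
For the Proxmox half of that setup, registering the two VMs with the HA manager is a one-liner each (a sketch, assuming VMIDs 101 and 102 for the two pfSense instances):

Code:
# let PVE-HA restart/migrate the firewalls if their node fails
ha-manager add vm:101
ha-manager add vm:102
ha-manager status    # verify both resources are 'started'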
 
Hi,

Sorry for necroing this, but I'm about to plan the same setup, and I have a few questions about it.

  • So I understand you have 4 dedicated NICs (1 WAN, 1 LAN for each node) for the two pfSense instances, right?
  • While pfSense is running on one of the nodes, the other nodes are actually getting external traffic on their dedicated WAN NICs, waiting to spin up a pfSense in case the first node crashes. So those NICs are not protected, right? Is it OK security-wise to have so many unprotected NICs on the system?
Thank you very much.
 
Any better places to read more on this topic? I have OPNsense inside a VM and would like to have a hot spare to go live while I am doing updates on one or the other, so the network on the LAN side is not affected and I don't otherwise piss off the wife and kids just because I need to do an update on the Proxmox node, the VM, or the OPN/pf install itself. Several updates have hit OPNsense lately (lots of new features and fixes, kudos), but each time I have to plan them for the middle of the night, which means I have to be there to manage it too. I would prefer to take the specific VM out of primary place, do the updates, then restore it and do the updates on the backup. As the OP said, I would like to use Proxmox HA on top of all this, so that the VMs for both the primary and backup sense servers are auto-migrated in the event of a node issue AND have the auto-failover from one sense to the other if there are issues.

Really could use a good write up to read on this setup if you have one anywhere.
 
I have OPNsense inside a VM and would like to have a hot spare to go live while I am doing updates on one
Basic rule: if a service has high availability integrated, then use that instead of PVE-HA. In this case one would usually run a PVE cluster with two instances of OPNsense on two different PVE nodes.

But nothing keeps you from running two OPNsense instances on a single PVE host. In any case you need to set up HA (called "Common Address Redundancy Protocol", CARP, in this context) inside OPNsense as documented at https://docs.opnsense.org/manual/hacarp.html?highlight=carp . Then you can update/maintain/shutdown/crash one of these VMs while the routing capabilities stay up and your LANs keep working without any interruption.
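
For orientation: CARP comes from FreeBSD, and both firewalls share a virtual IP, with the backup taking over when the master stops advertising. OPNsense configures this through the GUI (Virtual IPs), but conceptually it boils down to something like the following per interface (addresses, vhid and password here are made-up examples):

Code:
# master side; the backup uses the same vhid and pass but a higher advskew
ifconfig vtnet1 vhid 1 advskew 0 pass s3cret alias 192.168.1.1/24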

The "only" problem is: this is not trivial so setup. The good news: while these are virtual machines you can setup/manipulate a testbed and work with snaphots during the experimentation/implementation phase.

Disclaimer: I successfully created this scenario last year, but I am not using it in (my homelab in) production. While CARP and failover work as documented, I had a different kind of problem migrating the ruleset of my old router configuration into the OPNsense model...

Best regards
 
Hello

I have tried this setup and there seems to be an issue. I am not sure if it is related to the MAC-spoofing protection on the Proxmox side, although I have disabled the firewall and the MAC/IP filter at the VM level for the pfSense VMs.
There might be a way to disable the spoofing protection cluster-wide, but I cannot do that, since this feature is a must for multi-tenant environments.
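
For reference, the per-VM part of what I disabled can also be seen in the VM's firewall config file (a sketch, assuming VMID 101; these map to the firewall and MAC/IP filter checkboxes in the GUI):

Code:
# /etc/pve/firewall/101.fw (sketch)
[OPTIONS]
enable: 0
macfilter: 0
ipfilter: 0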


- If both pfSense VMs are running on the same node: no issues, all works as expected.
- If the pfSense VMs are running on different nodes: no issues, all works as expected.
- If the pfSense VMs are on different nodes and the primary pfSense fails: the secondary successfully becomes the master, however there is very high link loss/latency between the attached VM and the CARP IP of pfSense. (The CARP IP shows link loss; ICMP to the interface itself has no loss!)
Both pfSense instances have hardware checksum offloading disabled, as recommended by Netgate.
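
For anyone verifying the same thing: the checkbox is under System > Advanced > Networking in pfSense, and on a virtio NIC it amounts to roughly this (a non-persistent sketch from the FreeBSD shell; the GUI setting is the proper way to do it):

Code:
ifconfig vtnet0 -txcsum -rxcsum -tso4 -tso6 -lro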

All nodes and the VMs inside them have a proper network setup and can reach each other with no issues:
- all VLANs exist on the switch and on the Proxmox nodes
- the virtual pfSense instances have 3 interfaces (WAN, SYNC, LAN); each is a separate bridge connected to a separate VLAN
- VMs are attached to the LAN side of the virtual pfSense instances

I tried with SDN and with legacy Linux bridges, same results, although I noticed that VM failover is much faster with Linux bridges (ICMP test).
I tried swapping the MikroTik switch for a Cisco Nexus, same results.

One thing that still needs to be tested is the pfSense network interface type; I was using virtio, a possible thing to consider.
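
For completeness, the ICMP numbers above come from a simple test like this, run from a VM on the LAN side while failing the primary over (addresses are made-up examples):

Code:
ping -i 0.2 192.168.1.1    # CARP VIP - shows the loss
ping -i 0.2 192.168.1.3    # secondary's own interface IP - no loss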
 
