OPNsense best practices?

Dunuin

Hi,

I want to set up two OPNsense VMs in HA mode so they use pfsync to communicate, and if one VM fails the other one takes over. The idea is that when one of the servers has problems and needs to be repaired, my home network won't be offline for days or weeks.

I only have 1 Proxmox server, 1 FreeNAS server and a managed switch. Both servers have 2x Gbit Ethernet + 1x 10G SFP+, and I was able to set up tagged VLANs so both servers use 7 VLANs over that single 10G NIC. I've set up 7 bridges on each server, each one connected to a different VLAN. I've read that using a virtual NIC as the WAN port isn't the best idea and that it is more secure to pass through a physical NIC to the VM and use that as WAN. I already passed through one of the Gbit NICs to the VM on the Proxmox server and want to do the same on the FreeNAS server.
My first idea was to add 7 virtio NICs to the OPNsense VM, one for each bridge/VLAN, but then I asked myself whether that is the best way. As far as I know, OPNsense also supports VLANs. Would it be more performant/secure to just use 1 virtio NIC connected to a VLAN-aware bridge handling 7 tagged VLANs, or is it better to create 7 virtio NICs so I don't have to handle VLANs inside the VM because the bridge behind each virtio NIC already does the tagging?
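
For reference, this is roughly what I mean by the single VLAN-aware bridge option on the Proxmox side (only a sketch; enp1s0 is a placeholder for my 10G trunk port):
Code:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
The OPNsense VM would then get a single virtio NIC on vmbr0 with no VLAN tag set, so all tagged VLANs reach the VM and are split up inside OPNsense.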

Another thing I don't understand is when to enable or disable hardware offloading, and where to disable it.
If I PCI-passthrough a single port of a multiport NIC to an OPNsense VM, do I need to disable hardware offloading on the host or inside the VM?
If I use a virtio NIC that isn't connected to a physical NIC, should I disable hardware offloading?
And what about my 10G NIC? I use it on the host itself, but virtio NICs are also connected to a bridge on that NIC. Do I need to disable hardware offloading on the host too if VMs use that NIC through virtio NICs?
 
It doesn't matter whether you have a physical interface or use VLANs for WAN.


I would install PVE on the FreeNAS server and set up a TrueNAS VM with PCIe passthrough of an HBA or a RAID controller flashed with IT firmware.

You can then also set up a PVE cluster with a Raspberry Pi as QDevice so you can migrate VMs for maintenance.


Set up one OPNsense instance on each PVE node; make sure to begin with the CARP setup, otherwise you'll have to redo everything.

Use PCIe passthrough for NICs if you want hardware offloading, otherwise disable hardware offloading in OPNsense under
Code:
# Interfaces -> Settings: Check "Disable CRC, TSO and LRO hardware offload" and "Disable VLAN Hardware Filtering".

I did not go with NIC passthrough; benchmarks showed no difference.

Either way, manage the VLANs inside OPNsense, it's easier: you need 1 bridge for all your VLANs and another dedicated bridge for CARP/pfsync.


My setup is as follows:
2x 1G trunk for all VLANs
1x 1G for corosync, CARP, pfsync
1x 10G for storage replication PVE1 <-> PVE2
 
It doesn't matter whether you have a physical interface or use VLANs for WAN.
So using an untagged port on the switch with a dedicated VLAN for WAN is fine? And I could send that WAN VLAN from the switch over the tagged port to the servers and use virtio NICs on that WAN VLAN for OPNsense too?
Couldn't that be problematic because all WAN traffic is routed through a bridge on the host?
I would install PVE on the FreeNAS server and set up a TrueNAS VM with PCIe passthrough of an HBA or a RAID controller flashed with IT firmware.

You can then also set up a PVE cluster with a Raspberry Pi as QDevice so you can migrate VMs for maintenance.
I also thought about that, but the FreeNAS server is just a quad-core Xeon E3 with already maxed-out RAM (32GB), and the mainboard only supports 2x PCIe 3.0 x8 + 1x PCIe 2.0 x4. If I wanted to virtualize FreeNAS I would need to buy a second HBA (so both HBAs use the x8 slots) and the ConnectX-3 NIC (PCIe 3.0 x4 interface) would only get that PCIe 2.0 x4 slot. In theory that should be fine (PCIe 2.0 x4 should be able to transfer 4x 5Gbit, so even with PCIe overhead that should be enough for 10Gbit). But if I need at least 16GB RAM for the FreeNAS ZFS and 4 to 8GB RAM for the Proxmox ZFS, there would not be much RAM left to run VMs. Right now my Proxmox server has all RAM slots filled and around 80% of those 64GB are already used. So if the Proxmox server failed, the FreeNAS server wouldn't be capable of running most of the VMs anyway.
Either way, manage the VLANs inside OPNsense, it's easier: you need 1 bridge for all your VLANs and another dedicated bridge for CARP/pfsync.
Why are you using a dedicated NIC for CARP/pfsync? Only so that traffic on the VLANs can't slow down the network-critical CARP/pfsync?
 
So using an untagged port on the switch with a dedicated VLAN for WAN is fine? And I could send that WAN VLAN from the switch over the tagged port to the servers and use virtio NICs on that WAN VLAN for OPNsense too?
That's how it's done.

Couldn't that be problematic because all WAN traffic is routed through a bridge on the host?
By that definition all VLAN-tagged traffic would be bad; there is no difference between LAN, DMZ, WAN traffic, etc.

That's what VLAN encapsulation is for.

Why are you using a dedicated NIC for CARP/pfsync? Only so that traffic on the VLANs can't slow down the network-critical CARP/pfsync?

Mixing it with other VLAN traffic would increase latency.
 
If I route WAN through the VLAN trunk (10G NIC) and pfsync through one of the 1Gbit NICs I would have one Gbit NIC unused.
Is it possible to use that Gbit NIC as failover for the 10G NIC?

My switch supports static or LACP trunking, as well as STP, RSTP or MSTP. For load balancing I can choose between "Source/Destination MAC", "Source/Destination MAC-IP" and "Source/Destination MAC-IP-TCP/UDP Port".

If I understand it right, I could use MSTP without a bond, but Linux bridges only support STP, and STP can't handle VLANs, so no VLAN tagging.
I've read that LACP bonding of 10G + 1G isn't a good idea because it can slow down the connection if it tries to balance the load between the two NICs and one of them is much slower.

In Proxmox I saw this description of the Linux bond modes:
Active-backup (active-backup): Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface’s MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.
That sounds like what I want: only use the 10G NIC and switch to Gbit if the 10G link fails. But how does that work with the switch? Do I need to configure a bond on the switch at all, or is it enough to just use the same VLAN tagging configuration for both ports on the switch because only one is used at a time?

What would be the best way to do this?
 
I want to set up two OPNsense VMs in HA mode so they use pfsync to communicate, and if one VM fails the other one takes over

Hi,

HA for a VM has nothing to do with pfsync or any other software that runs inside the VM. As the name suggests, it only relates to the VM... or more exactly to any VM (so to any OS).
..... and if both VMs cannot communicate, can pfsync do ANY job? Or maybe I can ask: what is pfsync's job, in your opinion?
 
Hi,

HA for a VM has nothing to do with pfsync or any other software that runs inside the VM. As the name suggests, it only relates to the VM... or more exactly to any VM (so to any OS).
..... and if both VMs cannot communicate, can pfsync do ANY job? Or maybe I can ask: what is pfsync's job, in your opinion?
I don't mean HA of Proxmox itself. OPNsense can run in HA mode even if you install it bare metal on two servers, as long as the hardware is compatible and the interfaces are named identically. Both OPNsenses run all the time in master-slave mode, share virtual IPs, and use pfsync and some other mechanisms to sync the configs, TCP packet states and so on. If the master OPNsense fails, the slave OPNsense kicks in within seconds and uses the same virtual IPs, so for all other hosts nothing changes. So as long as one of the two VMs is running, everything should be fine and routing/firewalling works. I still have a lot of single points of failure (switch, internet connection, the router I got from my ISP, all other VMs), but that shouldn't be a big problem because it is only my home network. If the switch or ISP router fails I can order a new one and easily replace it. None of my VMs (except OPNsense/Pi-hole) is really critical, and I can live some days/weeks without them if the complete Proxmox server fails. And I have 2 FreeNAS servers with weekly replication, so I wouldn't lose any data, and the Proxmox VMs are backed up to both NAS, so I can't lose much. And everything on all servers is RAID1 or RAID5.

I was just afraid of switching to OPNsense and VLANs without an OPNsense HA configuration, because if I segment my LAN into 10 VLANs and the OPNsense stops working, everything is isolated and nothing works anymore. And I wouldn't be able to temporarily replace the OPNsense with normal cheap routers because they can't use VLANs or route between that many networks.
 
If I route WAN through the VLAN trunk (10G NIC) and pfsync through one of the 1Gbit NICs I would have one Gbit NIC unused.
Is it possible to use that Gbit NIC as failover for the 10G NIC?

My switch supports static or LACP trunking, as well as STP, RSTP or MSTP. For load balancing I can choose between "Source/Destination MAC", "Source/Destination MAC-IP" and "Source/Destination MAC-IP-TCP/UDP Port".

If I understand it right, I could use MSTP without a bond, but Linux bridges only support STP, and STP can't handle VLANs, so no VLAN tagging.
I've read that LACP bonding of 10G + 1G isn't a good idea because it can slow down the connection if it tries to balance the load between the two NICs and one of them is much slower.

In Proxmox I saw this description of the Linux bond modes:

That sounds like what I want: only use the 10G NIC and switch to Gbit if the 10G link fails. But how does that work with the switch? Do I need to configure a bond on the switch at all, or is it enough to just use the same VLAN tagging configuration for both ports on the switch because only one is used at a time?

What would be the best way to do this?

Yeah use active-backup in that case.

Active-backup without a LAG works because when the kernel switches interfaces it sends a new ARP message to update the switch's table entry.

If you can create a static LAG on the switch without a load-balancing option, that would be easier, because the switch then automatically tags both ports with the VLANs you set.

Note that active-backup only helps if the switch goes down, a cable disconnects or a NIC dies. It won't protect against packet loss, bad negotiation (like 100 Mbit due to a faulty cable), etc.

If your switch supports MLAG, get a second one to remove the switch as a SPOF.

Also add it to your monitoring if it supports SNMP.
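
As a rough sketch of what the active-backup bond could look like in /etc/network/interfaces on the PVE host, with the 10G port as primary (interface names are just examples, adjust to your hardware):
Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp5s0 eno1
        bond-mode active-backup
        bond-primary enp5s0
        bond-miimon 100

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
bond-primary keeps the 10G link active whenever it is up; the 1G port only takes over while the 10G link is down.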
 
TCP packet states and so on. If the master OPNsense fails, the slave OPNsense kicks in within seconds and uses the same virtual IPs, so for all other hosts nothing changes. So as long as one of the two VMs is running, everything should be fine and routing/firewalling works

Hi,

This is true ONLY IF you do not have a split-brain scenario (each VM cannot "see" the other VM, so each of them thinks it is the MASTER).
 
Hi,

This is true ONLY IF you do not have a split-brain scenario (each VM cannot "see" the other VM, so each of them thinks it is the MASTER).
I thought that is why a dedicated NIC with a direct link between them for syncing is recommended, so they can always reach each other with low latency. As far as I understand, there is also only one IP the other hosts can see, and both OPNsenses share that IP using CARP, so both OPNsenses should never be active as master at the same time?
 
I thought that is why a dedicated NIC with a direct link between them for syncing is recommended, so they can always reach each other with low latency. As far as I understand, there is also only one IP the other hosts can see, and both OPNsenses share that IP using CARP, so both OPNsenses should never be active as master at the same time?

Split brain doesn't matter for CARP. It's a simple standby failover.

If a problem occurs on the master it always demotes itself and the backup takes over.

CARP runs on all interfaces; pfsync uses a dedicated NIC to sync states and settings.
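
If you want to check the CARP state from the OPNsense shell later, something like this is enough (just a generic FreeBSD check, nothing OPNsense-specific):
Code:
# lists the carp state line (MASTER/BACKUP, vhid, advskew) of every configured VIP
ifconfig | grep 'carp:'
# demotion counter; anything greater than 0 means this node has demoted itself
sysctl net.inet.carp.demotion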
 
Split brain doesn't matter for CARP. It's a simple standby failover.

If a problem occurs on the master it always demotes itself and the backup takes over.

CARP runs on all interfaces; pfsync uses a dedicated NIC to sync states and settings.
Yes, so that shouldn't be a problem, right?

I tried to set up VLANs and bridges on the FreeNAS server, and it looks like FreeBSD bridges aren't VLAN-aware. So I created a "lagg0" bond of my 10G and 1G NIC in "failover" mode, which looks like "active-backup" here on Proxmox. I also created 10 VLAN interfaces for my 10 VLANs assigned to that "lagg0" bond and created 10 bridges, one for each VLAN interface. It would be so much easier if I could just use a VLAN-aware bridge on FreeBSD, so the OPNsense VM could use 1 interface and 1 bridge instead of 10...
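
For the record, what I did on the FreeBSD side is roughly equivalent to this (interface names are made up, mlxen0 = 10G, igb0 = 1G, and only VLAN 42 is shown; I repeated the VLAN/bridge part 10 times):
Code:
# failover lagg; the first laggport added becomes the primary port
ifconfig lagg0 create
ifconfig lagg0 laggproto failover laggport mlxen0 laggport igb0 up
# one tagged VLAN interface plus one bridge per VLAN
ifconfig vlan42 create vlan 42 vlandev lagg0 up
ifconfig bridge42 create
ifconfig bridge42 addm vlan42 up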

Someone in the FreeNAS forum said that it is not a good idea to bond two NICs that use different drivers because that could cause problems. Does anyone know if that is also true for Linux bonds?

Edit:
I finally got OPNsense 21.1 running on FreeNAS with 11 virtio NICs. They are all called vtnet0-vtnet10, like the ones here on Proxmox, and are attached to bridges with the right VLANs, so the interfaces should be the same on both VMs.
Next I will try to set up IPs and CARP for HA. I hope that works on FreeNAS too.
 
Split brain doesn't matter for CARP. It's a simple standby failover.

If a problem occurs on the master it always demotes itself and the backup takes over.


Not totally true, because...:

You have 2 VMs/servers that use ucarp and pfsync. Now you have bad luck and a network problem (the CARP/ucarp peers cannot communicate), so both VMs/servers will bring up the VIP. Some clients will use the VIP on VM/server A and others on B. In this case, if you have an application like a DB (on A and B with some kind of replication) or a filesystem, then A and B will end up with different data/views.

At the routing/firewall level it could be OK (in most cases), but in other scenarios it could be very BAD !!!!

Good luck /Bafta !
 
What would be a normal speed I could achieve with a 10G SFP+ NIC?

If I connect both hosts with the ConnectX-3 NICs directly via DAC or fibre I get around 9Gbit. If I connect both hosts over the switch with the failover bond I get around 7.5Gbit. If I start an iperf from a Linux VM (virtio) to the other bare-metal server I get around 5.5Gbit, and TCP packets get lost and need to be resent.

I will try to optimize it, but it would be nice to know what a reasonable target should be.

What could cause the packet loss?
Should I disable hardware offloading on the host and/or inside the VM if I use the same NIC for host and VMs?
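
If it helps, I plan to check and temporarily toggle the offloads on the host like this (the NIC name is a placeholder for my 10G trunk port):
Code:
# show the current offload settings of the physical NIC behind the bridge
ethtool -k enp1s0 | grep -E 'segmentation|receive-offload'
# temporarily disable the common offloads to see whether the packet loss goes away
ethtool -K enp1s0 tso off gso off gro off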

How exactly does MTU work?
Right now both NICs are set to 1500 MTU (although both 10G NICs and the switch would support 9000 MTU), because some SMB shares inside VMs got unstable when I increased the MTU to 9000.
What happens if the NICs, bridges and VLAN devices are set to 9000 and the VMs or other devices in the network are set to 1500? Will they negotiate a common packet size? I would think sending packets with a size of 1500 is no problem over interfaces that are set to 9000 MTU. But what happens if my host with 9000 MTU tries to send a packet to my WiFi router with only 1500 MTU? Could that cause problems, or will they negotiate a common packet size so that 1500 is used for everything between the two?
And do I need to lower the MTU to 8996 on the VLAN interfaces because the tagging adds 4 bytes?
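
To test the jumbo-frame path I would probably just ping with the don't-fragment bit set and the maximum payload for the given MTU (the target address is a placeholder):
Code:
# 9000 MTU: 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes payload
ping -M do -s 8972 192.168.43.10
# standard 1500 MTU: 1500 - 28 = 1472 bytes payload
ping -M do -s 1472 192.168.43.10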
 
@H4R0:
I have now installed OPNsense on both servers and used this guide to set up HA: https://www.thomas-krenn.com/de/wiki/OPNsense_HA_Cluster_einrichten

At first I thought it was working (because the Internet works if I shut down either VM), but when I look at my ISP's router it looks strange...

Firewall 1 is using:
DMZ 192.168.42.2
LAN 192.168.43.2
PFSYNC 192.168.4.2
WAN 192.168.0.2

Firewall 2 is using:
DMZ 192.168.42.3
LAN 192.168.43.3
PFSYNC 192.168.4.3
WAN 192.168.0.3

Firewall 1's CARP dashboard plugin shows me this:
WAN@1 MASTER 192.168.0.4
LAN@3 MASTER 192.168.43.1
DMZ@5 MASTER 192.168.42.1

Firewall 2's CARP dashboard plugin shows me this:
WAN@1 MASTER 192.168.0.4
LAN@3 MASTER 192.168.43.1
DMZ@5 MASTER 192.168.42.1

My ISP's router (Fritzbox) has the IP 192.168.0.1.

Pfsync is working and I can sync configs from firewall 1 to firewall 2.

What looks strange to me:

1.) The dashboards of both firewalls show "MASTER" at the same time. Shouldn't one be shown as SLAVE or something like that?

2.) When I look at my Fritzbox I always see two hosts with the same IP (192.168.0.4) but different MACs connected. But there is always only 192.168.0.2 OR 192.168.0.3 connected, and both use the identical MAC, even when both OPNsense VMs are running.
If I shut down one VM, 192.168.0.2 switches to 192.168.0.3, and if I start that VM again and shut down the other VM it switches back from 192.168.0.3 to 192.168.0.2.

I thought the idea was that firewall 1 is always connected as 192.168.0.2 with a unique MAC, firewall 2 always as 192.168.0.3 with a unique MAC, and that there should be only one host with 192.168.0.4 (the virtual IP) connected at a time, with 192.168.0.4 pointing to whichever node is master. So both VMs should share the IP 192.168.0.4 and its MAC, but only one of them at a time.

3.) If I ping google.de I get this:
Code:
--- google.de ping statistics ---
7 packets transmitted, 7 received, +2 duplicates, 0% packet loss, time 257ms
rtt min/avg/max/mdev = 5.099/5.221/5.384/0.103 ms
I have never seen duplicates before. I thought maybe both VMs are running as master in parallel and that's why I receive duplicate answers?
If I shut down one of the two VMs, ping shows normal results without duplicates.

Do you know what could have gone wrong?
I have already double-checked my config against the tutorial, but I don't see what I could have done differently.
 
Yeah the second firewall should show "BACKUP" on the dashboard.

Can you double check with the official documentation https://docs.opnsense.org/manual/how-tos/carp.html
I already did that and wasn't able to see anything I missed.
Did you create firewall rules to allow CARP on all interfaces? You can use floating rules to make it simpler.
I created a CARP rule for every interface like this:
carp.png
And there is an auto-generated floating CARP rule:
carpfloat.png
Maybe post screenshots of everything related.
Anything specific that could help?

Here are some:

Dashboard on VM 1:
dashboard1.png

Dashboard on VM 2:
dashboard2.png

Virtual IP Settings on VM 1:
vip_settings1.png

Virtual IP Settings on VM 2:
vip_settings2.png

Virtual IP Status on VM 1:
vip_status1.png

Virtual IP Status on VM 2:
vip_status2.png

Outbound NAT on VM1:
outbound_nat1.png

Outbound NAT on VM2:
outbound_nat2.png
 
LAN FW rules on VM1 (same on VM2):
lan1.png

WAN FW rules on VM1 (same on VM2):
wan1.png

DMZ FW rules on VM1 (same on VM2):
dmz1.png

PFSYNC FW rules on VM1 (same on VM2):
pfsync1.png

Floating FW rules on VM1 (same on VM2):
floating1.png

My ISP Router:
fritzbox.png

Log VM1:
log1.png

Log VM2:
log2.png

The primary OPNsense should be VM1. VM1 is where I change stuff and sync it to VM2.

The strange thing is that both VMs want to use the same MAC for the non-virtual WAN IPs. The virtio NICs have different MACs, so CARP is somehow spoofing the MAC for the non-virtual IPs instead of only for the virtual IP.

And I don't get why both OPNsenses report that they are MASTER at the same time when syncing the configuration from VM1 to VM2 works fine.
 
The strange thing is that both VMs want to use the same MAC for the non-virtual WAN IPs. The virtio NICs have different MACs, so CARP is somehow spoofing the MAC for the non-virtual IPs instead of only for the virtual IP.

You can override the MAC in the interface settings, maybe that helps.

Since both firewalls are master, there must be an issue with the CARP multicast traffic.

Can you log in to the OPNsense shell (set up the serial console or SSH)

and post the output of "ifconfig vtnet0" (remove public IPv6 addresses if needed)?
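
You could also check with tcpdump whether the CARP advertisements from the other node actually arrive (CARP is IP protocol 112); replace vtnet0 with whichever interface carries the VIP:
Code:
# run this on both firewalls; you should see advertisements from both peers
tcpdump -ni vtnet0 proto 112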
 
vtnet0 is my PFSYNC interface:
ssh.png

vtnet1 is my WAN interface:
ssh2.png


In the ISP's router I see these hosts:
Code:
192.168.0.4 with FE:41:DC:03:E2:67
192.168.0.4 with 00:A0:98:6F:54:71
192.168.0.3 with 00:00:5E:00:01:01

If I get that right, then the MAC 00:00:5E:00:01:01 should be used for the virtual IP. Sometimes when 192.168.0.3 disappears and 192.168.0.2 is shown, it is also using 00:00:5E:00:01:01. I've read that the virtual IP's MAC with "vhid 1" should end with "01", vhid 3 with "03" and so on.
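
As far as I understand it, the CARP virtual MAC is just the fixed prefix 00:00:5E:00:01 plus the vhid in hex, e.g.:
Code:
# vhid 1 -> 00:00:5e:00:01:01, vhid 3 -> 00:00:5e:00:01:03
printf '00:00:5e:00:01:%02x\n' 1
printf '00:00:5e:00:01:%02x\n' 3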
 