OPNsense on Proxmox with Virtio Drivers - A Success Story

eminent

New Member
Jan 7, 2025
I’m not sure if this is the right place to post, but I wanted to share my experience after switching from Incus to Proxmox. I made the switch after reading about High Availability and live migrations, and I thought I’d try it out with my OPNsense setup.

At first, I saw a lot of advice saying not to use virtio drivers and to stick with PCI passthrough instead. However, on my consumer hardware, trying to pass through one port of my Intel X540 always grabbed both ports (presumably because both ports end up in the same IOMMU group on my board), which was a problem. After a lot of trial and error, I gave up on PCI passthrough and SR-IOV.
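For anyone in the same boat, a quick way to confirm whether the two ports really do share an IOMMU group is to list the groups on the Proxmox host (standard Linux, nothing OPNsense-specific about it):
Code:
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done

If both NIC ports show up under the same group number, passthrough will always take them together unless the board exposes ACS.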

I decided to give virtio a shot even though I saw a lot of people say it’s not great. Unfortunately, it really hurt my network performance. My 1 Gbps download and 50 Mbps upload dropped to around 150 Mbps down and 40 Mbps up. My kids weren’t happy with that, to say the least!

After trying different setups, I finally got things working by using Open vSwitch (OVS) for the bridges. My WAN is bridged to an Intel I226 NIC, and my internal bridge uses an OVS bond across two Intel X540-BT2 ports. Now, during off-peak times like 4 AM, I can hit my ISP's speed limit, and sometimes even higher, though I'm not sure how accurate that is. During busy times when my kids are streaming, things are still pretty good, as you can see in my screenshots.
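In case it helps anyone, the bond plus bridge can be defined in /etc/network/interfaces using Proxmox's OVS integration (the openvswitch-switch package has to be installed). This is just a sketch of the shape of it, not my exact config; the interface names and bond mode are placeholders you would adapt to your own hardware:
Code:
auto bond0
iface bond0 inet manual
    ovs_bridge vmbr1
    ovs_type OVSBond
    ovs_bonds enp5s0f0 enp5s0f1
    ovs_options bond_mode=balance-slb

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports bond0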

The only problem I’m still dealing with is bufferbloat. OPNsense’s traffic shaping doesn’t fully work with virtio, but I think setting the queues to 6 might help. I’ll be testing that out soon.
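If anyone wants to run the same experiment, the queue count is just part of the vNIC definition, so it can be changed with qm set (the VMID, MAC, and bridge below are copied from my config further down), followed by a full stop/start of the VM to be safe:
Code:
qm set 100 --net0 virtio=BC:24:11:2D:BA:DD,bridge=vmbr2,queues=6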

I’m thinking about writing a guide on this because I’m excited it worked, especially after reading so many negative things about virtio performance. For now, I just wanted to share the good news.

[screenshots: speed test results]

Code:
root@midgard:~# qm config 100
agent: 1
bios: ovmf
boot: order=ide0
cores: 6
cpu: x86-64-v2-AES,flags=+aes
efidisk0: vms:vm-100-disk-0,efitype=4m,size=1M
ide0: vms:vm-100-disk-1,size=32G
machine: q35
memory: 8048
meta: creation-qemu=9.0.2,ctime=1735731223
name: OPNSENSE
net0: virtio=BC:24:11:2D:BA:DD,bridge=vmbr2,queues=8
net1: virtio=BC:24:11:7C:27:A2,bridge=vmbr1,queues=6
numa: 0
onboot: 1
ostype: other
scsihw: virtio-scsi-single
smbios1: uuid=06768059-2e63-4ee6-b4a3-3dc40e0958b4
sockets: 1
tags: opnsense;prod
vmgenid: fa2a3330-ad42-48a1-b7f5-11179d1ba82d
 
Glad you got everything working :) There are a few things I've found out about Open vSwitch; you can do a lot of tweaking once it's up and running.

I use an HP Elite Mini 800 G9 that has an Intel Core i9, 64GB of DDR5, and 2 x 2.5GbE Intel NICs. I have Xfinity internet and am paying for 2Gbps speeds. After applying some tweaks, I am getting 3.1Gbps... not too sure how that works, but that's what fast-cli tells me in Proxmox.
[screenshot: fast-cli speed test on the Proxmox host]

This is in the UniFi Console (Debian) VM:
[screenshot: speed test from the UniFi Console VM]

I am using an Intel I226 2.5GbE NIC for WAN and an Intel I225 2.5GbE NIC for LAN. I only run 3 VMs - pfSense, UniFi Console, and TrueNAS.

I am currently on my laptop, but this is my bufferbloat, latency, and speed test:

[screenshot: bufferbloat, latency, and speed test results]


You will need to look up the specifics of your own NICs and what they can be adjusted to, as these values were chosen for the I226 and I225 NICs in my Proxmox install.

Things I have tweaked are:

Increase Buffer Sizes - Add to /etc/sysctl.conf:
Code:
# Increase buffer sizes for high-throughput workloads
net.core.rmem_max=4194304
net.core.wmem_max=4194304
net.core.rmem_default=262144
net.core.wmem_default=262144
net.ipv4.tcp_rmem=4096 87380 4194304
net.ipv4.tcp_wmem=4096 65536 4194304
net.core.netdev_max_backlog=5000
net.ipv4.tcp_congestion_control=bbr

These settings optimize buffer sizes for high-throughput, low-latency connections, ensuring the NICs can handle the 2.5GbE speed efficiently without packet loss. BBR is designed for high-speed, low-latency networks and adjusts the TCP window dynamically for optimal throughput.
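Assuming a stock Proxmox (Debian-based) kernel, BBR ships as a module that may not be loaded yet, and /etc/sysctl.conf is only read at boot, so something along these lines should activate everything without a reboot:
Code:
# load BBR now and on every boot (module name assumed from the stock kernel)
modprobe tcp_bbr
echo tcp_bbr > /etc/modules-load.d/tcp_bbr.conf

# re-read /etc/sysctl.conf and confirm the congestion control in use
sysctl -p
sysctl net.ipv4.tcp_congestion_control

Many BBR guides also pair it with the fq qdisc (net.core.default_qdisc=fq); whether that helps in this setup is worth testing rather than assuming.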

Adjust OVS Parameters - Commands to optimize OVS:

Code:
ovs-vsctl set Open_vSwitch . other_config:max-idle=30000
ovs-vsctl set Open_vSwitch . other_config:flow-restore-wait=true

These settings fine-tune OVS for better performance and stability in high-speed environments.
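Since other_config is a free-form map (a mistyped key will not throw an error), it can be worth reading the values back to confirm they landed in the OVS database:
Code:
ovs-vsctl get Open_vSwitch . other_config:max-idle
ovs-vsctl list Open_vSwitch . | grep other_config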
NIC-Specific Tuning - Adjust interrupt coalescing for high-speed transfers:
Code:
ethtool -C enp4s0 rx-usecs 50 tx-usecs 50
ethtool -C enp3s0 rx-usecs 50 tx-usecs 50

Reducing interrupt rates balances CPU utilization and throughput.
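Worth noting that ethtool settings do not survive a reboot. You can read the current values back with the lowercase flag, and one common way to persist them (an assumption on my part, not something I have battle-tested) is a post-up hook in /etc/network/interfaces:
Code:
# show the current coalescing settings
ethtool -c enp4s0

# example stanza to reapply the tuning at boot (interface name is a placeholder)
# iface enp4s0 inet manual
#     post-up ethtool -C enp4s0 rx-usecs 50 tx-usecs 50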

Enable Hardware Offload in OVS - For both I225 and I226 NICs:
Code:
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

Hardware offload reduces CPU load by leveraging NIC capabilities for OVS-related tasks.
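From what I understand, OVS only picks this flag up when the daemon restarts, and restarting it on a box that is also your router will briefly interrupt traffic, so time it accordingly. Whether the igc driver behind the I225/I226 actually offloads anything is something I would verify rather than assume:
Code:
# hw-offload is read at daemon start, so restart OVS (expect a brief traffic interruption)
systemctl restart openvswitch-switch

# read the flag back
ovs-vsctl get Open_vSwitch . other_config:hw-offload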
 
Have you tried the e1000 virtual network interface instead of virtio?
I have a few virtualized firewall deployments based on pfSense. Since OPNsense is a fork of pfSense, they should behave similarly. I avoid using an Open vSwitch bridge for a perimeter or edge virtual firewall. The reason is simply Open vSwitch package updates: when Proxmox is updated and there is an Open vSwitch update available, it can disconnect the WAN connection. By keeping it to a simple standard Linux bridge and an e1000 vNIC, you can make sure you do not lose the remote connection.
This probably does not apply to your situation, since it is a home setup and the firewall is probably within your reach. But for an enterprise setup you do not want the firewall connection to go down.
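For anyone who wants to try that, the NIC model is just part of the vNIC definition, so switching it is a one-liner. The VMID and MAC below are copied from the config earlier in the thread, and vmbr0 stands in for a standard Linux bridge; inside OPNsense/pfSense the interface will then show up as em0 rather than vtnet0:
Code:
qm set 100 --net0 e1000=BC:24:11:2D:BA:DD,bridge=vmbr0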
 
Have you tried the e1000 virtual network interface instead of virtio?
Any downsides to using e1000? I see that the e1000 is a Gigabit NIC and wanted to see if that would limit my ISP speeds in pfSense (or any VM).

I actually heard it's best to use VirtIO, but eh I hear a lot of things all the time. ;)
 
I see that the e1000 is a Gigabit NIC and wanted to see if that would limit my ISP speeds in pfSense (or any VM).
You are correct. Both e1000 and e1000e are limited to gigabit, so if the WAN bandwidth is over 1 Gbps, virtio certainly is the way to go.
My comment was more about the use of an Open vSwitch bridge for a perimeter or edge virtual firewall, where a loss of connection can mean a total outage, even for remote management, should the datacenter be in one location and the IT staff elsewhere. Updating the Open vSwitch package can reset the connection outside of a controlled, intentional reboot.

Use of virtio is indeed highly recommended in a virtual environment; it increases performance in almost all cases. My reasoning for using e1000 is strictly operational, based on the environments in question. Some of the Proxmox environments I am referring to are very sensitive, so using e1000 is just to eliminate even the smallest possibility of something going wrong with the virtual firewall's bridge and vNIC.

Other than the virtual firewall, it is virtio plus Open vSwitch bridges all around, top to bottom.
 
I picked virtio and Open vSwitch because I put a switch between my cable modem and the bridge ports of my three Proxmox hosts.

The original idea was to do PCI passthrough, but I struggled with two issues:
- First, if I did an update and needed to reboot my Proxmox box, I would have to power down the VM, migrate it, and then power it up again.
- Second, I couldn't get the performance with the Linux bridge, though I may be able to test that again now with the current settings.

With this setup, I can migrate to any one of the three Proxmox boxes and still keep the internet running.

I was also playing with VyOS, where I had two routers with VRRP and it worked great, but I did not tune it for speed.
 
I was also playing with VyOS, where I had two routers with VRRP and it worked great, but I did not tune it for speed.
Both OPNsense and pfSense support a High Availability cluster, like VyOS, but I do believe the pfSense HA cluster is significantly simpler to configure and more stable. For a virtual firewall setup, a cluster of 2 or 3 firewall VMs should be the way to go. This eliminates the need to migrate anything during a node failure or reboot. I would put the firewall VMs on local storage though, not on shared storage; that way each firewall in the cluster stands on its own, without a possible single point of failure.
 
Both OPNsense and pfSense support a High Availability cluster, like VyOS. For a virtual firewall setup, a cluster of 2 or 3 firewall VMs should be the way to go.
Thank you. I did try out CARP, but from my understanding you need 3 public IP addresses to make it work.

I was able to get VRRP working by using the same MAC address on the WAN and setting priorities, then using an internal VLAN between the 3 Proxmox nodes just for the VMs to communicate over VRRP. It worked great.
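For anyone curious, the VyOS side of that is only a handful of commands. This is a minimal sketch from memory rather than my exact config: the group name, VRID, interface, address, and priority are all placeholders, and depending on the VyOS release the virtual IP leaf is either address or virtual-address:
Code:
set high-availability vrrp group WAN vrid 10
set high-availability vrrp group WAN interface eth0
set high-availability vrrp group WAN address 203.0.113.10/24
set high-availability vrrp group WAN priority 200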

When I tried the same approach with OPNsense for testing, it did not work, but that could have been a configuration issue on my side.
 
