OPNsense VM: low inter-VLAN throughput, is SR-IOV the play?

BICEPS

Active Member
Aug 20, 2019
I am running an OPNsense VM in Proxmox, and while VM-to-VM throughput on the same VLAN can hit ~32 Gbps, routing across VLANs incurs a ~94% penalty, dropping it to about 2 Gbps.

I understand this is a limitation of emulating an L3 switch in software; real switches typically have tailor-made ASICs for routing packets. I have tried: a) setting multiqueue to 4, b) confirming hardware offloading is disabled (the default), and c) setting the CPU type to host and adding the aes flag anyway just to be sure.
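
For reference, this is roughly how those settings map to the Proxmox CLI; the VM ID and bridge name below are placeholders, not my exact values:

# CPU type host, with the aes flag added explicitly even though host already implies it
qm set 100 --cpu host,flags=+aes
# virtio NIC on the VLAN-aware bridge with 4 queue pairs (one per vCPU)
qm set 100 --net0 virtio,bridge=vmbr0,queues=4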

Currently I am assessing my options for getting as close to 10G as I can.

1. My current setup, which suffers from the problems detailed above. The i5-7400 sustains 2 Gbps of routing with 4 cores assigned to the VM at ~30-50% CPU utilization.

2. Pass the NIC through to the VM, route the traffic out to the switch, and then back in on another NIC attached to the bridge the other VMs are connected to. This would presumably leverage hardware on the NIC, but it seems a bit clunky and wasteful to have the traffic traverse the whole chain when the destination is on the same machine, not to mention it requires 3 NICs.


3. SR-IOV? I have read you can pass some of the virtual functions (VFs) to the VM to get near-native performance while still allowing the host access to the actual NIC. If I am not mistaken, this would be a mix of options #1 and #2, where inter-VLAN routing is not done by the CPU but offloaded to the NIC?

I was able to create VFs on my X520s, and passthrough to my OPNsense VM worked fine after shuffling PCIe slots around due to IOMMU groupings. However, I am stuck trying to attach a VF to a bridge for the other VMs to use. I am trying to use it the same way I would a VLAN-aware Linux bridge, but it looks like I may have to take a different approach? While I have read a lot on SR-IOV, I admit I am having trouble digesting all this info.
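
For reference, the VF creation and passthrough I am describing is roughly the following; the interface name, VF count and PCI address are placeholders for my hardware:

# create 2 virtual functions on one X520 port (ixgbe PF)
echo 2 > /sys/class/net/enp1s0f0/device/sriov_numvfs
# the VFs show up as their own PCI devices
lspci | grep -i "virtual function"
# pass one VF through to the OPNsense VM (VM ID 100 here)
qm set 100 --hostpci0 0000:01:10.0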

What are my options? Should I take another approach altogether?
 
Hi, interestingly, I looked into this topic just today. My setup is the same as in scenario 1: Proxmox host (Xeon E-2246G), multiple VLANs managed by OPNsense in a VM, containers and VMs on a bridge. I ran into similar numbers routing between VLANs: iperf3 from one host to another through the firewall was about 1.7 Gbit/s, and more connections did not lead to significantly more throughput.
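
(For completeness, the target in the tests below just runs a plain iperf3 server on that port:)

iperf3 -s -p 5201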

iperf3 -c target -P1 -p 5201 -t 30

iperf3 -c target -P4 -p 5201 -t 30


Then I set the OPNsense VM's NIC to multiqueue 4 (because the VM has 4 cores assigned).
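
In config terms that is just the queues option on the VM's net device, roughly like this (MAC and bridge name are placeholders):

net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4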


Running iperf3 now: a) I get a solid 2 Gbit/s with a single connection,


and b) running iperf3 with 4 connections I end up with ~4.5 Gbit/s of total throughput (at the cost of much higher CPU utilization on the host, though).


I think this is a pretty solid improvement.
 

What are you trying to achieve?

For SR-IOV, you basically create additional virtual PCI devices (VFs) and assign each of them the same way you would do PCI passthrough of a complete NIC, just using a virtual PCI device this time.

The host won't touch it; it's delegated to a VM, so no bridging is possible.

Each VF can be assigned to one given VM, not to multiple VMs. All the traffic will go to the switch to reach any destination.
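
You can still set per-VF policy from the host via the PF if you need to; a minimal sketch (interface name and values are just examples):

# give VF 0 a fixed MAC address
ip link set dev enp1s0f0 vf 0 mac 02:00:00:00:00:10
# force all of VF 1's traffic onto VLAN 20 (tagging handled by the NIC)
ip link set dev enp1s0f0 vf 1 vlan 20
# keep MAC/VLAN spoof checking enabled on VF 0
ip link set dev enp1s0f0 vf 0 spoofchk on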