Hello,
I have a virtualized OPNsense router and can't manage to get decent performance when routing packets between VLANs.
On PVE I defined vmbr0 as follows:
Code:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp on
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 1-4094
        pre-up ethtool -G bond0 rx 1024 tx 1024
        pre-up ethtool -K bond0 tx off gso off
        post-up ethtool -K vmbr0 tx off gso off
#Bridge All VLANs to SWITCH
Now I pass vmbr0 to my OPNsense VM as VirtIO; it derives vtnet0_vlan2 and vtnet0_vlan3 properly, serves DHCP properly, and routes traffic between the VLANs according to the firewall rules.
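For reference, the VM's NIC line in its config (/etc/pve/qemu-server/<vmid>.conf) is just the bridge with no VLAN tag, so OPNsense sees all tagged frames and splits the VLANs itself (the MAC here is a placeholder):
Code:
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0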
For testing I use an LXC attached to vmbr0 with VLAN tag 3 (its network line is shown after the config below), and the PVE host itself attached to vmbr2 as follows:
Code:
auto vmbr2
iface vmbr2 inet static
        address 10.2.2.2/24
        gateway 10.2.2.1
        bridge-ports vmbr0.2
        bridge-stp on
        bridge-fd 0
        post-up ip rule add from 10.2.2.0/24 table 2Vlan prio 1
        post-up ip route add default via 10.2.2.1 dev vmbr2 table 2Vlan
        post-up ip route add 10.2.2.0/24 dev vmbr2 table 2Vlan
        pre-up ethtool -G vmbr0.2 rx 1024 tx 1024
        pre-up ethtool -K vmbr0.2 tx off gso off
        post-up ethtool -K vmbr2 tx off gso off
#VMs bridge
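For completeness, the test LXC's network line looks like this (MAC is a placeholder), and the 2Vlan table referenced by the post-up rules above is declared in /etc/iproute2/rt_tables:
Code:
# /etc/pve/lxc/<ctid>.conf - test container tagged on VLAN 3
net0: name=eth0,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=dhcp,tag=3,type=veth

# /etc/iproute2/rt_tables - named table used by the vmbr2 post-up rules
2       2Vlan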
In OPNsense I have disabled everything: CRC (checksum) offloading, TSO, LRO, and VLAN hardware filtering as well.
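From a shell on the OPNsense VM this can be double-checked with plain FreeBSD ifconfig (vtnet0 is the parent interface in my case):
Code:
# the options line should no longer list RXCSUM/TXCSUM/TSO4/LRO/VLAN_HWTAGGING
ifconfig vtnet0
# the same flags can also be toggled off by hand for a quick test
ifconfig vtnet0 -rxcsum -txcsum -tso -lro -vlanhwtag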
All the CPU monitoring I can do shows ample idle time (around 80%) on all CPUs of all three nodes involved during an iperf3 run across VLANs (it's a homelab; nothing else is stressing anything here).
And yet I only get 800-900 MB/s when crossing VLANs...
On the same VLAN I get 18-19 GB/s.
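These numbers come from plain iperf3 runs like the following (the addresses are just examples; the VLAN 3 subnet is analogous to the 10.2.2.0/24 one above):
Code:
# on the target, e.g. the LXC on VLAN 3
iperf3 -s
# on the client, e.g. the PVE host on VLAN 2 (30 second run)
iperf3 -c 10.3.3.10 -t 30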
I also managed to get 12 GB/s from one VLAN to the router, but only by enabling CRC offloading in the virtual OPNsense router... and enabling CRC offload breaks inter-VLAN communication. Same OPNsense VM, no rule changes: with CRC offloaded it's 12 GB/s within one VLAN but no VLAN 2 to VLAN 3 communication possible; with CRC not offloaded, only 850 MB/s...
I'm stuck...
The HW NIC behind the bond is an Intel I225-V rev04; it's alone in the bond for now, but it will later be bonded with a gigabit Realtek so that plugging the cable into the wrong NIC still works.
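In case it matters, the bond itself is nothing special; a minimal sketch of what it looks like, assuming active-backup mode and with enp1s0 as a placeholder for the Intel's interface name:
Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp1s0
        bond-mode active-backup
        bond-miimon 100
#Uplink bond (Realtek to be added as backup slave later)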
If you have any ideas on how I should set this up to achieve >10 GB/s between VMs and LXCs regardless of which VLAN I put them on, anything would be helpful.
Thanks for reading, and thanks in advance for any ideas!