[SOLVED] Proxmox and OPNsense - Network speed issue

BenjaMwaye · New Member · Sep 13, 2021
Hi everybody,

After some research on the web, I think there's no solution with OPNsense, but I'm asking here in case I've missed something.

My config :
  • Proxmox 7 (Debian 11)
  • OPNsense 21.7
  • VirtIO network card (queues=8)
  • Internet connection faster than 1 Gb/s
I did some speed tests under different conditions, and the results show that speed with OPNsense is very bad:
  • Box directly connected to my Debian desktop: 1250 Mb/s down, 700 Mb/s up.
  • Box in NAT mode connected to OPNsense (VM in Proxmox): 550 Mb/s down, 330 Mb/s up.
  • Box in bridge mode with the public IP on OPNsense: 240 Mb/s up and down.
Why is throughput so "bad" with Proxmox/OPNsense?
And why is it so different between NAT and bridge mode on my box?
Could OPNsense and Proxmox be tuned to work well together?
Is there something like qemu-guest-agent I could install in OPNsense to get good network speed?

Thanks in advance for your help and advice.

Best regards,
Benjam
 
For best performance you should buy a dedicated PCIe NIC and use PCI passthrough, so your OPNsense VM can access the NIC directly and physically, without any virtualisation in between.
In general you also want to disable all hardware offloading features of your NIC (disabling them is the default in the OPNsense WebUI), which means your CPU has to do all the work for every single packet. So you want a beefy CPU with a high clock speed, or the CPU can easily become the bottleneck.
For the same reason I would suggest you change your VM's CPU type from the default "kvm64" to "host", because this allows the VM to use all available instruction sets, so your virtual CPU cores will be faster.

If you have three or more onboard NICs, you can also try to PCI passthrough two of them. But that won't always work; it heavily depends on the BIOS, how the mainboard is designed, and the NIC's chipset.
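A minimal sketch of those two changes from the Proxmox CLI (the VM ID 100 and the PCI address are placeholders, not from this thread; find your NIC's address with lspci):

    # Set the CPU type to "host" so the guest sees all host instruction sets
    qm set 100 --cpu host

    # Optional: pass a dedicated NIC through to the VM (requires IOMMU enabled)
    qm set 100 --hostpci0 0000:03:00.0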
 
Easy things to start with and test are as Dunuin stated:
1. Use "host" as the CPU type in the OPNsense VM, and consider setting cpuunits = 2048 (this will prioritize this VM over others using the default 1024).
2. Disable all hardware offloading features of your NIC in the OPNsense web GUI.
3. OPNsense System > Settings > Tunables: I set vm.pmap.pti=0 and hw.ibrs_disable=1 (see https://docs.opnsense.org/troubleshooting/hardening.html on hardening versus performance; default OPNsense chose hardening here, while default pfSense chose performance). A command-line sketch follows below.

Then retest the speed to the internet and with iperf within your LAN. If those changes do not produce acceptable speed, then IOMMU passthrough and/or SR-IOV are the next things to look into.
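A minimal sketch of steps 1 and 3, assuming the OPNsense VM has ID 100 (a placeholder):

    # On the Proxmox host: give the firewall VM twice the default CPU weight
    qm set 100 --cpuunits 2048

    # In OPNsense (System > Settings > Tunables), add these entries:
    #   vm.pmap.pti = 0       disables the Meltdown page-table isolation mitigation
    #   hw.ibrs_disable = 1   disables the Spectre IBRS mitigation
    # Both trade hardening for performance; reboot OPNsense so vm.pmap.pti takes effect.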
 
Hi,

Thanks for your advice.
I didn't have time last weekend.
I'll do some tests with your proposed config and post the results here.

Best regards,

Benjam
 
Hi Benjamin,
Just for info, I did some tests with OPNsense too some time ago, because I wanted to replace a VM running IPFire to use some of the more advanced features offered by OPNsense.
Unfortunately I had the same results as you: poor network performance regardless of the kind of NIC assigned to the VM (either VirtIO or e1000) and of disabling the HW offloading options.
I did not test PCI passthrough because, in the end, I kept IPFire, which let me use the full gigabit internet connection without much problem.

D.
 
Hi,

Thanks for your help, and sorry for taking so long to answer.

@Dunuin and @vesalius: I tried different speed tests on my Proxmox install and optimizations in OPNsense, but the problem comes from something in the Proxmox config or install (the download results are not always reliable, but the upload figures seem consistent).


Tests:
If I test directly from the network card on the Proxmox host, the results are quite good (800-1000 Mb/s down, 550 Mb/s up).
But if I test through the bridge, throughput drops to 150-350 Mb/s up and down. I tested with the bridge attached to OPNsense, to a Debian VM, and directly on the Proxmox host... globally the same results.

However, if I boot the Proxmox server (a laptop) from a Debian live image, the speedtest results are very, very good (1200-2000 Mb/s down and 650-700 Mb/s up).

So it seems impossible to get good network speed through the bridge with my Proxmox config, and I don't know why.
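One way to check whether host-side offloads are involved in a bridge slowdown like this (a diagnostic sketch; eno1 is a placeholder for the physical NIC name):

    # Inspect the offload settings currently active on the physical NIC
    ethtool -k eno1 | grep -E 'tcp-segmentation|generic-(segmentation|receive)'

    # Temporarily disable the common offloads, then rerun the bridge speed test
    ethtool -K eno1 tso off gso off gro off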



The "solution" for my case :
I'll restart from scratch the installation of my Proxmox and OPNsense.
OPNsense is a VM in Proxmox with "processor" params to "host", 4Go RAM, 3 cores, network cards in bridge mode and VirtIO "driver" (for LAN and WAN), uefi - disk with writeback discard iothread ssd option ... (but params must perhaps be optimized or not the best choice).
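For reference, a Proxmox VM config along those lines might look like this (a sketch only: the VM ID, storage name, MAC addresses and bridge names are placeholders, not the original poster's values):

    # /etc/pve/qemu-server/100.conf
    bios: ovmf
    cores: 3
    cpu: host
    memory: 4096
    net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0,queues=8    # WAN
    net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr1,queues=8    # LAN
    scsi0: local-lvm:vm-100-disk-0,cache=writeback,discard=on,iothread=1,ssd=1
    scsihw: virtio-scsi-single    # one iothread per disk with this controller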

And now the speedtest results are very, very good all the time, even when using the bridge in Proxmox (1200-2000 Mb/s down and 650-700 Mb/s up).



@il_benny: I won't try IPFire because the results are good now with OPNsense (and it has some extra features and quite a good web interface). But thanks for the tip; I'll try it if throughput drops again with OPNsense.


It looks solved :)
Thanks a lot for all the time you spent helping me.

Best regards,
Benjam
 
Hi, I also ran into a similar problem. With any Linux virtual machine, iperf3 tests against my PC give around 950 Mbps, but the OPNsense virtual machine only reaches about 350 Mbps.
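For anyone reproducing this kind of measurement, a typical iperf3 run looks like the following (192.168.1.1 is a placeholder for the OPNsense LAN address):

    # On the machine being tested (e.g. the OPNsense VM or a Linux VM):
    iperf3 -s

    # From a PC on the LAN, test both directions:
    iperf3 -c 192.168.1.1        # upload: client -> server
    iperf3 -c 192.168.1.1 -R     # download: server -> client (reverse mode)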
 
I had a similar issue. I created an OPNsense virtual machine in my PVE environment. I don't use PCI passthrough; I use a VMBR bridge. The E1000 network card only reached about 80 Mbps, but switching to VirtIO increased it to about 500 Mbps, which is half of 1 Gbps. After adjusting the following settings in OPNsense, the problem was resolved and the speed reached 900 Mbps. I hope this helps everyone.

[Screenshot: the OPNsense interface settings that were adjusted]
 
Hello zhihuiyuze,

I'm facing similar issues to yours: low throughput in OPNsense using VirtIO on a vmbr bridge.
But I'm unable to use your settings above, as they break network connectivity for me...
To confirm, would you please kindly share your /etc/network/interfaces file here, so I can try to understand how this worked for you?
 
Agree with you, @jauling. If those offloads are enabled, the entire internet connection on the LAN drops. But on OPNsense itself I am still able to ping Google from the CLI.
 
Thanks @vl4di99 for the confirmation. At least I now know there's no alternative for me but to wait for OPNsense to improve VirtIO performance, or to switch to a hardware switch with NIC passthrough, forgoing any Linux bridge benefits.
 
I don't think passthrough would help. I tried passing my NIC (Intel X722 1GbE) through to the OPNsense VM, but the speeds remained the same as with VirtIO. I also tried switching to the Intel E1000 and other models, but it stayed the same. I then tested the speed of the NIC connected directly with DHCP in Proxmox and got about the same results as with OPNsense, no noticeable difference. What I configured on the OPNsense hardware looks like below:

[Screenshot: the OPNsense VM's hardware configuration in Proxmox]

Something else I did was to enable NUMA for the OPNsense VM, but that was only needed because the host has two physical CPUs; otherwise it's not needed. I also tweaked the bridges in Proxmox (under /etc/network/interfaces) as follows:
[Screenshot: the tweaked bridge definitions in /etc/network/interfaces]
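The exact tweaks in that screenshot aren't recoverable here, but for context a plain Proxmox bridge stanza in /etc/network/interfaces normally looks like this (interface names and addresses are placeholders):

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0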
I also tested the LAN speeds using iperf3 and got 30 Gbps between the host and a VM, and 1 Gbps between the host and a LAN PC. I also used fast.com on the Proxmox host and got about 475 Mbps, the ISP speed being capped at 500 Mbps, which was the same as without OPNsense.
Tip: the multiqueue value in the VM's hardware settings needs to equal the number of vCPU threads (in my case 2*4).
Let me know if your situation improves.
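Setting that multiqueue value from the Proxmox CLI could look like this (the VM ID 100 and the MAC/bridge are placeholders):

    # 8 queues to match 8 vCPU threads (2 sockets x 4 cores in this example)
    qm set 100 --net0 virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0,queues=8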
 
I love you, man!!!

Those steps from @vesalius solved the issue I've been fighting against for the past two days...