Suricata limiting my network speed in pfSense

RouteThebyte

New Member
Jan 14, 2021
I am running the latest pfSense (2.4.5-p1).
I have a gigabit fiber connection to the internet and my NICs are 1 Gbit. When I install Suricata and turn it on, it reduces my speeds to 280 Mbit/s. That is a 72% drop in speed.
I have turned off the detection rules, changed the modes, and none of it changes much at all. As soon as I disable Suricata my speed goes back to 950/950.

Running pfSense in Hyper-V I was getting the desired speeds. I really like Proxmox, but I am considering getting another server to run pfSense bare metal to be able to run Suricata.

Hardware Checksum Offloading, Hardware TCP Segmentation Offloading, and Hardware Large Receive Offloading are all disabled in pfSense.

I've included a screenshot of my configuration. The VM has plenty of resources. Please help!
 

Attachments

  • pfsense details.PNG (18 KB)
Suricata needs fast single-threaded CPU performance. Running kvm64 as the CPU type like in the screenshot above might not be great for it, as it can heavily reduce CPU performance because of missing instruction sets.
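A minimal sketch, assuming the VM ID is 100 (adjust to yours), for switching the CPU type from the Proxmox host shell:

Code:
# Expose the host CPU's full instruction set (AES-NI etc.) to the guest;
# takes effect after a full VM stop/start.
qm set 100 --cpu host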
 
Use q35 instead of i440fx. Also enable Multiqueue in the NIC settings.
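For example, from the Proxmox host shell (VM ID 100, bridge vmbr0, the MAC, and queues=4 are placeholders; match queues to your vCPU count and keep your existing MAC so pfSense doesn't see a new NIC):

Code:
# Switch the machine type to q35 (requires a full VM stop/start)
qm set 100 --machine q35
# Enable 4 VirtIO queues on net0
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4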
Not sure if multiqueue would help. Suricata will only use one receiving queue anyway... at least when using IPS and not just IDS:
https://forum.opnsense.org/index.php?topic=27252.0
Note regarding IPS

When Suricata is running in IPS mode, Netmap is utilized to fetch packets off the line for inspection. By default, OPNsense has configured Suricata in such a way that the packet which has passed inspection will be re-injected into the host networking stack for routing/firewalling purposes. The current Suricata/Netmap implementation limits this re-injection to one thread only. Work is underway to address this issue since the new Netmap API (V14+) is now capable of increasing this thread count. Until then, no benefit is gained from RSS when using IPS.
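One way to check whether you are hitting that single-thread limit is to watch per-thread CPU usage from the pfSense/OPNsense (FreeBSD) shell while saturating the link:

Code:
# -S include system processes, -H list individual threads, -P per-CPU stats.
# If one suricata/netmap thread sits near 100% WCPU while the rest idle,
# you are single-thread bound.
top -SHP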
 
Sure it helps ;)
No (outdated) misinformation please.
We are running a lot of virtual OPNsense appliances without any issues in our datacenters.
 
What should be put in multiqueue? It's a number, and I am really not familiar with this.
Is there a doc you recommend I go over to learn more?

I run all that on a little N5105 appliance by the way, with an Intel I225-V rev. 0.3 NIC chip.
 
Note that I don't have PCI passthrough enabled on that server yet, so all NIC devices are VirtIO for now.
 
Sure it helps ;)
No (outdated) misinformation please.
We are running a lot of virtual OPNsense appliances without any issues in our datacenters.
Then it's nice that this got fixed in the last year. Here I still never see more than 25% CPU usage, with short peaks to 50%, even when hitting the bandwidth as hard as I can with multiqueue=4 and 4 vCPUs on OPNsense 22.7 with IPS enabled. When running top it still shows that Suricata isn't using much more than 100% CPU time.
What should be put in multiqueue?
Multiqueue should match the number of vCPUs you gave your VM.
 
Then it's nice that this got fixed in the last year. Here I still never see more than 25% CPU usage, with short peaks to 50%, even when hitting the bandwidth as hard as I can with multiqueue=4 and 4 vCPUs on OPNsense 22.7 with IPS enabled. When running top it still shows that Suricata isn't using much more than 100% CPU time.

Multiqueue should match the number of vCPUs you gave your VM.
Thanks! That sounds nice. What's your hardware?

So, in my case, if the VM has 3 NICs, all of them need multiqueue set to 4 if my VM has 4 vCPUs, right?
 
So, in my case, if the VM has 3 NICs, all of them need multiqueue set to 4 if my VM has 4 vCPUs, right?
As far as I understand, yes.
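For example, for a 4-vCPU VM with ID 100 (the ID, bridges, and MACs below are placeholders; keep your real values):

Code:
# Set queues=4 on each of the three VirtIO NICs
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0,queues=4
qm set 100 --net1 virtio=AA:BB:CC:DD:EE:02,bridge=vmbr1,queues=4
qm set 100 --net2 virtio=AA:BB:CC:DD:EE:03,bridge=vmbr2,queues=4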

Thanks! That sounds nice. What's your hardware?
16-core Xeon E5-2683 v4, 128GB RAM, 1x 10Gbit NIC, 6x 1Gbit NICs on the server with the master OPNsense (4 vCPUs, 4GB RAM, Suricata IPS enabled), and a quad-core Atom J3710, 16GB RAM, 1x 1Gbit NIC on the thin client with the backup OPNsense (2 vCPUs, 2GB RAM, no Suricata).

On the big server Suricata works well enough for my 100 Mbit internet connection. On the thin client it is totally unusable with Suricata IPS or IDS. I had to disable Suricata there because the latency and throughput were terrible. It's a really bad experience when you play multiplayer games and the connection drops for several seconds with no packets coming through.
 
What should be put in multiqueue? It's a number, and I am really not familiar with this.
Is there a doc you recommend I go over to learn more?

I run all that on a little N5105 appliance by the way, with an Intel I225-V rev. 0.3 NIC chip.
Code:
Multiqueue
If you are using the VirtIO driver, you can optionally activate the Multiqueue option. This option allows the guest OS to process networking packets using multiple virtual CPUs, providing an increase in the total number of packets transferred.

When using the VirtIO driver with Proxmox VE, each NIC network queue is passed to the host kernel, where the queue will be processed by a kernel thread spawned by the vhost driver. With this option activated, it is possible to pass multiple network queues to the host kernel for each NIC.

When using Multiqueue, it is recommended to set it to a value equal to the number of total cores of your guest. You also need to set the number of multi-purpose channels on each VirtIO NIC inside the VM with the ethtool command:

ethtool -L ens1 combined X

where X is the number of vCPUs of the VM.

You should note that setting the Multiqueue parameter to a value greater than one will increase the CPU load on the host and guest systems as the traffic increases. We recommend setting this option only when the VM has to process a great number of incoming connections, such as when the VM is running as a router, reverse proxy or a busy HTTP server doing long polling.
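Note that pfSense/OPNsense guests are FreeBSD-based and have no ethtool; the vtnet driver negotiates the queue pairs on its own. A hedged sketch of how you might verify it inside the guest (sysctl names from FreeBSD's vtnet driver; check them on your version):

Code:
# act_vq_pairs should match the queues= value set on the Proxmox side
sysctl dev.vtnet.0.max_vq_pairs dev.vtnet.0.act_vq_pairs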
 
Then it's nice that this got fixed in the last year. Here I still never see more than 25% CPU usage, with short peaks to 50%, even when hitting the bandwidth as hard as I can with multiqueue=4 and 4 vCPUs on OPNsense 22.7 with IPS enabled. When running top it still shows that Suricata isn't using much more than 100% CPU time.
Sure.
For example, on one of our big OPNsense boxes with a symmetrical 16 Gbit connection, I can saturate all cores (big Xeons) evenly.
Don't misunderstand: 100% stands for one core, 200% for two cores, and so on...
But it really depends on the real workload (e.g. number of connections, type of traffic, number of rules, etc.).
Synthetic benchmarks don't always tell the truth.
 
