How to get better performance with a pfSense VM

naga-champa

Member
Aug 8, 2020
I've seen many posts on this topic, but thought I'd add one more about pfSense performance under Proxmox.

I'm new to Proxmox, but I think it's been a really great solution so far. I had both pfSense and OPNsense running to compare the two. I was super excited when I fired up the VM and ran iperf from a host on my network: almost 940 Mbit/s. I was blown away. Then it started heading downhill. When I tried to route through the VM, pings were good and UDP was good, but TCP was broken. After beating my head against the wall, changing drivers from VirtIO to e1000 and back again, I landed on the TCP checksum offload issue: with hardware checksum offload enabled (the "disable" box unchecked) I got great iperf numbers but broken routing. Disabling it in pfSense fixes routing.
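(For anyone else hitting this: the checkboxes are under System > Advanced > Networking in pfSense. As far as I understand the FreeBSD side, the equivalent loader tunables for the VirtIO NICs are the ones below; treat it as a sketch and double-check against vtnet(4).)

Code:
# /boot/loader.conf.local inside the pfSense VM (reboot required)
hw.vtnet.csum_disable="1"   # disable checksum offload on vtnet NICs
hw.vtnet.tso_disable="1"    # disable TCP segmentation offload
hw.vtnet.lro_disable="1"    # disable large receive offload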

The best iperf result I get now is around 650-700 Mbit/s. Running bare metal I consistently get 940 Mbit/s. While 650 Mbit/s isn't terrible, it sticks in my craw that I initially saw wire speed in that first iperf test, so I think it's possible and I'm just missing something in my config. I've seen others online who seem to get wire speed under Xen/KVM.

My suspicion is that I need to disable TCP checksum offload, TSO, and GSO on the Proxmox host. I haven't tried it yet because I blew away Proxmox and loaded pfSense bare metal.
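(If someone wants to try the host side before I do, I'd expect it to be something along these lines; the interface names are just examples, check yours with ip link.)

Code:
# on the Proxmox host: turn off offloads on the bridge and/or the physical NIC
ethtool -K vmbr0 tso off gso off gro off
ethtool -K enp1s0 tx off tso off gso off gro off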

What am I missing?

I don't have a super beefy box: a Protectli with a 4-core Celeron, 8 GB RAM, and a 120 GB SSD. It had both firewalls and a CentOS VM running at the same time, with plenty of RAM/CPU cycles to spare. I really want to make Proxmox work.

Thanks.
 

Attachments

  • Bare metal.png
  • prox-vm-pfsense.png
Hi,

If you disable offloading, the checksums must be generated by the CPU, so network speed depends on the CPU's clock speed.
You can use PCIe NIC passthrough [1] to keep offloading enabled and relieve the CPU.


1.) https://pve.proxmox.com/wiki/Pci_passthrough
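For anyone who hasn't done passthrough before, the rough steps per the wiki look something like this (Intel box assumed; the PCI address and VMID are just examples):

Code:
# 1. enable IOMMU in the bootloader, then reboot
#    /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub

# 2. check that IOMMU came up
dmesg | grep -e DMAR -e IOMMU

# 3. find the NIC and hand it to the VM (VMID 100 as example)
lspci | grep -i ethernet
qm set 100 -hostpci0 0000:03:00.0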
 
Hi,
Which packet size do you use for iperf?

Generally, the limit (with a beefy ~3 GHz CPU) is around 500-700k pps per VM core. (I can reach 20 Gbit/s with big packets, 2M for example, but only some Mbit/s with a SYN flood of 64-byte packets.)
By default it only uses 1 core (1 queue).

You can also enable multiqueue on the VirtIO VM NICs, and run iperf with multiple streams (-P), to use more cores.

(Not sure how pfSense handles multiqueue; virtio-net under recent Linux kernels auto-tunes the queues in the guest, but pfSense may need some tuning.)
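A rough sketch of that, with the VMID, bridge, and target IP as placeholders (pfSense may also need its vtnet queue count raised on its side):

Code:
# give the VirtIO NIC several queues (match the number of vCPUs)
qm set 100 -net0 virtio,bridge=vmbr0,queues=4

# then test with parallel streams from a LAN host
iperf3 -c 192.168.1.1 -P 4 -t 30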
 
Hi,

If you disable offloading, the checksums must be generated by the CPU, so network speed depends on the CPU's clock speed.
You can use PCIe NIC passthrough [1] to keep offloading enabled and relieve the CPU.


1.) https://pve.proxmox.com/wiki/Pci_passthrough
Thanks - I haven't tried the NIC passthrough option yet, but I was considering it. There seems to be a bug in FreeBSD with that checksum option: if I let the hardware do the checksums in pfSense, an iperf directly to the firewall gets the full 940 Mbit/s, but routed TCP is mangled. There is a bug filed in FreeBSD somewhere that I saw, and it seems to persist in the latest versions of OPNsense and pfSense - I get the same exact behavior in both.

I will try passthrough, although my preference was to not lose two interfaces from host use - but it may be better for security in the end. Is this the only/best option?

thanks for the reply.
 
Hi,
Which packet size do you use for iperf?

Generally, the limit (with a beefy ~3 GHz CPU) is around 500-700k pps per VM core. (I can reach 20 Gbit/s with big packets, 2M for example, but only some Mbit/s with a SYN flood of 64-byte packets.)
By default it only uses 1 core (1 queue).

You can also enable multiqueue on the VirtIO VM NICs, and run iperf with multiple streams (-P), to use more cores.

(Not sure how pfSense handles multiqueue; virtio-net under recent Linux kernels auto-tunes the queues in the guest, but pfSense may need some tuning.)

I was just using default iperf settings - just a straight blast. As a note, too: when I try the e1000 driver, performance drops considerably, to about 120 Mbit/s or so. The VirtIO driver performs much better.
 
I was just using default iperf settings - just a straight blast. As a note, too: when I try the e1000 driver, performance drops considerably, to about 120 Mbit/s or so. The VirtIO driver performs much better.

Yes, this is expected. virtio-net offloads through the vhost-net host kernel process.

It could be interesting to watch the "vhost-<vmpid>" process usage on the Proxmox host while you are running iperf.
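Something like this should show it (VMID 100 as a placeholder):

Code:
# get the QEMU PID for the VM, then look at its vhost kernel thread
VMPID=$(cat /var/run/qemu-server/100.pid)
ps -e -o pid,comm,%cpu | grep "vhost-$VMPID"

If that thread is pinned near 100% of one core during the test, the single queue is the bottleneck and multiqueue should help.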
 
I'm having the same problem.

After virtualizing pfSense I've noticed speeds have gone down from 950 Mbit/s to 450-650 Mbit/s. This happens with Proxmox running only one VM (pfSense).
 
Same problem here.

After setting up pfSense, speeds have gone down from 950 Mbit/s to 380 Mbit/s.


It's better to use the official PVE firewall (from the GUI) instead of the counter-productive pfSense; it is known that a full VM (as pfSense requires) hurts performance compared to lightweight LXC containers.
 
FW4B + Proxmox (also tested ESXi).
Performance went from 960 Mbit/s down to 250 Mbit/s
using a VirtIO NIC with hardware offloading disabled.

Since the FW4B uses a J3160, which does not have VT-d, I cannot pass the NIC through to the VM... I might just end up reverting, or getting a box that can pass the NIC through.

Do others have ways to get around this performance issue?
 
Sharing my experience and recent learnings:

I've had a better experience using VirtIO, as driver support in Linux is better than FreeBSD's often-poor NIC driver implementations. That is why you see different and confusing anecdotal experiences across the internet. Passthrough is obviously ideal if the FreeBSD/pfSense driver support for your card is robust, but I find the Linux side is simply reliable without having to worry much about drivers. Using VirtIO also allows seamless migrations, without tying the VM to physical card hardware.

You can dramatically improve performance with the multiqueue VirtIO driver settings, but then you can't use ALTQ (QoS) support in pfSense; it seems to be one or the other at the moment. I find QoS gives much more reliable and consistent results, but I have a relatively low-speed connection, YMMV.
 
Hi,

I just migrated from pfSense on hardware to Proxmox on a box with 6 NICs and an i5 CPU.

VirtIO NICs with hardware offloading disabled in the pfSense VM, on Proxmox (all current versions), with gigabit NICs.

This is my performance from a hardware client to pfSense, with 2 dumb switches in between.


Code:
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  1.10 GBytes   948 Mbits/sec                  sender
[  5]   0.00-10.01  sec  1.10 GBytes   946 Mbits/sec                  receiver

Same performance as bare metal.
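(For comparison, output in that format comes from a plain iperf3 run like the one below; the address is a placeholder for the pfSense LAN IP.)

Code:
iperf3 -c 192.168.1.1 -t 10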
 
