pfSense VM limited to 1GbE speed

Phiolin

Member
Oct 8, 2019
I seem to be missing something in my configuration.
Running on my PVE server are a couple of LXCs and two VMs (one for Docker, the other running pfSense as my VLAN/WAN router).

All the LXCs and both VMs are linked via the same Linux bridge on Proxmox.
I can get >60 Gbit/s via iperf3 between LXCs and also between the LXCs and the Docker VM.
However, running iperf3 from an LXC or from the Docker VM against pfSense, I only get the usual close-to-1GbE speeds of ~900 Mbit/s.

Here's an iperf3 run from an LXC container to the Docker-VM:
Code:
user@cws:~$ iperf3 -c 10.0.20.33
Connecting to host logs.srv.wo67.de, port 5201
[  5] local 10.0.20.33 port 43968 connected to 10.0.20.37 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.22 GBytes  36.2 Gbits/sec    0    245 KBytes     
[  5]   1.00-2.00   sec  4.57 GBytes  39.2 Gbits/sec    0    349 KBytes     
[  5]   2.00-3.00   sec  3.91 GBytes  33.6 Gbits/sec    0    242 KBytes     
[  5]   3.00-4.00   sec  4.25 GBytes  36.5 Gbits/sec    0    334 KBytes     
[  5]   4.00-5.00   sec  4.75 GBytes  40.8 Gbits/sec    0    250 KBytes     
[  5]   5.00-6.00   sec  4.56 GBytes  39.2 Gbits/sec    0    249 KBytes     
^C[  5]   6.00-6.61   sec  3.00 GBytes  42.3 Gbits/sec    0    491 KBytes     
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-6.61   sec  29.3 GBytes  38.0 Gbits/sec    0             sender
[  5]   0.00-6.61   sec  0.00 Bytes  0.00 bits/sec                  receiver
iperf3: interrupt - the client has terminated

As you can see, this has no trouble reaching well over 1 Gbit/s.
The Docker VM is configured similarly to pfSense, i.e. a simple virtio ethernet interface on vmbr0, which is the bridge that connects all of the VMs and LXCs.
The pfSense VM has three ethernet interfaces in total:
- vmbr1: a Proxmox bridge dedicated to pfSense; the only other member is a physical ethernet interface connected to my switch (the physical link is 1 GbE only) (vtnet0)
- vmbr2: a Proxmox bridge dedicated to pfSense; the only other member is another physical ethernet interface connected directly to the DSL modem (1 GbE) (vtnet1)
- vmbr0: the Proxmox bridge shared with all other containers and VMs (vtnet2)

These are exposed to pfSense in this order, hence vmbr0 comes up as vtnet2 in pfSense.
I can see pfSense showing 10Gbase-T on all connected interfaces, which is expected. Still, traffic going via the bridge is limited to 1 Gbit/s.
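
For reference, the relevant NIC entries in the pfSense VM config on the Proxmox side look roughly like this (the VM ID and MAC addresses below are placeholders, not my actual values):
Code:
# /etc/pve/qemu-server/<vmid>.conf (excerpt; MACs are placeholders)
# net0 -> vtnet0 (vmbr1, physical switch), net1 -> vtnet1 (vmbr2, DSL modem),
# net2 -> vtnet2 (vmbr0, shared guest bridge)
net0: virtio=AA:BB:CC:00:00:01,bridge=vmbr1
net1: virtio=AA:BB:CC:00:00:02,bridge=vmbr2
net2: virtio=AA:BB:CC:00:00:03,bridge=vmbr0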

10.0.20.1 is the pfSense IP on vtnet2, which is on vmbr0 together with all the other containers and the Docker VM.

Code:
user@cws:~$ iperf3 -c 10.0.20.1
Connecting to host 10.0.20.1, port 5201
[  5] local 10.0.20.33 port 45254 connected to 10.0.20.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   102 MBytes   856 Mbits/sec   31    400 KBytes
[  5]   1.00-2.00   sec   113 MBytes   951 Mbits/sec    3    385 KBytes
[  5]   2.00-3.00   sec   112 MBytes   944 Mbits/sec    0    687 KBytes
[  5]   3.00-4.00   sec   114 MBytes   954 Mbits/sec    0    894 KBytes
[  5]   4.00-5.00   sec   115 MBytes   965 Mbits/sec    0   1.04 MBytes
[  5]   5.00-6.00   sec   112 MBytes   944 Mbits/sec    0   1.18 MBytes
[  5]   6.00-7.00   sec   114 MBytes   954 Mbits/sec    0   1.30 MBytes
[  5]   7.00-8.00   sec   112 MBytes   944 Mbits/sec  891    788 KBytes
[  5]   8.00-9.00   sec   104 MBytes   870 Mbits/sec    0    959 KBytes
[  5]   9.00-10.00  sec   115 MBytes   965 Mbits/sec    0   1.09 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec  925             sender
[  5]   0.00-10.02  sec  1.08 GBytes   929 Mbits/sec                  receiver
iperf Done.

Some config screenshots for clarity:

Config of Docker-VM:
docker_vm_config.png


Config of pfSense-VM:
pfsense_vm_config.png

pfSense VLAN config, showing that vtnet2 (=vmbr0) is attached to VLAN20.
pfsense_vlan_config.png

And VLAN 20 is set up as the "Servers" interface:

pfsense_interface_config.png

And that is at least showing up as 10GBase-T on the pfSense Dashboard:
Bildschirmfoto 2020-02-24 um 07.46.16.png

What am I missing here? Why is the pfSense VM seemingly limited to 1 Gbit/s, while everything else on the same bridge has no trouble pushing much higher speeds?

I'm definitely not CPU-limited, and there's also nothing running on pfSense that would noticeably slow down traffic (like Snort or Suricata).
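
For anyone wanting to reproduce the check: on the Proxmox host you can verify whether the test traffic actually stays on vmbr0 or somehow leaves via a physical NIC (the physical NIC name below is a placeholder):
Code:
# On the Proxmox host, while iperf3 runs against pfSense:
tcpdump -ni vmbr0 port 5201    # the flow should be visible here
tcpdump -ni eno1 port 5201     # ...and NOT here (eno1 = placeholder NIC name)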
 
I'm going to make a guess as to your router config here...

Is your pfSense router a separate device that has 1 Gb links to your network infrastructure somewhere in the mix?

I see you're using VLANs as well.

What I think is happening: you get more than 1 Gb/s whenever your traffic does not need to traverse a 1 Gb link between your pfSense router and your Proxmox systems, i.e. when communication stays within one of your VLANs rather than crossing between VLANs.

Basically, I'm willing to bet a whopping $0.25 that you're running into this issue...

VLAN10-to-VLAN10 communication (or any VLANx-to-VLANx) runs MUCH faster than VLANx-to-VLANy communication, because that traffic is 'virtually' on the same network and does NOT need to traverse your router to make it from VLAN10 host A to VLAN10 host B.

VLAN10 to VLAN70 needs to be routed, and unless you've set up VLAN routing in your switch(es) or on your Proxmox host (is that even possible? ESXi can with an add-on, but I've never tried it with Proxmox), that routing happens on your pfSense router, which most likely has (single or multiple) 1 Gb interfaces into your network infrastructure.
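
A quick way to check whether a given flow actually crosses the router is to compare traceroutes: a same-VLAN target is reached directly at layer 2, while a cross-VLAN target shows the pfSense gateway as the first hop (the IPs below are made-up examples):
Code:
# Same VLAN: one hop, no router in the path
traceroute -n 10.0.10.25
# Different VLAN: the pfSense gateway (e.g. 10.0.10.1) appears as hop 1
traceroute -n 10.0.70.25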

Even if you are using LACP on your pfSense router to aggregate multiple 1 Gb links into your network, you're still potentially limited to 1 Gb per CONNECTION/IP/PORT due to how LACP hashes flows. For example, having 10 x 1 Gb ports in a LAG does not guarantee 10 Gb speeds for a single flow, but it does all but guarantee that multiple clients can each individually reach up to 1 Gb.
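
To illustrate, this is roughly what an LACP bond looks like in /etc/network/interfaces on a Proxmox/Debian host (the slave NIC names are placeholders); even with layer3+4 hashing, any single TCP flow still hashes onto exactly one 1 Gb member link:
Code:
# Hypothetical LACP bond; eno1/eno2 are placeholder NIC names
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4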

There's no "easy" solution if this is the issue you're running into. 10 Gb ports on a pfSense router are significantly more expensive than something like a Protectli appliance with 6 ports (which I use in my network). VLAN routing would help solve the issue, but your switches and/or Proxmox server have to support it, and that in turn causes a host of other changes that need to be made (DHCP server issues being #1 in my experience).

I'm basically waiting for a Protectli box with SFP+ interfaces to fix this bottleneck in my network. Until then, I try to keep my high-traffic items on the same VLAN (not an ideal solution, but what can you do...), while my low-traffic items (things that just use a centralized syslog, or don't need fast access to my NAS) get properly segregated into the relevant VLANs.

If this does explain the issue you're seeing, I hope my explanation above helps.
 
Replying to my own message above: your pfSense IS in a VM, but it's linked to 1 Gb interfaces... Can you put both your pfSense router and your Proxmox system into something like LibreNMS/Observium to see whether, when you're hitting that 1 Gb limitation, the traffic is somehow going in/out of the physical 1 Gb interfaces?
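
Even without a full monitoring stack, watching the interface counters during an iperf3 run would show the same thing (interface names below are examples):
Code:
# On the Proxmox host: cumulative per-interface byte/packet counters
ip -s link show eno1
# On pfSense (FreeBSD): per-second in/out stats for one interface
netstat -w 1 -I vtnet0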

I've still got my $.25 on the table here. =)
 
