NIC PCI-e Passthrough Bottleneck on Guest pfSense

eracerxrs

New Member
Oct 27, 2022
Hello,

I have an embedded 25 GbE Intel E823-L controller that I'm trying to get as close to wire speed as possible. Running iperf3 directly on the Proxmox host I can hit full wire speed, but as soon as I try it from a pfSense guest I run into problems.
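For context, the tests look roughly like this (10.0.0.2 is just a placeholder for my iperf3 server; at these speeds a single TCP stream usually won't saturate the link, so I run parallel streams):

iperf3 -c 10.0.0.2 -P 4 -t 30       # 4 parallel streams, 30 seconds
iperf3 -c 10.0.0.2 -P 4 -t 30 -R    # same test in the reverse (download) direction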

The maximum I could hit using a VirtIO bridge was about 6.5 Gbps after disabling all the hardware offloading.
So I switched to passing the interface through directly and downloading and compiling the Intel ice driver for FreeBSD/pfSense. I'm now able to hit 17 Gbps on the pfSense guest with iperf3, which is still obviously not the full 25.
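For reference, the passthrough is set up roughly like this on the Proxmox side (VM ID 100 and PCI address 0000:01:00.0 are placeholders for my actual values):

qm set 100 -machine q35                      # PCIe passthrough wants the q35 machine type
qm set 100 -hostpci0 0000:01:00.0,pcie=1     # pass the NIC through as a PCIe device

which ends up as "hostpci0: 0000:01:00.0,pcie=1" in /etc/pve/qemu-server/100.conf.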

This is the dmesg output:

ice0: <Intel(R) Ethernet Connection E823-L for SFP - 0.28.1-k> mem 0xe8000000-0xefffffff,0xf0000000-0xf000ffff irq 16 at device 0.0 on pci1
ice0: Loading the iflib ice driver
ice0: DDP package already present on device: ICE OS Default Package version 1.3.30.0, track id 0xc0000001.
ice0: fw 5.5.17 api 1.7 nvm 2.28 etid 80011e36 netlist 0.1.7000-1.25.0.f083a9d5 oem 1.3200.0
ice0: Using 4 Tx and Rx queues
ice0: Using MSI-X interrupts with 5 vectors
ice0: Using 1024 TX descriptors and 1024 RX descriptors
ice0: PCI Express Bus: Speed 2.5GT/s Width x1
ice0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
ice0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.
ice0: Firmware LLDP agent disabled
ice0: link state changed to UP
ice0: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg: False, Flow Control: None
ice0: netmap queues/slots: TX 4/1024, RX 4/1024
ice0: link state changed to DOWN
ice0: link state changed to UP
ice0: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg: False, Flow Control: None
ice0: link state changed to DOWN
ice0: link state changed to UP
ice0: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg: False, Flow Control: None

Anyone have suggestions and/or a good resource on tunable parameters for pfSense?

This stands out to me:

ice0: PCI Express Bus: Speed 2.5GT/s Width x1
ice0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
ice0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.

It's an embedded controller, so it can't physically be moved to another slot, and the warning doesn't appear on the Proxmox host anyway, so it must be coming from the passthrough. I tried toggling the "PCI-Express" checkbox when passing the device through, but the only thing that changes is that the first line becomes "PCI Express Bus: Speed unknown Width unknown".
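For what it's worth, the physical link on the Proxmox host can be checked directly (01:00.0 is a placeholder for the NIC's PCI address), which should show whether the 2.5GT/s x1 is the real link or just what the guest sees on the emulated bus:

lspci -s 01:00.0 -vv | grep -iE 'LnkCap|LnkSta'    # compare capable vs. negotiated speed/width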

Any ideas?
 
What's your CPU consumption on the VM's overview page in Proxmox when you hit 17 Gbps?
 
That was it! Spot on old chap!

For reference:
4 cores of a Xeon D-1736NT were maxing out and hitting 17 Gbps;
8 cores are now at about 70 percent utilization for 23.5 Gbps of throughput.
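For anyone who finds this later, the fix was literally just giving the VM more cores (100 is a placeholder for the VM ID):

qm set 100 -cores 8      # was 4; takes effect after a full VM stop/start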
 
You can tune it a bit, but none of the options are that great, tbh :)

But let me mention anyway:
- There are indeed some "tunables" tips floating around, but honestly I don't know exactly what they do, and even the people posting them on Reddit/YouTube etc. often don't either...
That said, I've seen cases where CPU utilization was halved at the same throughput.
I think the risk is high that firewall rules suddenly stop working as expected, and something like Suricata, which does packet inspection, definitely won't work (if you don't use it, it's a no-brainer anyway). There's an example of the kind of entries people mean after this list.

- Increasing the MTU (also shown in the example after this list). That helps for sure, but it simply isn't applicable to most environments, and it only helps when the device you're talking to has an increased MTU as well. Needless to say, it becomes a big bottleneck when other devices have to fragment those giant packets again :)

- Switching the CPU type to "host", but I'm pretty sure you did that already.
Probably not that big of a gain, but there's no real downside, other than that migration can break the VM if the other PVE host has an entirely different CPU.

- Enabling hardware acceleration in pfSense.
I would simply enable all of the hardware acceleration; in my case this works perfectly fine (OPNsense here).
But you have to reboot after applying, otherwise it can happen that even the GUI is no longer accessible until you do.
The only thing you might want to leave disabled is TCP offloading, if you're using some sort of packet inspection.
With normal firewall rules, port forwarding etc., every hardware offload works fine! (There's an ifconfig example for checking this after the list.)

But I think you already mentioned hardware acceleration anyway.
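To make the tunables and MTU points a bit more concrete, this is the kind of thing people usually mean. The names are standard pfSense/FreeBSD knobs, but the values are just examples to experiment with, not settings I've verified for the ice driver:

# /boot/loader.conf.local (boot-time tunables, applied after a reboot)
kern.ipc.nmbclusters="1000000"

# System > Advanced > System Tunables (runtime sysctls)
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216

# Jumbo frames, only if everything on the segment supports them
ifconfig ice0 mtu 9000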
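And for the offloading point, on the pfSense/FreeBSD side you can see which offloads are actually active on the interface and toggle them for testing. The flag names below are the standard ifconfig ones, and LRO in particular is usually recommended off on a box that forwards packets:

ifconfig ice0 | grep options        # shows RXCSUM, TXCSUM, TSO4, LRO, ...
ifconfig ice0 rxcsum txcsum tso4    # enable checksum and TCP segmentation offload
ifconfig ice0 -lro                  # keep large receive offload off when routing/inspecting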

Anyway, have fun and enjoy :)
 
Oh, and I forgot to mention: pfSense Plus is available for free as a community edition.
It's usually newer than pfSense CE, so I would prefer the Plus edition.

I'm using OPNsense myself, but tbh I think it's still much the same.
pfSense is probably even more reliable lately, since OPNsense lost some maintainers and some plugins no longer work correctly.

Even WireGuard has issues with complex configurations; the wg interface doesn't start on boot :)
That's a non-issue with a small monit config (sketched below), but OPNsense has been getting somewhat worse lately.
I still love it, but I wanted to mention that pfSense Plus could be the better choice nowadays.
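If anyone hits that WireGuard boot issue, a minimal monit check along these lines works around it (wg0 and the restart script path are assumptions; adjust them for your setup):

check program wg0_up with path "/usr/local/bin/wg show wg0"
    if status != 0 then exec "/usr/local/etc/rc.d/restart_wireguard.sh"   # placeholder: whatever (re)starts WireGuard on your box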
 
