Hello,
I have an embedded 25 GbE Intel E823-L controller that I'm trying to get as close to wire speed as possible. Running iperf3 directly on the Proxmox host, I can hit full wire speed. As soon as I try from a pfSense guest, I run into problems.
The maximum I could hit through a VirtIO bridge was about 6.5 Gbps, even after disabling all the hardware offloading.
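For reference, the VirtIO NIC in the VM config is just a standard bridge setup, roughly like this (the MAC is a placeholder, and the queues= multiqueue option is one knob I poked at without much change):

net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4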
So I switched to passing the interface through directly and downloaded and compiled the Intel ice driver for FreeBSD/pfSense. I can now hit about 17 Gbps in the pfSense guest with iperf3, which is better but still obviously not the full 25.
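The test itself is nothing fancy, roughly this (server address is a placeholder, and the parallel stream count is one of the things I've been varying):

iperf3 -c <server> -P 4 -t 30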
This is the dmesg output:
ice0: <Intel(R) Ethernet Connection E823-L for SFP - 0.28.1-k> mem 0xe8000000-0xefffffff,0xf0000000-0xf000ffff irq 16 at device 0.0 on pci1
ice0: Loading the iflib ice driver
ice0: DDP package already present on device: ICE OS Default Package version 1.3.30.0, track id 0xc0000001.
ice0: fw 5.5.17 api 1.7 nvm 2.28 etid 80011e36 netlist 0.1.7000-1.25.0.f083a9d5 oem 1.3200.0
ice0: Using 4 Tx and Rx queues
ice0: Using MSI-X interrupts with 5 vectors
ice0: Using 1024 TX descriptors and 1024 RX descriptors
ice0: PCI Express Bus: Speed 2.5GT/s Width x1
ice0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
ice0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.
ice0: Firmware LLDP agent disabled
ice0: link state changed to UP
ice0: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg: False, Flow Control: None
ice0: netmap queues/slots: TX 4/1024, RX 4/1024
ice0: link state changed to DOWN
ice0: link state changed to UP
ice0: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg: False, Flow Control: None
ice0: link state changed to DOWN
ice0: link state changed to UP
ice0: Link is up, 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg: False, Flow Control: None
Anyone have suggestions and/or a good resource on tunable parameters for pfSense?
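To give an idea of what I mean, these are the kinds of knobs I've been looking at so far: the iflib loader tunables plus the usual TCP buffer sysctls. The values below are just guesses I'm experimenting with, not anything I'm confident in.

# /boot/loader.conf.local (experimental values, not recommendations)
dev.ice.0.iflib.override_nrxds="4096"
dev.ice.0.iflib.override_ntxds="4096"
dev.ice.0.iflib.override_nrxqs="8"
dev.ice.0.iflib.override_ntxqs="8"

# System > Advanced > System Tunables (or /etc/sysctl.conf)
kern.ipc.maxsockbuf=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_max=16777216

More queues presumably only help if the VM has the vCPUs to back them, so the queue count would need to match the cores assigned to the guest.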
This stands out to me:
ice0: PCI Express Bus: Speed 2.5GT/s Width x1
ice0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
ice0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.
It's an embedded controller, so it can't physically be moved, and it doesn't appear on the Proxmox host anyway, so whatever is limiting the reported link must be happening in the passthrough. I tried toggling the "PCI-Express" checkbox when passing the device through, but the only thing that changes is that the first line becomes "PCIE Express Bus: Speed unknown Width unknown".
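In case the config matters, the passthrough entry in /etc/pve/qemu-server/<vmid>.conf is just the standard hostpci line (the PCI address is a placeholder for mine), e.g. with the checkbox ticked:

hostpci0: 0000:01:00.0,pcie=1

As far as I understand, pcie=1 only has an effect on a q35 machine type, and either way QEMU seems to present a link whose speed/width the guest driver can't read correctly. On the host side I assume the physical link can still be checked with something like:

lspci -s 01:00.0 -vv | grep -i Lnk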
Any ideas?