Very low bandwidth on 10 Gb Intel 82599ES network card

bryan1000ch

Member
May 4, 2021
Switzerland
Good morning,

I have a Proxmox cluster with 3 nodes, two of which are connected to a 10 Gb switch through the Intel 82599ES 10 Gb card. When I run iperf between those two nodes, I only get about 200 Mb/s. Note that I use an active-backup bond across the two 10 Gb interfaces of the card. I have seen reports on the web of problems with these cards. Do you have any feedback?
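For reference, this is roughly how the bond state and the negotiated link speed can be checked, and how I repeat the iperf measurement (a sketch; <IP-of-the-other-node> is a placeholder for the second node's address):

cat /proc/net/bonding/bond0                  # which slave is currently active, is MII status up?
ethtool enp97s0f0 | grep -E 'Speed|Duplex'   # did the active slave negotiate 10000Mb/s full duplex?
iperf3 -s                                    # on the other node
iperf3 -c <IP-of-the-other-node> -t 30       # single TCP stream
iperf3 -c <IP-of-the-other-node> -t 30 -P 4  # four parallel streams for comparison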


root@HYPM003:~# ethtool -k enp97s0f1
Features for enp97s0f1:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: on [fixed]
tx-checksum-sctp: on
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: on [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: on
tx-ipxip6-segmentation: on
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-tunnel-remcsum-segmentation: off [fixed]
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: on
tx-udp-segmentation: on
tx-gso-list: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off
hw-tc-offload: off
esp-hw-offload: on
esp-tx-csum-hw-offload: on
rx-udp_tunnel-port-offload: off [fixed]
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]
rx-gro-list: off
macsec-hw-offload: off [fixed]
rx-udp-gro-forwarding: off
hsr-tag-ins-offload: off [fixed]
hsr-tag-rm-offload: off [fixed]
hsr-fwd-offload: off [fixed]
hsr-dup-offload: off [fixed]



61:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
Subsystem: Intel Corporation Ethernet Server Adapter X520-2
Kernel driver in use: ixgbe
61:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
Subsystem: Intel Corporation Ethernet Server Adapter X520-2
Kernel driver in use: ixgbe

root@HYPM003:~# dmesg |grep ixgbe
[ 2.783055] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[ 2.783060] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[ 2.794128] ixgbe 0000:61:00.0: enabling device (0140 -> 0142)
[ 2.959996] ixgbe 0000:61:00.0: Multiqueue Enabled: Rx Queue count = 32, Tx Queue count = 32 XDP Queue count = 0
[ 2.960289] ixgbe 0000:61:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 2.960372] ixgbe 0000:61:00.0: MAC: 2, PHY: 20, SFP+: 5, PBA No: E66560-006
[ 2.960375] ixgbe 0000:61:00.0: 90:e2:ba:xx:xx:xx
[ 2.961623] ixgbe 0000:61:00.0: Intel(R) 10 Gigabit Network Connection
[ 2.961852] ixgbe 0000:61:00.1: enabling device (0140 -> 0142)
[ 3.123988] ixgbe 0000:61:00.1: Multiqueue Enabled: Rx Queue count = 32, Tx Queue count = 32 XDP Queue count = 0
[ 3.124282] ixgbe 0000:61:00.1: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 3.124365] ixgbe 0000:61:00.1: MAC: 2, PHY: 20, SFP+: 6, PBA No: E66560-006
[ 3.124368] ixgbe 0000:61:00.1: 90:e2:ba:xx:xx:xx
[ 3.125611] ixgbe 0000:61:00.1: Intel(R) 10 Gigabit Network Connection
[ 3.343887] ixgbe 0000:61:00.0 enp97s0f0: renamed from eth2
[ 3.383624] ixgbe 0000:61:00.1 enp97s0f1: renamed from eth3
[ 8.258680] ixgbe 0000:61:00.0: registered PHC device on enp97s0f0
[ 8.427706] ixgbe 0000:61:00.0 enp97s0f0: detected SFP+: 5
[ 8.510173] ixgbe 0000:61:00.1: registered PHC device on enp97s0f1
[ 8.691528] ixgbe 0000:61:00.0 enp97s0f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 8.759494] ixgbe 0000:61:00.1 enp97s0f1: detected SFP+: 6
[ 9.027530] ixgbe 0000:61:00.1 enp97s0f1: NIC Link is Up 10 Gbps, Flow Control: RX/TX


root@HYPM003:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 3c:ec:ef:8d:f5:84 brd ff:ff:ff:ff:ff:ff
altname enp96s0f0
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 3c:ec:ef:8d:f5:85 brd ff:ff:ff:ff:ff:ff
altname enp96s0f1
4: enp97s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 2a:ea:1b:ed:7c:8a brd ff:ff:ff:ff:ff:ff permaddr 90:e2:ba:xx:xx:xx
5: enp97s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 2a:ea:1b:ed:7c:8a brd ff:ff:ff:ff:ff:ff permaddr 90:e2:ba:xx:xx:xx
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 2a:ea:1b:ed:7c:8a brd ff:ff:ff:ff:ff:ff
7: bond0.20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr20 state UP group default qlen 1000
link/ether 2a:ea:1b:ed:7c:8a brd ff:ff:ff:ff:ff:ff
8: vmbr20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 2a:ea:1b:ed:7c:8a brd ff:ff:ff:ff:ff:ff
9: bond0.90@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr90 state UP group default qlen 1000
link/ether 2a:ea:1b:ed:7c:8a brd ff:ff:ff:ff:ff:ff
10: vmbr90: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 2a:ea:1b:ed:7c:8a brd ff:ff:ff:ff:ff:ff
11: bond0.91@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr91 state UP group default qlen 1000
link/ether 2a:ea:1b:ed:7c:8a brd ff:ff:ff:ff:ff:ff
12: vmbr91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 2a:ea:1b:ed:7c:8a brd ff:ff:ff:ff:ff:ff
inet 10.169.91.11/24 scope global vmbr91
valid_lft forever preferred_lft forever



root@HYPM003:~# cat /etc/network/interfaces

auto lo
iface lo inet loopback

iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto enp97s0f0
iface enp97s0f0 inet manual
#AGR p7

auto enp97s0f1
iface enp97s0f1 inet manual
#AGR p8

auto bond0
iface bond0 inet manual
bond-slaves enp97s0f0 enp97s0f1
bond-miimon 100
bond-mode active-backup
bond-primary enp97s0f0
#Trunk AGR p7+p8

iface bond0.20 inet manual

iface bond0.90 inet manual

iface bond0.91 inet manual

iface bond0.100 inet manual

iface bond0.110 inet manual

iface bond0.111 inet manual

iface bond0.115 inet manual

iface bond0.120 inet manual

iface bond0.140 inet manual

iface bond0.150 inet manual

iface bond0.160 inet manual

iface bond0.230 inet manual

auto bond0.2
iface bond0.2 inet manual

auto vmbr20
iface vmbr20 inet manual
bridge-ports bond0.20
bridge-stp off
bridge-fd 0


auto vmbr90
iface vmbr90 inet manual
bridge-ports bond0.90
bridge-stp off
bridge-fd 0


auto vmbr91
iface vmbr91 inet static
address 10.169.91.11/24
gateway 10.169.91.254
bridge-ports bond0.91
bridge-stp off
bridge-fd 0


auto vmbr100
iface vmbr100 inet manual
bridge-ports bond0.100
bridge-stp off
bridge-fd 0


auto vmbr110
iface vmbr110 inet manual
bridge-ports bond0.110
bridge-stp off
bridge-fd 0


auto vmbr111
iface vmbr111 inet manual
bridge-ports bond0.111
bridge-stp off
bridge-fd 0


auto vmbr115
iface vmbr115 inet manual
bridge-ports bond0.115
bridge-stp off
bridge-fd 0


auto vmbr120
iface vmbr120 inet manual
bridge-ports bond0.120
bridge-stp off
bridge-fd 0


auto vmbr140
iface vmbr140 inet manual
bridge-ports bond0.140
bridge-stp off
bridge-fd 0


auto vmbr150
iface vmbr150 inet manual
bridge-ports bond0.150
bridge-stp off
bridge-fd 0


auto vmbr160
iface vmbr160 inet manual
bridge-ports bond0.160
bridge-stp off
bridge-fd 0


auto vmbr230
iface vmbr230 inet manual
bridge-ports bond0.230
bridge-stp off
bridge-fd 0


auto vmbr2
iface vmbr2 inet manual
bridge-ports bond0.2
bridge-stp off
bridge-fd 0





Thanks in advance for your help.
 
Can you try setting the IOMMU kernel options, depending on whether you have Intel or AMD hardware:

  • sed -i '$ s/$/ amd_iommu=on iommu=pt/' /etc/kernel/cmdline
  • sed -i '$ s/$/ intel_iommu=on iommu=pt/' /etc/kernel/cmdline

After that you need to run proxmox-boot-tool refresh && update-initramfs -u and reboot. Then try again.
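Note that /etc/kernel/cmdline is only read on systems booting with systemd-boot. If your nodes boot via GRUB, the equivalent (as far as I know) is to append the same options to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerate the config, for example:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"   # in /etc/default/grub
update-grub                                                  # then reboot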
 
Hi Bryan,

I don't know whether you have found a solution in the meantime.

But I ran into something similar today. I had updated from 7.x to 8.1.3, but the new kernel was not being used.
Because the header files were installed for the new kernel, but not for the old 5.13 one, I was unable to build the latest ixgbe driver for our card:
https://www.intel.com/content/www/u...ethernet-network-connections-under-linux.html

I ran proxmox-boot-tool kernel list and saw that the old kernel was pinned, so I removed the pin with proxmox-boot-tool kernel unpin.
After a reboot, Proxmox is running on the new kernel.
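In commands, that was roughly:

proxmox-boot-tool kernel list    # shows installed kernels and whether one is pinned
proxmox-boot-tool kernel unpin   # remove the pin so the newest kernel is booted
reboot
uname -r                         # after the reboot, confirm the new kernel is running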

The driver's README contains a lot of hints for tuning the connection, but I don't need them at the moment.
With the new driver installed, the bandwidth is up to 9.40 Gbits/sec, which I think is fine.
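For anyone following along, building Intel's out-of-tree ixgbe driver is roughly the sequence below (<version> is a placeholder for whichever release you download, and the headers for the running kernel have to be installed first):

apt install pve-headers-$(uname -r)   # kernel headers for the running kernel
tar xzf ixgbe-<version>.tar.gz
cd ixgbe-<version>/src
make install                          # builds and installs the module for the running kernel
rmmod ixgbe && modprobe ixgbe         # or simply reboot; note this briefly drops the link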

Good luck
 
