Help: 10Gb NICs but only getting 941Mb/s (1Gb) speeds with iperf

5mart3ch
I have two Mellanox ConnectX-3 10GbE NICs, one installed in each Proxmox server. ethtool reports the NIC link speed as 10Gb/s. However, iperf tests top out at 941 Mbits/sec. What am I missing?

Code:
# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  4] local 10.77.1.82 port 5001 connected with 10.77.1.81 port 35467
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.10 GBytes   939 Mbits/sec

Code:
# lspci
01:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]

Code:
# ethtool enp1s0
Settings for enp1s0:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseKX/Full
                                10000baseKX4/Full
                                10000baseKR/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
                                56000baseCR4/Full
                                56000baseSR4/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseKX/Full
                                10000baseKX4/Full
                                10000baseKR/Full
                                40000baseCR4/Full
                                40000baseSR4/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000014 (20)
                               link ifdown
        Link detected: yes

Code:
~# cat /etc/network/interfaces

auto lo
iface lo inet loopback

iface eno1 inet manual

iface enp1s0 inet manual
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 10.77.1.24/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr10
iface vmbr10 inet static
        address 10.77.1.82/24
        gateway 10.77.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        mtu 9000
#NAS 10G Port

Code:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 10:98:36:b5:48:ed brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
    link/ether 10:98:36:b5:48:ee brd ff:ff:ff:ff:ff:ff
4: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 00:02:c9:37:c6:10 brd ff:ff:ff:ff:ff:ff
    inet 10.77.1.81/24 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::202:c9ff:fe37:c610/64 scope link
       valid_lft forever preferred_lft forever

Switch port at 10G:
switch-port-config.jpg

Edited: added network configuration and switch port status
 
Try using different subnets; don't use 10.77.1.X/24 for both vmbr0 and vmbr10.
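For example, leaving vmbr0 on 10.77.1.0/24 and moving the 10Gb bridge to its own subnet (10.77.10.0/24 below is just an example) in /etc/network/interfaces:

Code:
auto vmbr10
iface vmbr10 inet static
        address 10.77.10.82/24
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        mtu 9000
# the default gateway 10.77.1.1 would then belong on vmbr0, not here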
Did not work. In fact, I got worse results.

Code:
# iperf -c 10.77.10.81 -P 3
------------------------------------------------------------
Client connecting to 10.77.10.81, TCP port 5001
TCP window size:  325 KByte (default)
------------------------------------------------------------
[  4] local 10.77.10.82 port 38108 connected with 10.77.10.81 port 5001
[  3] local 10.77.10.82 port 38106 connected with 10.77.10.81 port 5001
[  5] local 10.77.10.82 port 38110 connected with 10.77.10.81 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec   323 KBytes   261 Kbits/sec
[  3]  0.0-10.1 sec   323 KBytes   261 Kbits/sec
[  5]  0.0-10.1 sec   323 KBytes   261 Kbits/sec
[SUM]  0.0-10.1 sec   970 KBytes   784 Kbits/sec
 
The subnet needs to be different between the NAS and the host, which you probably already knew.
Before you changed the subnet to 10.77.10.0/24 it was connecting to the NAS over the 1Gb line every time, so changing subnets didn't make it worse; it made the traffic run over the correct NIC.
Can you ping the NAS on the new subnet?
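For example, from the Proxmox host, using the server address from your iperf test (adjust if the NAS ended up with a different address):

Code:
# ping the other end of the 10Gb link on its new subnet
ping -c 3 10.77.10.81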
It could be a problem with the cable, or possibly one of the cards. I have a card on a host that seems to want to run at 4-6Gb/s with tons of retries, so I think that card is having issues.
It looks like you are using iperf; try installing iperf3 instead. It has some newer features and can tell you the retries for packets that are lost and need to be retransmitted.

Attached is a screenshot of my setup.
bond2 is a 10Gb connection to the SAN; the other side has the IP 10.10.10.4/28.
So on the SAN I would run iperf3 -s,
on the host I would run iperf3 -c 10.10.10.4,
and see what happens.
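Roughly, on a Debian-based system that would look like this (the 10.10.10.4 address is from my setup; substitute your own):

Code:
# install iperf3 on both ends
apt install iperf3

# on the SAN/NAS: start the server
iperf3 -s

# on the host: run the client against the SAN's 10Gb address
iperf3 -c 10.10.10.4
# the Retr column in the output counts TCP retransmissions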

vmbr1 is the 10Gb connection between my nodes for migration and other traffic.
vmbr0 is the outbound connection for the hosts and the management port.
 

Attachments

  • 2021-07-07_15-45-56.png

Yay! I got it to work, but with one odd issue. If both the 1G and 10G interfaces are on the same subnet, I am capped at 1G speeds. But if the 1G and 10G are on different subnets, iperf over the 10G link won't connect, even though both sides can ping each other successfully. I suppose it is the default gateway, but I don't know how to configure this properly.

proxmox-network.jpg

I have to use the iperf3 -B option (bind to the interface associated with the given address) to get it to work.
The host's IP address on the 10G side is 10.77.10.81.
The NAS's IP addresses are 10.77.10.82 on the 10G link and 10.77.1.24 on the 1G local line:
Code:
# iperf3 -c 10.77.10.81 -B 10.77.1.24
Connecting to host 10.77.10.81, port 5201
[  5] local 10.77.1.24 port 45247 connected to 10.77.10.81 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.10 GBytes  9.43 Gbits/sec    0   1.42 MBytes
[  5]   1.00-2.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.42 MBytes
[  5]   2.00-3.00   sec  1.09 GBytes  9.41 Gbits/sec    0   1.42 MBytes
[  5]   3.00-4.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.78 MBytes
[  5]   4.00-5.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.78 MBytes
[  5]   5.00-6.00   sec  1.09 GBytes  9.41 Gbits/sec    0   1.78 MBytes
[  5]   6.00-7.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.78 MBytes
[  5]   7.00-8.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.78 MBytes
[  5]   8.00-9.00   sec  1.09 GBytes  9.41 Gbits/sec    0   1.78 MBytes
[  5]   9.00-10.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.78 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.9 GBytes  9.40 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  10.9 GBytes  9.40 Gbits/sec                  receiver

If I leave the -B option out, I get this:
Code:
# iperf3 -c 10.77.10.81
Connecting to host 10.77.10.81, port 5201
[  5] local 10.77.10.82 port 60398 connected to 10.77.10.81 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   323 KBytes  2.65 Mbits/sec    7   8.74 KBytes
[  5]   1.00-2.00   sec  0.00 Bytes  0.00 bits/sec    1   8.74 KBytes
[  5]   2.00-3.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
[  5]   3.00-4.00   sec  0.00 Bytes  0.00 bits/sec    1   8.74 KBytes
[  5]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
[  5]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
[  5]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec    1   8.74 KBytes
[  5]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
[  5]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
[  5]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec    0   8.74 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   323 KBytes   265 Kbits/sec   10             sender
[  5]   0.00-10.00  sec  0.00 Bytes  0.00 bits/sec                  receiver
 
So for the part about being capped at 1Gb: when both interfaces are on the same subnet you aren't using the 10Gb link at all, the traffic goes over the 1Gb line.
When they are on two different subnets and you ping or run iperf against 10.77.10.X, the host knows it has to use the 10Gb connection, because that is the only interface with an address on that subnet.
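One way to sanity-check which interface the kernel will actually pick for a given destination (addresses taken from your earlier posts):

Code:
# should report the 10Gb bridge (vmbr10) as the outgoing device
ip route get 10.77.10.81

# should report the 1Gb bridge (vmbr0) instead
ip route get 10.77.1.1

# full routing table for reference
ip route show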

If this is a fresh install of Proxmox 6.x, it comes with ifupdown, not ifupdown2. ifupdown2 can apply network changes without a reboot, but you need to install it manually. I'm just adding that info in case you didn't know; if you haven't installed ifupdown2, any network changes you make won't actually take effect until you reboot the host. Proxmox 7 has ifupdown2 preinstalled.
You shouldn't have to bind iperf to the 1Gb NIC address for it to then talk over the 10Gb NIC, so that is a strange one, which is why I think the network settings might not be applied until a reboot.
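If you do end up installing it, roughly:

Code:
# install ifupdown2 on Proxmox 6.x (replaces ifupdown)
apt install ifupdown2

# afterwards, changes to /etc/network/interfaces can be applied live with
ifreload -a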

I found out there can only be one gateway on a Proxmox host. A gateway is only needed to reach a subnet the host doesn't know about: if you have another VLAN internally, or the host has to reach out to the internet, it asks the gateway router to connect it to wherever it needs to go.
So if you have two machines on the same subnet and they don't need to talk to any other subnet over that connection, there doesn't need to be a gateway entry; they will just be able to talk over that link.
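As a sketch with your bridge names (addresses are only examples): the gateway line lives on the bridge that needs to reach other networks, and the 10Gb storage bridge gets an address but no gateway.

Code:
auto vmbr0
iface vmbr0 inet static
        address 10.77.1.24/24
        # the host's single default gateway lives here
        gateway 10.77.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr10
# no gateway on this bridge: it only talks to the NAS on its own subnet
iface vmbr10 inet static
        address 10.77.10.82/24
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        mtu 9000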


So if my servers have addresses in 10.1.1.0/24 and my PC VLAN is set up as 10.2.2.0/24, they can talk through a layer 3 router.
You program the different VLANs into the router, say VLAN 1100 (10.1.1.0/24) and VLAN 1200 (10.2.2.0/24), and when you set up the IPs for the individual devices they use the router's IP as their gateway. The router then does the inter-VLAN routing.

If that is redundant information and you already know it, then I apologize, but those kinds of things are important to figuring out your issue, and I find that extra information can help with troubleshooting. Since we know you can get a solid 10Gb link in some fashion, and the machines can ping each other on the 10Gb subnet, there must be some networking shenanigans that still need to be worked out.
 
Thanks, your information is good. Yes, I installed ifupdown2 on my Proxmox 6.4 for the convenience. I don't think it is a matter of the network changes not taking effect, as I tried ifupdown2 first (didn't work), then rebooted both servers just in case (still didn't work). I also tried having just the one subnet on the 10Gb vmbr10 (i.e. removing all settings, IP and gateway, from vmbr0) and rebooted both servers, but iperf3 was still capped at the 1Gb speed.

I do believe it is somewhere in my network settings. It could be my pfSense; it's a VM on Proxmox handling all my networks. No VLANs are set up yet, as I have not gotten to that.
 
