[SOLVED] Testing speed of new dual NIC

charfix

New Member
Dec 2, 2022
I just installed a PCI NIC with two 2.5 Gbps ports. I connected the two ports with a cat6 patch cable.
I would like to test whether the ports can actually receive/transmit at their advertised speed.
My idea was to run iperf -s bound to one interface (enp3s0) and then run iperf -c from the other (enp4s0).
But it's not clear to me how to force the client's packets to actually leave through a particular interface (enp4s0).
How might I achieve this?

I thought of assigning enp3s0 and enp4s0 static IPs, then creating a container attached to a new bridge on enp4s0 and running iperf from inside it.
But I must have misconfigured something, because the container complained when I tried talking to the iperf server.
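Roughly, the idea was an extra bridge like the stanza below (vmbr1 is just an illustrative name here, a sketch of the approach rather than my exact config):

Code:
# Hypothetical second bridge carrying only enp4s0; the container's
# eth0 would be attached to vmbr1 and given an address in the same
# subnet as enp3s0.
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0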
 
Hi,

It would be good to see the network configuration on your PVE server… However, you should be able to achieve this with iperf's -B flag (see `man iperf`), e.g.:

Bash:
iperf -s -B 10.0.0.1

On the client, point -c at the server and add the -B option followed by the IP address of the interface you want to send from, e.g.:

Bash:
iperf -c 10.0.0.1 -B 10.0.0.2
 
/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface enp10s0 inet manual

auto enp3s0
iface enp3s0 inet dhcp

auto enp4s0
iface enp4s0 inet dhcp

auto vmbr0
iface vmbr0 inet static
        address 192.168.2.112/24
        gateway 192.168.2.1
        bridge-ports enp10s0
        bridge-stp off
        bridge-fd 0
Meanwhile, I assigned enp3s0 and enp4s0 IPs manually with ifconfig enpXs0 192.168.4.X/24, so ip a now shows:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 88:c9:b3:b0:dd:f7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.1/24 brd 192.168.4.255 scope global enp3s0
       valid_lft forever preferred_lft forever
3: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 88:c9:b3:b0:dd:f8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.2/24 brd 192.168.4.255 scope global enp4s0
       valid_lft forever preferred_lft forever
    inet6 fe80::8ac9:b3ff:feb0:ddf8/64 scope link 
       valid_lft forever preferred_lft forever
4: enp10s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 50:e5:49:3d:ae:f3 brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 50:e5:49:3d:ae:f3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.112/24 scope global vmbr0
       valid_lft forever preferred_lft forever
6: veth103i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
    link/ether fe:95:88:e2:e2:de brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: fwbr103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:aa:39:d9:db:82 brd ff:ff:ff:ff:ff:ff
8: fwpr103p0@fwln103i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 22:58:50:9e:ae:9b brd ff:ff:ff:ff:ff:ff
9: fwln103i0@fwpr103p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr103i0 state UP group default qlen 1000
    link/ether ee:fe:44:21:2b:64 brd ff:ff:ff:ff:ff:ff
10: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 26:c5:97:49:d2:8d brd ff:ff:ff:ff:ff:ff

I just tried
iperf3 -s -B 192.168.4.1 and iperf3 -c 192.168.4.1 -B 192.168.4.2, but it reports ~40 Gbps, which I think means the kernel is short-circuiting the traffic over loopback (both addresses are local to the host) and the packets never actually go over the wire.
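As far as I can tell, one way to confirm the traffic is staying on the host is to ask the kernel how it routes between the two addresses, and to watch the interface counters during a run:

Bash:
# A "local ... dev lo" answer means the kernel delivers the traffic
# over loopback and it never touches the NIC.
ip route get 192.168.4.1 from 192.168.4.2

# TX/RX byte counters on the ports; if they barely move during an
# iperf3 run, nothing is going over the cable.
ip -s link show enp3s0
ip -s link show enp4s0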

How do I force them to?
 
I found a ready-to-use solution on the web. Lightly modified, it is:

Code:
#!/bin/bash
# Usage: <script> <interface1> <interface2>
# Puts each port in its own network namespace so iperf3 traffic is
# forced over the physical cable instead of the loopback interface.

# Optional: raise the MTU for jumbo frames (check the supported
# maxmtu with `ip -d link list` first).
#ip link set "$1" mtu 9000
#ip link set "$2" mtu 9000

# Create one namespace per port and move the interfaces into them.
ip netns add ns_server
ip netns add ns_client
ip link set "$1" netns ns_server
ip link set "$2" netns ns_client

# Give each port an address on a private test subnet and bring it up.
ip netns exec ns_server ip addr add dev "$1" 192.168.10.1/24
ip netns exec ns_client ip addr add dev "$2" 192.168.10.2/24
ip netns exec ns_server ip link set dev "$1" up
ip netns exec ns_client ip link set dev "$2" up

# Test in one direction...
ip netns exec ns_server iperf3 -s &
sleep 1   # give the server a moment to start listening
ip netns exec ns_client iperf3 -c 192.168.10.1

killall iperf3

# ...and then the other.
ip netns exec ns_client iperf3 -s &
sleep 1
ip netns exec ns_server iperf3 -c 192.168.10.2

killall iperf3

# Deleting the namespaces moves the physical interfaces back to the host.
ip netns del ns_server
ip netns del ns_client

With a high MTU I was able to confirm the NIC maxes out its claimed speed.
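In case it helps anyone, this is roughly how I ran it (the filename is just what I happened to call the script; pass your two port names as the arguments), after uncommenting the two MTU lines for jumbo frames:

Bash:
chmod +x nic-loop-test.sh
./nic-loop-test.sh enp3s0 enp4s0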
 
Hey, thanks! Just wanted to let you know there's a v2 with this added as an arg, along with threads and time :) https://github.com/crazy-logic/iPerfCableTest
 
