Very slow iperf speeds (~370 Mbps) with Proxmox host 10 Gbps NIC

ramreddy

Jul 3, 2023
Hi All,

I'm a bit of a newbie at setting up home servers and Proxmox, so any pointers would be of great help.
I have set up Proxmox on a Supermicro server and connected its 10-Gigabit X540-AT2 NIC to an Asus router that has 10 Gbps ports.

I am using the same 10 Gbps NIC port for Proxmox's internal networking of VMs and containers. To summarize, I have two 1 Gbps ports and two 10 Gbps ports, but I am using just one 10 Gbps port for all connectivity (internal and external).

When I test network speeds between VMs and the Proxmox host, I get the full 10 Gbps. But when I test from a workstation/laptop outside the Proxmox host, I get very low speeds, a maximum of about 370 Mbps (not even a full 1 Gbps :-( ).
I did the same test against VMs inside Proxmox and the result is the same (it will be the same, I guess).

Following is the result of iperf:

Code:
PS C:\Users\ramap\Downloads\iperf-3.1.3-win64> .\iperf3.exe -c 192.168.50.50
Connecting to host 192.168.50.50, port 5201
[  4] local 192.168.50.100 port 51069 connected to 192.168.50.50 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  30.5 MBytes   256 Mbits/sec
[  4]   1.00-2.01   sec  34.1 MBytes   283 Mbits/sec
[  4]   2.01-3.00   sec  43.0 MBytes   365 Mbits/sec
[  4]   3.00-4.01   sec  41.0 MBytes   341 Mbits/sec
[  4]   4.01-5.00   sec  38.6 MBytes   327 Mbits/sec
[  4]   5.00-6.00   sec  32.4 MBytes   270 Mbits/sec
[  4]   6.00-7.01   sec  37.1 MBytes   311 Mbits/sec
[  4]   7.01-8.00   sec  43.9 MBytes   370 Mbits/sec
[  4]   8.00-9.00   sec  40.6 MBytes   340 Mbits/sec
[  4]   9.00-10.00  sec  40.9 MBytes   343 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   382 MBytes   320 Mbits/sec                  sender
[  4]   0.00-10.00  sec   382 MBytes   320 Mbits/sec                  receiver

iperf Done.


Code:
root@pve-1:~# lspci -v | grep Ethernet
01:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
01:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
09:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
        DeviceName:  Intel Ethernet i210AT #1
0a:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
        DeviceName:  Intel Ethernet i210AT #2


Code:
root@pve-1:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface ens3f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.50.50/24
        gateway 192.168.50.1
        bridge-ports ens3f0
        bridge-stp off
        bridge-fd 0

iface eno1 inet manual

iface eno2 inet manual

iface ens3f1 inet manual

Code:
root@pve-1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ac:1f:6b:00:5e:e0 brd ff:ff:ff:ff:ff:ff
    altname enp9s0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ac:1f:6b:00:5e:e1 brd ff:ff:ff:ff:ff:ff
    altname enp10s0
4: ens3f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 0c:c4:7a:b7:3d:30 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
5: ens3f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:b7:3d:31 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:b7:3d:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.50/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:feb7:3d30/64 scope link
       valid_lft forever preferred_lft forever
7: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr100i0 state UNKNOWN group default qlen 1000
    link/ether 62:f3:29:c5:85:f0 brd ff:ff:ff:ff:ff:ff
8: fwbr100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:51:ad:15:f2:f8 brd ff:ff:ff:ff:ff:ff
9: fwpr100p0@fwln100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 16:ea:fa:43:fa:2f brd ff:ff:ff:ff:ff:ff
10: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether 22:82:d1:48:88:0e brd ff:ff:ff:ff:ff:ff
19: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i0 state UNKNOWN group default qlen 1000
    link/ether 52:61:4e:44:ef:e7 brd ff:ff:ff:ff:ff:ff
20: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 86:73:5a:2d:93:48 brd ff:ff:ff:ff:ff:ff
21: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether ee:74:ac:92:7f:4e brd ff:ff:ff:ff:ff:ff
22: fwln101i0@fwpr101p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
    link/ether 32:bc:05:59:ef:e3 brd ff:ff:ff:ff:ff:ff
root@pve-1:~#

Code:
root@pve-1:~# ethtool vmbr0
Settings for vmbr0:
        Supported ports: [  ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Unknown! (255)
        Auto-negotiation: off
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Link detected: yes
root@pve-1:~#

Code:
root@pve-1:~# ethtool ens3f0
Settings for ens3f0:
        Supported ports: [ TP ]
        Supported link modes:   100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes


I have looked at other forum threads and tried to paste the output of the relevant commands, but I am not able to spot the issue from my novice perspective.
Thanks a lot for the help.
 
There could be a lot of reasons. Did you connect your laptop by cable or WiFi? The speeds look a bit like a typical WiFi connection. What network setup do you have: switch, router/firewall, etc.?
 
I am using the same 10 Gbps NIC port for Proxmox's internal networking of VMs and containers.
What do you mean here?
Networking between VMs, containers, and the host is done over the vmbr0 bridge, where there is no speed limit except your processor.
 
There could be a lot of reasons. Did you connect your laptop by cable or WiFi? The speeds look a bit like a typical WiFi connection. What network setup do you have: switch, router/firewall, etc.?
I am on a WiFi 6 network, in close proximity to the router (Asus GT-AXE16000). I can understand WiFi being slower, but the speeds I am getting between the Proxmox host and my Windows laptop are far too low compared to the internet speeds (close to 1 Gbps) on the same Windows machine using the same router. There are no other switches in between.

What do you mean here?
Networking between VMs, containers, and the host is done over the vmbr0 bridge, where there is no speed limit except your processor.
I meant to clarify that I am not using the two 1 Gbps NIC ports or the second 10 Gbps port, i.e. I am not on a 1 Gbps NIC. But I am not sure whether it is a coincidence that iperf reports almost exactly 10 Gbps between the host and the containers (basically within Proxmox).


Try OVS Bridge
I do not know what this is. I will do some digging on my side.
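From a quick look, OVS appears to refer to Open vSwitch. Going by the Proxmox Open vSwitch documentation, the vmbr0 stanza in /etc/network/interfaces would look roughly like this; treat it as a sketch that assumes the openvswitch-switch package is installed and that it replaces the current Linux-bridge stanza:

Code:
# apt install openvswitch-switch   (prerequisite)

auto ens3f0
iface ens3f0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet static
        address 192.168.50.50/24
        gateway 192.168.50.1
        ovs_type OVSBridge
        ovs_ports ens3f0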
 
I meant to clarify that I am not using the two 1 Gbps NIC ports or the second 10 Gbps port, i.e. I am not on a 1 Gbps NIC. But I am not sure whether it is a coincidence that iperf reports almost exactly 10 Gbps between the host and the containers (basically within Proxmox).
Try the reverse direction with iperf3 -R.
Also: what's your CPU? How many vCPUs do the VMs have, and which CPU type? Is it a VM or an LXC? A Linux or Windows guest?
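For example, something like this, assuming the iperf3 server is still listening on the PVE host at 192.168.50.50:

Code:
# on the PVE host: leave the server running
root@pve-1:~# iperf3 -s

# on the Windows laptop: -R reverses the direction, so the host sends and the laptop receives
PS C:\> .\iperf3.exe -c 192.168.50.50 -R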
 
Try the reverse direction with iperf3 -R.
Also: what's your CPU? How many vCPUs do the VMs have, and which CPU type? Is it a VM or an LXC? A Linux or Windows guest?
I will run the iperf test with the additional -R parameter. I have already run iperf tests from the Proxmox host to the Windows laptop and vice versa, i.e. in both directions, but the results were similar.

CPU : Xeon E5-2687W v4 (12 cores, 3.00 GHz, LGA2011-3)
Motherboard : Supermicro X10SRL-F
Memory : 256 GB DDR4 ECC
NIC : Supermicro AOC-STG-i2T dual-port 10GbE
Router : Asus ROG Rapture GT-AXE16000

I installed Proxmox 8 on a bare-metal server with the above specs. I am running iperf from my Windows 11 laptop and desktop (connected over WiFi), maybe 8 feet from the router, with full signal strength on the WiFi 6 5 GHz bands (there are two of them, each offering approx. 4804 Mbps of bandwidth).

Given that the Proxmox host itself shows low speed, the iperf tests I have done against VMs inside Proxmox are similar (a bit less performant). So, in this case, I can eliminate the internal VMs and containers from the picture.
 
You can try iperf3 with -P 16 to use 16 parallel streams.
Wireless and internet speed tests reach their top speed thanks to multiple streams.
speedtest.net uses "Multi" connections by default; compare with "Single" (here, 400 Mbps vs. 80 Mbps).
Also, you still need to test with a wired connection.
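For example, roughly (still assuming the server on 192.168.50.50):

Code:
# 16 parallel streams from the Windows laptop to the PVE host
PS C:\> .\iperf3.exe -c 192.168.50.50 -P 16

# the same in the reverse direction
PS C:\> .\iperf3.exe -c 192.168.50.50 -P 16 -R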
 
You can try iperf3 with -P 16 to use 16 parallel streams.
Wireless and internet speed tests reach their top speed thanks to multiple streams.
speedtest.net uses "Multi" connections by default; compare with "Single" (here, 400 Mbps vs. 80 Mbps).
Also, you still need to test with a wired connection.
I will get back with the results of trying -R and also multiple streams.

I will get to the wired connection part. I hate that new laptops don't have LAN ports, but I will try to get an adapter to enable a wired connection.
 
btw, iperf3 between the PVE host and an Alpine container comes out roughly like this here (while the NICs are 1 Gbit/s):
32 Gbits/s on Xeon 4210
18 Gbits/s (-R 26 Gbits/s) on EPYC 7302P
16 Gbits/s on E5-2620 v2
65 Gbits/s on i5-8600K !
45 Gbits/s on i7-6700HQ (laptop)
40 Gbits/s on E3-1220 v3
edit: added some more tests
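The test itself is nothing special; roughly, and assuming the Alpine container's IP is 192.168.50.105 (just an example address):

Code:
# inside the Alpine container
apk add iperf3
iperf3 -s

# on the PVE host (add -R for the reverse direction)
iperf3 -c 192.168.50.105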
 
btw, iperf3 between the PVE host and an Alpine container is about 20 Gbit/s (-R: 30 Gbit/s) here on a Xeon 4210 and an EPYC 7302P (while the NICs are 1 Gbit/s)
Interesting. Can you please share some pointers on how your network is configured? My vmbr0 configuration is below.


Code:
root@pve-1:~# ethtool vmbr0
Settings for vmbr0:
        Supported ports: [  ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Unknown! (255)
        Auto-negotiation: off
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Link detected: yes
root@pve-1:~#

The speed is reported as 10000Mb/s by default; did you have to manually configure any options?
 
Nothing changed.
Because vmbr0 is a virtual bridge interface, the reported speed value is only cosmetic, not real.
 
btw, iperf3 between the PVE host and an Alpine container comes out roughly like this here (while the NICs are 1 Gbit/s):
32 Gbits/s on Xeon 4210
18 Gbits/s (-R 26 Gbits/s) on EPYC 7302P
16 Gbits/s on E5-2620 v2
65 Gbits/s on i5-8600K !
45 Gbits/s on i7-6700HQ (laptop)
40 Gbits/s on E3-1220 v3
edit: added some more tests
The results seem really good, and the E5-2687W v4 has better specs than the E3-1220 v3 on paper, but my out-of-the-box installation/configuration of Proxmox is not giving anything even close to compare. I am not sure where to start looking. I guess I am reasonably happy for now with the 10 Gbps I get between the PVE host and containers/VMs.

The low speed between the PVE host and external servers/laptops is a bummer, especially after I upgraded the router and NICs to accommodate 10 Gbps networking.
 
Apologies for the delay in replying. I just got off work.

The iperf3 test with -R produced almost the same result as the initial one (a maximum of ~420 Mbps).

Adding -P 16 gave comparatively more speed (~840 Mbps), but that is still a far cry from 10 Gbps :-(
 
I connected my laptop to the router's 10 Gbps port via a Thunderbolt adapter. This time the speed increased slightly, but the maximum is around 800-900 Mbps.

A weird side note is that I am able to SSH into a VM inside Proxmox but not into Proxmox itself while on the wired connection. So I used a VM inside Proxmox to run the iperf3 test instead of the Proxmox host itself.

I disabled the firewall in my McAfee antivirus. I am not sure what is blocking SSH and iperf access to the Proxmox host on the wired connection. If I switch to WiFi, I can connect to the Proxmox host.
 
The NIC is virtualized within VMs; that is different from an LXC/container.
Here on the i5-8600K, Windows 10 VM > Alpine LXC container iperf3: 28 Gbit/s (17 Gbit/s with -R), versus 65 Gbit/s from the PVE host > Alpine LXC.
(btw, I set the kernel option mitigations=off on all my hosts ...)
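On a GRUB-booted host that is roughly the following; a sketch, so check how your host actually boots (systemd-boot/ZFS installs use /etc/kernel/cmdline and proxmox-boot-tool refresh instead), and keep in mind that disabling CPU mitigations trades security for speed:

Code:
# /etc/default/grub on the PVE host
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

# apply and reboot
update-grub
reboot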
 
I can take a look at changing the mitigations attribute for the VMs.

But the original issue of not being able to get 10 Gbps to the Proxmox host, even on a wired connection, still exists. Unfortunately I don't have anything left to try. The OVS bridge suggestion seems to be for networking within Proxmox (for VMs and containers). For the Proxmox/PVE host itself, we are dealing with a simple, direct network connection between two hosts with a router in between.
 
mitigations=off is for the host kernel directly.
Your router actually acts like a switch if you stay within the same local subnet.
btw, try a direct connection between the PVE host and the wired host.
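For the direct test, something along these lines should do on the PVE host; a sketch that uses the unused second 10G port (ens3f1) and an arbitrary private subnet, with the laptop given a matching static address such as 10.10.10.2/24:

Code:
# temporary address on the spare 10G port, cable run straight to the laptop
ip link set ens3f1 up
ip addr add 10.10.10.1/24 dev ens3f1

# iperf3 server bound to that address; test from the laptop against 10.10.10.1
iperf3 -s -B 10.10.10.1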
 
mitigations=off is for the host kernel directly.
Your router actually acts like a switch if you stay within the same local subnet.
btw, try a direct connection between the PVE host and the wired host.
Thanks a lot for correcting my assumption. Let me try modifying the mitigations of the PVE host kernel. Maybe a direct connection between the PVE host and the wired host will show whether there is an issue with the router, if I can manage it.
 
mitigations=off is for the host kernel directly.
Your router actually acts like a switch if you stay within the same local subnet.
btw, try a direct connection between the PVE host and the wired host.
I tested with mitigations=off for the PVE host kernel and there is almost no difference in speed in the iperf test.

Code:
PS C:\Users\Ram\Downloads\iperf-3.1.3-win64\iperf-3.1.3-win64> .\iperf3.exe -c 192.168.50.50
Connecting to host 192.168.50.50, port 5201
[  4] local 192.168.50.55 port 52374 connected to 192.168.50.50 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  31.5 MBytes   263 Mbits/sec
[  4]   1.00-2.00   sec  28.2 MBytes   237 Mbits/sec
[  4]   2.00-3.00   sec  28.6 MBytes   240 Mbits/sec
[  4]   3.00-4.00   sec  32.0 MBytes   268 Mbits/sec
[  4]   4.00-5.01   sec  28.5 MBytes   238 Mbits/sec
[  4]   5.01-6.01   sec  28.8 MBytes   241 Mbits/sec
[  4]   6.01-7.00   sec  27.9 MBytes   235 Mbits/sec
[  4]   7.00-8.00   sec  29.5 MBytes   248 Mbits/sec
[  4]   8.00-9.00   sec  28.6 MBytes   240 Mbits/sec
[  4]   9.00-10.00  sec  27.9 MBytes   234 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   292 MBytes   245 Mbits/sec                  sender
[  4]   0.00-10.00  sec   291 MBytes   244 Mbits/sec                  receiver

iperf Done.

Code:
PS C:\Users\Ram\Downloads\iperf-3.1.3-win64\iperf-3.1.3-win64> .\iperf3.exe -c 192.168.50.50 -R
Connecting to host 192.168.50.50, port 5201
Reverse mode, remote host 192.168.50.50 is sending
[  4] local 192.168.50.55 port 52444 connected to 192.168.50.50 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  44.9 MBytes   376 Mbits/sec
[  4]   1.00-2.00   sec  47.1 MBytes   395 Mbits/sec
[  4]   2.00-3.00   sec  48.0 MBytes   402 Mbits/sec
[  4]   3.00-4.00   sec  52.4 MBytes   440 Mbits/sec
[  4]   4.00-5.00   sec  52.2 MBytes   438 Mbits/sec
[  4]   5.00-6.00   sec  49.9 MBytes   418 Mbits/sec
[  4]   6.00-7.00   sec  51.8 MBytes   435 Mbits/sec
[  4]   7.00-8.00   sec  51.6 MBytes   432 Mbits/sec
[  4]   8.00-9.00   sec  46.5 MBytes   389 Mbits/sec
[  4]   9.00-10.00  sec  46.5 MBytes   391 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   493 MBytes   413 Mbits/sec    0             sender
[  4]   0.00-10.00  sec   491 MBytes   412 Mbits/sec                  receiver

iperf Done.
 
