[SOLVED] Network speed stuck at 100Mb/s - Proxmox VE 8.0.2

robbdog21

Hey,

I've just installed Proxmox VE 8.0.2 on two different types of PC and I'm seeing the same issue on both:
they are stuck at 100Mb/s network speeds, and I'm not sure where to go from here.
I did fresh installs just last week and made sure both machines are up to date.

Code:
root@TestLabServer:~# ethtool eno1
Settings for eno1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 100Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        MDI-X: on (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

Code:
root@TestLabServer:~# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-10-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-10-pve: 6.2.16-10
proxmox-kernel-6.2: 6.2.16-10
proxmox-kernel-6.2.16-8-pve: 6.2.16-8
pve-kernel-6.2.16-3-pve: 6.2.16-3

I can also confirm this behaviour.
My two Proxmox hosts only managed 10 Mbit/s at first. After some trial and error I added the following line to /etc/network/interfaces:
Code:
pre-up ethtool -K enp0s31f6 rx off tx off
After that the speed went from 10 Mbit/s to 100 Mbit/s.
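For reference, this is roughly where such a line sits in /etc/network/interfaces (a sketch only; the interface name enp0s31f6 is taken from the post above, adjust everything to your own setup):

Code:
auto enp0s31f6
iface enp0s31f6 inet manual
        # disable rx/tx checksum offload before the interface comes up
        pre-up ethtool -K enp0s31f6 rx off tx off

Reloading the network config (e.g. with ifreload -a) or rebooting is needed for the pre-up hook to run.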

I also checked my NIC with ethtool and it outputs the following:
Code:
root@pve2:~# ethtool enp0s31f6
Settings for enp0s31f6:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        MDI-X: on (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes
Since it showed 1000Mb/s, and my FritzBox also showed the Proxmox host connected over a 1 Gbit/s link, I assumed everything would work now. But the throughput was still stuck at 100 Mbit/s.
My finding was that if I execute
Code:
ethtool -s enp0s31f6 speed 1000 duplex full autoneg off
I get 1000 Mbit/s (I tested it with iperf for more than 10 minutes), but the second I start a VM migration, iperf drops back to the 100 Mbit/s it had before and stays at that speed, no matter what happens in the background.
Also, even though I disabled auto-negotiation with that command, if I then display the NIC status with ethtool again, it still says auto-negotiation is on.
My assumption so far is that Proxmox resets the NIC settings before executing certain actions such as a VM migration. Can someone help us resolve the issue?
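If the settings really are being reset, one way to reapply the forced speed every time the interface comes up is a pre-up hook in /etc/network/interfaces (a sketch, reusing the enp0s31f6 name from above; adjust to your interface):

Code:
iface enp0s31f6 inet manual
        # re-force gigabit each time the interface is brought up
        pre-up ethtool -s enp0s31f6 speed 1000 duplex full autoneg off

Note this only helps if whatever resets the NIC also cycles the interface down and up; it will not catch a renegotiation that happens at runtime.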
 
Some FritzBoxes only support 100 Mbit/s on some of their "lower" ports; maybe that is the problem?
(I have only ever encountered such strange behaviour with defective cables.)
 
That was what I thought, but I checked all my cables and I checked the UI.


What's more confusing is that when I run an iperf test I reach nearly 1 Gbit/s (around 940 Mbit/s in one direction and 860 Mbit/s in the other), which is fine by me. But during a VM migration this drops back to 100 Mbit/s.

This is my situation right now:
pve1: ethtool eno1
Code:
Settings for eno1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        MDI-X: on (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

pve2: ethtool enp0s31f6
Code:
Settings for enp0s31f6:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        MDI-X: on (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

If I now run an iperf test from pve1 to pve2, it gives the following results:

Code:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  5.19 MBytes  43.5 Mbits/sec  314   89.1 KBytes    
[  5]   1.00-2.00   sec  1.12 MBytes  9.38 Mbits/sec  206   55.1 KBytes    
[  5]   2.00-3.00   sec  1.12 MBytes  9.38 Mbits/sec  114   1.41 KBytes    
[  5]   3.00-4.00   sec  1.12 MBytes  9.38 Mbits/sec   83   1.41 KBytes    
[  5]   4.00-5.00   sec  1.49 MBytes  12.5 Mbits/sec  135   50.9 KBytes    
[  5]   5.00-6.00   sec  1.49 MBytes  12.5 Mbits/sec  141   46.7 KBytes    
[  5]   6.00-7.00   sec  1.12 MBytes  9.38 Mbits/sec  130   45.2 KBytes    
[  5]   7.00-8.00   sec  1.49 MBytes  12.5 Mbits/sec  133   48.1 KBytes    
[  5]   8.00-9.00   sec  1.12 MBytes  9.38 Mbits/sec  112   39.6 KBytes    
[  5]   9.00-10.00  sec  1.18 MBytes  9.90 Mbits/sec   71   1.41 KBytes    
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  16.4 MBytes  13.8 Mbits/sec  1439             sender
[  5]   0.00-10.00  sec  15.5 MBytes  13.0 Mbits/sec                  receiver

Right after this test finished I executed this command on pve1:
Code:
ethtool -s eno1 speed 1000 duplex full autoneg off

and started the iperf test again. Now it shows this:

Code:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   113 MBytes   947 Mbits/sec    0    355 KBytes    
[  5]   1.00-2.00   sec   112 MBytes   937 Mbits/sec    0    372 KBytes    
[  5]   2.00-3.00   sec   111 MBytes   934 Mbits/sec    0    372 KBytes    
[  5]   3.00-4.00   sec   111 MBytes   932 Mbits/sec    0    372 KBytes    
[  5]   4.00-5.00   sec   112 MBytes   937 Mbits/sec    0    390 KBytes    
[  5]   5.00-6.00   sec   111 MBytes   931 Mbits/sec    0    390 KBytes    
[  5]   6.00-7.00   sec   112 MBytes   936 Mbits/sec    0    390 KBytes    
[  5]   7.00-8.00   sec   111 MBytes   935 Mbits/sec    0    390 KBytes    
[  5]   8.00-9.00   sec   112 MBytes   936 Mbits/sec    0    390 KBytes    
[  5]   9.00-10.00  sec   111 MBytes   929 Mbits/sec    0    390 KBytes    
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver

So 1 Gbit/s is possible, but it's not persistent. If I wait some time, restart one of the hosts, or (as mentioned) start a VM migration, the 1 Gbit/s speed is gone.

Edit: I ran the test again about 5 minutes after the successful 1 Gbit/s test, and we are back to the slow 100 Mbit/s speed without changing anything:

Code:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.75 MBytes  39.8 Mbits/sec  264   36.8 KBytes     
[  5]   1.00-2.00   sec  1.12 MBytes  9.38 Mbits/sec   99   55.1 KBytes     
[  5]   2.00-3.00   sec  1.49 MBytes  12.5 Mbits/sec  146   48.1 KBytes     
[  5]   3.00-4.00   sec  1.12 MBytes  9.38 Mbits/sec  135   48.1 KBytes     
[  5]   4.00-5.00   sec  1.49 MBytes  12.5 Mbits/sec  136   48.1 KBytes     
[  5]   5.00-6.00   sec  1.12 MBytes  9.38 Mbits/sec  103   1.41 KBytes     
[  5]   6.00-7.00   sec  1.12 MBytes  9.38 Mbits/sec  121   53.7 KBytes     
[  5]   7.00-8.00   sec  1.49 MBytes  12.5 Mbits/sec  143   45.2 KBytes     
[  5]   8.00-9.00   sec  1.49 MBytes  12.5 Mbits/sec  162   41.0 KBytes     
[  5]   9.00-10.00  sec   764 KBytes  6.25 Mbits/sec   95   1.41 KBytes     
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  15.9 MBytes  13.4 Mbits/sec  1404             sender
[  5]   0.00-10.00  sec  15.2 MBytes  12.8 Mbits/sec                  receiver
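To catch the exact moment the link drops back, you could log the negotiated speed over time (a sketch; eno1 is a placeholder for your interface name, and the speed value comes from the kernel's sysfs attribute):

Code:
# print the negotiated link speed once per second with a timestamp,
# so a renegotiation during a migration shows up immediately
while true; do
    printf '%s %s Mbit/s\n' "$(date +%T)" "$(cat /sys/class/net/eno1/speed)"
    sleep 1
done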
Check your network side (switch, cables, etc.).

You can force the speed to gigabit and inspect what fails then:

Code:
ethtool -s eno1 autoneg on speed 1000 duplex full
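If forcing the link doesn't stick, another common approach (a sketch, not something tried in this thread) is to leave auto-negotiation on but advertise only the gigabit mode, so both ends can only negotiate 1000 Mbit/s:

Code:
# keep autoneg enabled but advertise only 1000baseT/Full
# (0x020 is ethtool's bitmask for 1000baseT Full, per ethtool(8))
ethtool -s eno1 advertise 0x020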

I tried forcing the speed to gigabit, but no luck.
I figured out that the HP Slice PC I'm using has a faulty network card and will only do 100 Mbit/s.
I have about 15 HP Slices, so I swapped the SSD into another one and now I have gigabit speeds. Yay!

Also, the other PC I have was getting gigabit speeds; just the QNAP is stuck at 100 Mbit/s.
Probably a dodgy network cable.

Thanks for the quick reply :)