[SOLVED] Proxmox stuck at 100 mbps

aditya333

New Member
Mar 13, 2023
Hello folks. A new guy here trying to set up Proxmox.
I have noticed that my Proxmox installation can only reach 100 Mbit/s. I had an Ubuntu installation before moving to Proxmox, which used to run at the full 1000 Mbit/s. I would really appreciate some help getting this running at gigabit speed.
Below are a few debug commands I've run, gathered from other threads. Please let me know if I should add more info here.



Code:
iperf3 (Proxmox -> server, my laptop -> client)

Connecting to host 192.168.1.10, port 5201

[  5] local 192.168.1.126 port 63658 connected to 192.168.1.102 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.01   sec  2.03 MBytes  17.0 Mbits/sec                
[  5]   1.01-2.01   sec   465 KBytes  3.81 Mbits/sec                
[  5]   2.01-3.00   sec  1000 KBytes  8.23 Mbits/sec                
[  5]   3.00-4.00   sec  2.33 MBytes  19.6 Mbits/sec                
[  5]   4.00-5.00   sec  3.42 MBytes  28.7 Mbits/sec                
[  5]   5.00-6.00   sec  3.76 MBytes  31.6 Mbits/sec                
[  5]   6.00-7.00   sec  3.68 MBytes  30.8 Mbits/sec                
[  5]   7.00-8.00   sec  3.71 MBytes  31.1 Mbits/sec                
[  5]   8.00-9.00   sec  3.02 MBytes  25.4 Mbits/sec                
[  5]   9.00-10.01  sec  1.42 MBytes  11.8 Mbits/sec                
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.01  sec  24.8 MBytes  20.8 Mbits/sec                  sender
[  5]   0.00-10.09  sec  24.7 MBytes  20.5 Mbits/sec                  receiver

Code:
dmesg |grep -i nic

[    3.723741] systemd[1]: Listening on fsck to fsckd communication Socket.
[    4.565213] softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
[   17.376747] igc 0000:56:00.0 enp86s0: NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
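As a side note, the negotiated speed can also be read directly, without digging through dmesg. This is just a sketch, using the interface name from the dmesg line above (substitute your own):

```shell
# Read the negotiated link speed straight from sysfs
# (enp86s0 is the interface name from the log above):
cat /sys/class/net/enp86s0/speed          # prints e.g. 100 or 1000
ethtool enp86s0 | grep -E 'Speed|Duplex'  # same info via ethtool
```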

Code:
ethtool enp86s0

Settings for enp86s0:
        Supported ports: [  ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
                                2500baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
                                2500baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 100Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        MDI-X: off (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes
 
Hi,

What version of PVE are you running (pveversion -v)?
Are there any related errors/warnings in the system log (journalctl -b)? You can attach the log here for us to sift through.

Additionally, the output of lspci -v -s 0000:56:00.0 might be interesting.

I'd also suggest trying to swap out the cable, to see if that has gone bad.
 
Hi @cheiss, thanks a lot for your reply. I have added the requested information below.
I don't think it's the cable. The same cable runs to another Unraid server, which reaches 1000 Mbit/s through the same switch/AP in between. I have also tried a Cat 6 cable, with the same results.
In a desperate attempt to solve this, I updated my BIOS, which renamed my device from enp86s0 to enp100s0. But that didn't fix it either.
journalctl -b was huge, so I piped only errors and warnings.

pveversion -v
Code:
Linux pve 6.1.15-1-pve #1 SMP PREEMPT_DYNAMIC PVE 6.1.15-1 (2023-03-08T08:53Z) x86_64

root@pve:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 6.1.15-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-6.1: 7.3-6
pve-kernel-helper: 7.3-6
pve-kernel-5.15: 7.3-2
pve-kernel-6.1.15-1-pve: 6.1.15-1
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-6
libpve-storage-perl: 7.3-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-3
pve-qemu-kvm: 7.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

journalctl -b | grep -i warn
Code:
Mar 14 18:26:00 pve kernel: x86/split lock detection: #AC: crashing the kernel on kernel split_locks and warning on user-space split_locks

journalctl -b | grep -i error
Code:
Mar 14 18:26:00 pve kernel: RAS: Correctable Errors collector initialized.
Mar 14 18:26:00 pve kernel: Serial bus multi instantiate pseudo device driver INT3515:00: error -ENXIO: IRQ index 1 not found
Mar 14 18:26:00 pve kernel: Serial bus multi instantiate pseudo device driver INT3515:00: error -ENXIO: Error requesting irq at index 1
Mar 14 18:26:00 pve kernel: EDAC igen6 MC1: HANDLING IBECC MEMORY ERROR
Mar 14 18:26:00 pve kernel: EDAC igen6 MC0: HANDLING IBECC MEMORY ERROR
Mar 14 18:26:01 pve kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Mar 14 18:26:01 pve kernel: spi-nor: probe of spi0.0 failed with error -524

lspci -v -s 0000:56:00.0

yields nothing
 
journalctl -b was huge, so I piped only errors and warnings
You can also attach it here as a file if you want.

And to really get all warnings and errors from the syslog (good intuition!), one can use journalctl -p4 -b. (The grep method you used only catches messages that literally contain the string "warning"/"error", which more often than not is not the case.)
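For example (priority levels as documented in journalctl(1); the output filename is just an example):

```shell
# Filter the journal by syslog priority instead of grepping for keywords:
journalctl -b -p 4              # warnings and worse from the current boot
journalctl -b -p err            # errors and worse only
journalctl -b -p 4 > log.txt    # redirect to a file to attach to a post
```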

lspci -v -s 0000:56:00.0

yields nothing
I updated my BIOS, which renamed my device from enp86s0 to enp100s0
Yeah, that's due to the PCIe bus address having changed after the BIOS update. Should be lspci -v -s 0000:100:00.0 now.

Apart from that, since you are running the newest kernel - can you try booting the "old" 5.15.85 kernel? If that fixes it, it is indeed a driver regression.
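Roughly, pinning an older kernel for a single boot looks like this (the version string is an example; pick one from the list):

```shell
# List installed kernels, then pin one for the next boot only:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.15.85-1-pve --next-boot
reboot
# The pin is dropped again automatically after that one boot.
```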
 
@cheiss, the lspci command gives the following result:
root@pve:~# lspci -v -s 0000:100:00.0
lspci: -s: Invalid bus number

Also, I booted into 5.15.30-1-pve by pinning it with proxmox-boot-tool for the next boot (and rebooted). ethtool gives the same output:

Code:
root@pve:~# ethtool enp100s0
Settings for enp100s0:
        Supported ports: [  ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
                                2500baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
                                2500baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 100Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        MDI-X: off (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes
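One more debugging idea, as a sketch (interface name as above; note that forcing the speed can drop the link entirely if the other end disagrees):

```shell
# Restrict what the NIC advertises to 1000baseT/Full only
# (0x020 is ethtool's bitmask for that mode), to test negotiation:
ethtool -s enp100s0 advertise 0x020
# Or force the speed outright (the switch port must then match):
ethtool -s enp100s0 speed 1000 duplex full autoneg off
# Revert to normal autonegotiation afterwards:
ethtool -s enp100s0 autoneg on
```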
 
@cheiss

This might be the info you are looking for:

Code:
root@pve:~# lspci -v -s 0000:64:00.0
64:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)
        Subsystem: Intel Corporation Ethernet Controller I225-V
        Flags: bus master, fast devsel, latency 0, IRQ 16, IOMMU group 16
        Memory at 84200000 (32-bit, non-prefetchable) [size=1M]
        Memory at 84300000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 48-21-0b-ff-ff-37-b8-d4
        Capabilities: [1c0] Latency Tolerance Reporting
        Capabilities: [1f0] Precision Time Measurement
        Capabilities: [1e0] L1 PM Substates
        Kernel driver in use: igc
        Kernel modules: igc

I think this gives some useful information, but I am not experienced or knowledgeable enough to interpret it:

Code:
root@pve:~# lshw -C network
  *-network UNCLAIMED     
       description: Network controller
       product: Intel Corporation
       vendor: Intel Corporation
       physical id: 14.3
       bus info: pci@0000:00:14.3
       version: 01
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress msix cap_list
       configuration: latency=0
       resources: iomemory:600-5ff memory:603d1ac000-603d1affff
  *-network
       description: Ethernet interface
       product: Ethernet Controller I225-V
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:64:00.0
       logical name: enp100s0
       version: 03
       serial: 48:21:0b:37:b8:d4
       size: 100Mbit/s
       capacity: 1Gbit/s
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress bus_master cap_list ethernet physical 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=igc driverversion=5.15.85-1-pve duplex=full firmware=1085:8770 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
       resources: irq:16 memory:84200000-842fffff memory:84300000-84303fff
  *-network
       description: Ethernet interface
       physical id: 1
       logical name: vmbr0
       serial: 48:21:0b:37:b8:d4
       size: 100Mbit/s
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=bridge driverversion=2.3 firmware=N/A ip=192.168.1.102 link=yes multicast=yes speed=100Mbit/s
 
This was hell! However, I was able to figure out the issue.

I was under the impression that Intel NICs are better when it comes to software and driver support. This experience has changed my viewpoint.
I am able to get 1000 Mbit/s when using a different switch/Wi-Fi AP. The weird thing is, my other server with its Realtek NICs works flawlessly with the same switch, but the Intel I225-V doesn't! I use OpenWrt on all my switches, APs and routers; this one was no different.

Scenario:

I was using an Archer C6 with OpenWrt. It was not able to negotiate 1000 Mbit/s with the Intel I225-V. However, the Realtek onboard NIC on my other Unraid server worked with no issues.
When I switched to my old Archer C7 with the same version of OpenWrt installed, it worked! I love my Archer C7!
I don't know where the problem lies: with the Intel NIC hardware, or with the Intel NIC drivers in the Linux kernel.

@cheiss , if you are interested.
 
Glad you have solved this!
Please just finally mark the thread as SOLVED by editing the first post (there should be a dropdown near the title), so others with the same problem can find it more easily! Thanks! :)

I was under the impression that Intel NICs are better when it comes to software and driver support.
This was my impression too until now.

The weird thing is, my other server with its Realtek NICs works flawlessly with the same switch, but the Intel I225-V doesn't!
If I understand correctly, so Realtek NIC <-> Switch negotiates 1Gbit/s just fine, but I225-v <-> Switch does not? That is indeed very weird.

I don't know where the problem lies: with the Intel NIC hardware, or with the Intel NIC drivers in the Linux kernel.
I guess this is some sort of firmware or driver issue, if it worked fine with Ubuntu before.

If you want to "experiment" further, you could try the older LTS kernel (5.15.z) or the newest opt-in kernel (6.2), seeing if they fix things. If one of them does, there really was a regression somewhere in between.
 
If I understand correctly, so Realtek NIC <-> Switch negotiates 1Gbit/s just fine, but I225-v <-> Switch does not? That is indeed very weird.
Yes. That's right.

If you want to "experiment" further, you could try the older LTS kernel (5.15.z) or the newest opt-in kernel (6.2), seeing if they fix things. If one of them does, there really was a regression somewhere in between.
I tried with 6.1.15-1-pve and 5.15.85-1-pve: the same issue. I might check 6.2, since I am trying to get CPU pinning working with Alder Lake. But this is a homelab kind of affair, so I might not get time to experiment right away. I may post results here in the future.
 
I have the same issue on Proxmox VE and Proxmox Backup Server.
The Intel NICs get the auto-negotiation wrong; Realtek and Broadcom are fine.

The cables are OK (they work in other scenarios).
The switch is cheap but OK (other servers have no problem reaching 1 Gbit/s).
The NICs are OK (they do 1 Gbit/s under other OSes).

This problem is now critical for me (I have already swapped some of the NICs for Broadcom, Realtek, etc...).
Should I try the 6.2 kernel?
 
OK - the 6.2 kernel is a nice one, but it doesn't help.
I'm running 20 Proxmox VE or PBS hosts, and only some of them are affected by this bug.
Next I will fiddle around with the switch (all affected systems are on one switch).
The switch itself is fine (I can achieve 1 Gbit/s on the same ports, but with other OSes).

In my case it affects only NICs of this type (on different servers):
Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
 
The bug is related to some power-saving functions (EEE, "green Ethernet", etc.) on the switch, in combination with the Linux kernel.

Found some kernel bug reports.
-> solved
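For anyone hitting the same thing: besides turning off EEE/"green Ethernet" on the switch port, it can also be checked and disabled on the Linux side. This is just a sketch - the interface name is an example, and driver support for these options varies:

```shell
# Show the current Energy-Efficient Ethernet state of the link:
ethtool --show-eee enp86s0
# Disable EEE on the NIC side (if the driver supports it):
ethtool --set-eee enp86s0 eee off
```

The setting does not survive a reboot, so if it helps, it needs to be applied persistently (e.g. via a post-up hook in /etc/network/interfaces) - or fixed on the switch instead.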
 
The bug is related to some power-saving functions (EEE, "green Ethernet", etc.) on the switch, in combination with the Linux kernel.

Found some kernel bug reports.
-> solved
Thanks for reporting back!

Wouldn't have guessed that (esp. that it's the switch at fault), glad you could solve this in the end!
 
