Proxmox Backup Server Kernel 6.8.12-9 download speed problem

ceca

Hello,

I just upgraded the Proxmox Backup Server VM running in my cluster to the no-subscription repositories, and with the latest kernel, 6.8.12-9, the download speed is abysmal.
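
For reference, this is the no-subscription repository entry I'm on now (the file path is just where I keep it, yours may differ):

Code:
# /etc/apt/sources.list.d/pbs-no-subscription.list
deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription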

The upload speed is fine, but the download speed is terrible!

Speed test with kernel 6.8.12-4:


root@pbs:~# uname -a
Linux pbs 6.8.12-4-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-4 (2024-11-06T15:04Z) x86_64 GNU/Linux
root@pbs:~#
root@pbs:~# wget https://proof.ovh.net/files/1Gb.dat
--2025-03-25 21:28:47-- https://proof.ovh.net/files/1Gb.dat
Resolving proof.ovh.net (proof.ovh.net)... 141.95.207.211, 2001:41d0:242:d300::
Connecting to proof.ovh.net (proof.ovh.net)|141.95.207.211|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1073741824 (1.0G) [application/octet-stream]
Saving to: ‘1Gb.dat.1’

1Gb.dat.1 25%[==========================================> ] 257.19M 49.9MB/s eta 15s


Test with iperf3:
root@pbs:~# iperf3 -c 10.100.20.1 -R
Connecting to host 10.100.20.1, port 5201
Reverse mode, remote host 10.100.20.1 is sending
[ 5] local 10.0.201.52 port 36972 connected to 10.100.20.1 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 1.00-2.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 2.00-3.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 3.00-4.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 4.00-5.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 5.00-6.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 6.00-7.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 7.00-8.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 8.00-9.00 sec 1.09 GBytes 9.39 Gbits/sec
[ 5] 9.00-10.00 sec 1.09 GBytes 9.39 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.9 GBytes 9.39 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 10.9 GBytes 9.39 Gbits/sec receiver

iperf Done.


Speed test with kernel 6.8.12-9:
root@pbs:~# uname -a
Linux pbs 6.8.12-9-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-9 (2025-03-16T19:18Z) x86_64 GNU/Linux
root@pbs:~#
root@pbs:~# wget https://proof.ovh.net/files/1Gb.dat
--2025-03-25 21:37:49-- https://proof.ovh.net/files/1Gb.dat
Resolving proof.ovh.net (proof.ovh.net)... 141.95.207.211, 2001:41d0:242:d300::
Connecting to proof.ovh.net (proof.ovh.net)|141.95.207.211|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1073741824 (1.0G) [application/octet-stream]
Saving to: ‘1Gb.dat’

1Gb.dat 0%[ ] 359.75K 71.2KB/s eta 4h 8m


Test with iperf3:

root@pbs:~# iperf3 -c 10.100.20.1 -R
Connecting to host 10.100.20.1, port 5201
Reverse mode, remote host 10.100.20.1 is sending
[ 5] local 10.0.201.52 port 36880 connected to 10.100.20.1 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 590 KBytes 4.83 Mbits/sec
[ 5] 1.00-2.00 sec 4.23 MBytes 35.5 Mbits/sec
[ 5] 2.00-3.00 sec 4.94 MBytes 41.4 Mbits/sec
[ 5] 3.00-4.00 sec 2.13 MBytes 17.8 Mbits/sec
[ 5] 4.00-5.00 sec 2.20 MBytes 18.4 Mbits/sec
[ 5] 5.00-6.00 sec 962 KBytes 7.88 Mbits/sec
[ 5] 6.00-7.00 sec 2.12 MBytes 17.8 Mbits/sec
[ 5] 7.00-8.00 sec 5.29 MBytes 44.4 Mbits/sec
[ 5] 8.00-9.00 sec 2.66 MBytes 22.3 Mbits/sec
[ 5] 9.00-10.00 sec 2.67 MBytes 22.4 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 27.8 MBytes 23.3 Mbits/sec 4268 sender
[ 5] 0.00-10.00 sec 27.7 MBytes 23.3 Mbits/sec receiver
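
The sender reports 4268 retransmits here, so packets are clearly being dropped somewhere on the receive path. For anyone reproducing this, a quick way to check for drops at the NIC level (a sketch; substitute your own interface name):

Code:
# per-interface RX/TX counters, including errors and drops
ip -s link show
# driver-level counters for a specific NIC
ethtool -S enp10s0f0np0 | grep -iE 'err|drop|miss'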
 
We reverted to kernel version 6.8.12-8-pve after observing issues with version 6.8.12-9-pve across our server fleet at OVH (validated on multiple Advance STOR-1 Gen 2 servers):

proxmox-boot-tool kernel pin 6.8.12-8-pve
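
To verify the pin took effect, and to undo it once a fixed kernel is released:

Code:
# list available kernels and the current pin
proxmox-boot-tool kernel list
# after rebooting, confirm the running kernel
uname -r
# remove the pin again later
proxmox-boot-tool kernel unpin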
 
Could you also try whether the opt-in 6.14 kernel is affected?
(`apt install proxmox-kernel-6.14`)
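
Roughly, that test would look like this (assuming the package is available from your configured repositories):

Code:
apt update
apt install proxmox-kernel-6.14
# reboot into the new kernel, then confirm:
uname -r
# and repeat the download test:
wget https://proof.ovh.net/files/1Gb.dat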

What NIC model?
This information would indeed be helpful (and which driver is used).
 
We haven't opted in yet, but we'll give it a try over the weekend.

As for the NIC-related details, please see below:

Code:
# ethtool -i enp10s0f0np0
driver: ice
version: 6.8.12-8-pve
firmware-version: 4.20 0x8001b91f 1.3346.0
expansion-rom-version: 
bus-info: 0000:0a:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

Code:
# ethtool -i enp10s0f1np1
driver: ice
version: 6.8.12-8-pve
firmware-version: 4.20 0x8001b91f 1.3346.0
expansion-rom-version:
bus-info: 0000:0a:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

Code:
# lshw -C network

  *-network:0
 description: Ethernet interface
 product: Ethernet Controller E810-XXV for SFP
 vendor: Intel Corporation
 physical id: 0
 bus info: pci@0000:0a:00.0
 logical name: enp10s0f0np0
 logical name: /dev/fb0
 version: 02
 serial: 04:7c:16:f1:18:d6
 capacity: 25Gbit/s
 width: 64 bits
 clock: 33MHz
 capabilities: pm msi msix pciexpress vpd bus_master cap_list rom ethernet physical fibre 1000bt-fd 25000bt-fd autonegotiation fb
 configuration: autonegotiation=off broadcast=yes depth=32 driver=ice driverversion=6.8.12-8-pve duplex=full firmware=4.20 0x8001b91f 1.3346.0 latency=0 link=yes mode=1920x1200 multicast=yes slave=yes visual=truecolor xres=1920 yres=1200
 resources: iomemory:fc0-fbf iomemory:fc0-fbf irq:24 memory:fcfa000000-fcfbffffff memory:fcfe010000-fcfe01ffff memory:f5300000-f53fffff memory:fcfd000000-fcfdffffff memory:fcfe220000-fcfe41ffff
 *-network:1
 description: Ethernet interface
 product: Ethernet Controller E810-XXV for SFP
 vendor: Intel Corporation
 physical id: 0.1
 bus info: pci@0000:0a:00.1
 logical name: enp10s0f1np1
 version: 02
 serial: 04:7c:16:f1:18:d6
 capacity: 25Gbit/s
 width: 64 bits
 clock: 33MHz
 capabilities: pm msi msix pciexpress vpd bus_master cap_list rom ethernet physical fibre 1000bt-fd 25000bt-fd autonegotiation
 configuration: autonegotiation=off broadcast=yes driver=ice driverversion=6.8.12-8-pve duplex=full firmware=4.20 0x8001b91f 1.3346.0 latency=0 link=yes multicast=yes slave=yes
 resources: iomemory:fc0-fbf iomemory:fc0-fbf irq:24 memory:fcf8000000-fcf9ffffff memory:fcfe000000-fcfe00ffff memory:f5200000-f52fffff memory:fcfc000000-fcfcffffff memory:fcfe020000-fcfe21ffff
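
In case it helps narrow things down, offload, ring, and coalescing settings are common suspects in NIC driver regressions, so comparing them between the two kernels might be revealing (a diagnostic sketch using the first port from above):

Code:
# offload features in effect (run under both 6.8.12-8 and 6.8.12-9 and diff)
ethtool -k enp10s0f0np0
# ring buffer sizes
ethtool -g enp10s0f0np0
# interrupt coalescing settings
ethtool -c enp10s0f0np0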


Hope this helps!