Very, very slow (~800 KB/s) download connections from Proxmox server, upload is OK

Hello,

I have a big problem with the download speed from my Proxmox server. The server has quite a good internet connection in a data center. The upload from my PC to the server runs at the expected speed, but the download is very slow, at approx. 800 KB/s.

Running speedtest on the server shows the expected result:

Code:
root@pve:~# speedtest
Retrieving speedtest.net configuration...
Testing from (XX.XXX.XXX.XXX)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Mobile Breitbandnetze GmbH (Freisbach) [52.73 km]: 5.776 ms
Testing download speed................................................................................
Download: 276.63 Mbit/s
Testing upload speed......................................................................................................
Upload: 264.27 Mbit/s

So there should be plenty of bandwidth available.

When uploading a 270 MB file to the server I get this (my local upload is simply that slow, so this is expected):

Code:
joerg@flummi:~/tmp$ scp /home/joerg/tmp/testfile.bmp root@server:/root/testfile.bmp
testfile.bmp                                                              55%  145MB   3.6MB/s   00:32 ETA

... but when downloading from the server I get this extremely slow connection (my local download line should manage approx. 35 MB/s):

Code:
joerg@flummi:~/tmp$ scp root@server:/root/testfile.bmp /home/joerg/temp/testfile.bmp
testfile.bmp                                                               3%   10MB  813.9KB/s   05:16 ETA

Running this from two other locations with different providers gives the same low speed. I tried iperf3, scp, WinSCP, FileZilla and a plain HTTP download, every time with the same result. Running the tests inside a Debian LXC container and a Debian VM gives the same result as well. Switching the Proxmox firewall on and off has no effect.
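
For the HTTP download test, something like the following is enough (the URL is only an example; downloading to /dev/null keeps the local disk out of the measurement, and wget prints the average rate at the end):

Code:
wget -O /dev/null http://<server-ip>/testfile.bmp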

The CPU is an Intel Xeon Silver 4208, and top/htop does not show any relevant CPU usage while transferring the file; and even if it did, the transfer should not be this slow.

The network configuration on the server is standard:

Code:
root@pve:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address xx.xxx.xxx.xxx/24
        gateway xx.xxx.xxx.x
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

The network card is a 10G card:

Code:
67:00.0 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 09)
67:00.1 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 09)

The connection itself looks fine:

Code:
root@pve:~# ethtool eno1
Settings for eno1:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

Any help or ideas are welcome. If you need more information, please just tell me.
 
Thanks for your reply.

I tried iperf3 in both directions: once with the -R option on my client (iperf3 -c <IP address> -R), and once with my PC as the server (iperf3 -s) and the remote server as the client. Same result: 800 KB/s from remote to local vs. 3.5 MB/s from local to remote.
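
For clarity, these were roughly the invocations (IP addresses replaced with placeholders):

Code:
# iperf3 server on the Proxmox host, client on my PC
iperf3 -s                     # on the server
iperf3 -c <server-ip>         # on the PC: PC -> server (expected speed)
iperf3 -c <server-ip> -R      # on the PC: server -> PC, reverse mode (~800 KB/s)

# and the other way round, with my PC as the iperf3 server
iperf3 -s                     # on the PC
iperf3 -c <pc-ip>             # on the server: server -> PC (same ~800 KB/s)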

Something I hadn't tried until now:
scp from the remote server to my HiDrive@Strato. The result is nearly the same: the transfer started at 2.5 MB/s and immediately dropped, ending up at 1.2 MB/s. The other way round I get 40 MB/s downloading from Strato to my server.
 
Did you ask the hosting company for support? Maybe they know about a routing issue ...
 
Yes, I already asked the technician from the hosting company, and he told me that there is no throttling at their firewall or any other problem. I will ask him again, especially about the possibility of a routing problem.

And I know it used to be faster. Please don't ask me when it started; it was some time ago, while the server was still running Proxmox VE 6.x. Someone else had to download the server backups, and it took him a while to notice the slow transfer rate. At first I thought it was a temporary problem.

Nothing relevant changed on the server between the normal speed and the lower speed; I only installed the official updates from Proxmox (enterprise repo). The upgrade from Proxmox VE 6 to 7 didn't help either.

Thanks again for your fast reply.
 
Try checking the interface statistics; maybe we'll see some errors on the interface:

Code:
ethtool -S eno1
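
To make the relevant counters easier to spot, you can also filter the output, for example:

Code:
ethtool -S eno1 | grep -iE 'err|drop|discard|miss|fail'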
 
Try checking the interface statistics; maybe we'll see some errors on the interface:

Code:
ethtool -S eno1

No errors as far as I can tell.

Code:
root@pve:~# ethtool -S eno1
NIC statistics:
     rx_packets: 106676956
     tx_packets: 73964129
     rx_bytes: 14798422938
     tx_bytes: 37304589086
     rx_errors: 0
     tx_errors: 0
     rx_dropped: 0
     tx_dropped: 0
     collisions: 0
     rx_length_errors: 0
     rx_crc_errors: 0
     rx_unicast: 61677992
     tx_unicast: 73954477
     rx_multicast: 1131020
     tx_multicast: 9449
     rx_broadcast: 43867898
     tx_broadcast: 201
     rx_unknown_protocol: 0
     tx_linearize: 0
     tx_force_wb: 1
     tx_busy: 0
     rx_alloc_fail: 0
     rx_pg_alloc_fail: 0
     tx-0.packets: 8254475
     tx-0.bytes: 4748367472
     rx-0.packets: 4789593
     rx-0.bytes: 1164567167
     tx-1.packets: 8418460
     tx-1.bytes: 4677929228
     rx-1.packets: 5021036
     rx-1.bytes: 1268597629
     tx-2.packets: 8704809
     tx-2.bytes: 5070836005
     rx-2.packets: 4937932
     rx-2.bytes: 1179646565
     tx-3.packets: 8709278
     tx-3.bytes: 5100238438
     rx-3.packets: 4858934
     rx-3.bytes: 1086906179
     tx-4.packets: 8897104
     tx-4.bytes: 5302494844
     rx-4.packets: 4975891
     rx-4.bytes: 1181572833
     tx-5.packets: 8793563
     tx-5.bytes: 5241360169
     rx-5.packets: 4839915
     rx-5.bytes: 1054631126
     tx-6.packets: 2247034
     tx-6.bytes: 710272956
     rx-6.packets: 47070258
     rx-6.bytes: 3257871228
     tx-7.packets: 2154429
     tx-7.bytes: 661234326
     rx-7.packets: 3331660
     rx-7.bytes: 552711427
     tx-8.packets: 2110631
     tx-8.bytes: 609121626
     rx-8.packets: 3275909
     rx-8.bytes: 569805226
     tx-9.packets: 2512146
     tx-9.bytes: 744912588
     rx-9.packets: 3587128
     rx-9.bytes: 529758555
     tx-10.packets: 2161360
     tx-10.bytes: 708038277
     rx-10.packets: 3286922
     rx-10.bytes: 540077216
     tx-11.packets: 2295346
     tx-11.bytes: 937920803
     rx-11.packets: 3404097
     rx-11.bytes: 530754893
     tx-12.packets: 2348509
     tx-12.bytes: 765219584
     rx-12.packets: 3499830
     rx-12.bytes: 511826874
     tx-13.packets: 2099130
     tx-13.bytes: 639928651
     rx-13.packets: 3266373
     rx-13.bytes: 475366067
     tx-14.packets: 2192949
     tx-14.bytes: 785621433
     rx-14.packets: 3266406
     rx-14.bytes: 450196990
     tx-15.packets: 2064906
     tx-15.bytes: 601092686
     rx-15.packets: 3265072
     rx-15.bytes: 444132963
     veb.rx_bytes: 0
     veb.tx_bytes: 0
     veb.rx_unicast: 0
     veb.tx_unicast: 0
     veb.rx_multicast: 0
     veb.tx_multicast: 0
     veb.rx_broadcast: 0
     veb.tx_broadcast: 0
     veb.rx_discards: 0
     veb.tx_discards: 0
     veb.tx_errors: 0
     veb.rx_unknown_protocol: 0
     veb.tc_0_tx_packets: 0
     veb.tc_0_tx_bytes: 0
     veb.tc_0_rx_packets: 0
     veb.tc_0_rx_bytes: 0
     veb.tc_1_tx_packets: 0
     veb.tc_1_tx_bytes: 0
     veb.tc_1_rx_packets: 0
     veb.tc_1_rx_bytes: 0
     veb.tc_2_tx_packets: 0
     veb.tc_2_tx_bytes: 0
     veb.tc_2_rx_packets: 0
     veb.tc_2_rx_bytes: 0
     veb.tc_3_tx_packets: 0
     veb.tc_3_tx_bytes: 0
     veb.tc_3_rx_packets: 0
     veb.tc_3_rx_bytes: 0
     veb.tc_4_tx_packets: 0
     veb.tc_4_tx_bytes: 0
     veb.tc_4_rx_packets: 0
     veb.tc_4_rx_bytes: 0
     veb.tc_5_tx_packets: 0
     veb.tc_5_tx_bytes: 0
     veb.tc_5_rx_packets: 0
     veb.tc_5_rx_bytes: 0
     veb.tc_6_tx_packets: 0
     veb.tc_6_tx_bytes: 0
     veb.tc_6_rx_packets: 0
     veb.tc_6_rx_bytes: 0
     veb.tc_7_tx_packets: 0
     veb.tc_7_tx_bytes: 0
     veb.tc_7_rx_packets: 0
     veb.tc_7_rx_bytes: 0
     port.rx_bytes: 15173286416
     port.tx_bytes: 37670012174
     port.rx_unicast: 61678022
     port.tx_unicast: 73954477
     port.rx_multicast: 1131020
     port.tx_multicast: 69119
     port.rx_broadcast: 43719521
     port.tx_broadcast: 148789
     port.tx_errors: 0
     port.rx_dropped: 0
     port.tx_dropped_link_down: 0
     port.rx_crc_errors: 0
     port.illegal_bytes: 0
     port.mac_local_faults: 0
     port.mac_remote_faults: 0
     port.tx_timeout: 0
     port.rx_csum_bad: 49374
     port.rx_length_errors: 0
     port.link_xon_rx: 0
     port.link_xoff_rx: 0
     port.link_xon_tx: 0
     port.link_xoff_tx: 0
     port.rx_size_64: 49364504
     port.rx_size_127: 46701201
     port.rx_size_255: 3377831
     port.rx_size_511: 1906488
     port.rx_size_1023: 845235
     port.rx_size_1522: 4333304
     port.rx_size_big: 0
     port.tx_size_64: 1865831
     port.tx_size_127: 44904420
     port.tx_size_255: 3101583
     port.tx_size_511: 1378974
     port.tx_size_1023: 1691006
     port.tx_size_1522: 21230571
     port.tx_size_big: 0
     port.rx_undersize: 0
     port.rx_fragments: 0
     port.rx_oversize: 0
     port.rx_jabber: 0
     port.VF_admin_queue_requests: 0
     port.arq_overflows: 0
     port.tx_hwtstamp_timeouts: 0
     port.rx_hwtstamp_cleared: 0
     port.tx_hwtstamp_skipped: 0
     port.fdir_flush_cnt: 257
     port.fdir_atr_match: 45473515
     port.fdir_atr_tunnel_match: 0
     port.fdir_atr_status: 1
     port.fdir_sb_match: 0
     port.fdir_sb_status: 1
     port.tx_lpi_status: 0
     port.rx_lpi_status: 0
     port.tx_lpi_count: 0
     port.rx_lpi_count: 0
     port.tx_priority_0_xon_tx: 0
     port.tx_priority_0_xoff_tx: 0
     port.rx_priority_0_xon_rx: 0
     port.rx_priority_0_xoff_rx: 0
     port.rx_priority_0_xon_2_xoff: 0
     port.tx_priority_1_xon_tx: 0
     port.tx_priority_1_xoff_tx: 0
     port.rx_priority_1_xon_rx: 0
     port.rx_priority_1_xoff_rx: 0
     port.rx_priority_1_xon_2_xoff: 0
     port.tx_priority_2_xon_tx: 0
     port.tx_priority_2_xoff_tx: 0
     port.rx_priority_2_xon_rx: 0
     port.rx_priority_2_xoff_rx: 0
     port.rx_priority_2_xon_2_xoff: 0
     port.tx_priority_3_xon_tx: 0
     port.tx_priority_3_xoff_tx: 0
     port.rx_priority_3_xon_rx: 0
     port.rx_priority_3_xoff_rx: 0
     port.rx_priority_3_xon_2_xoff: 0
     port.tx_priority_4_xon_tx: 0
     port.tx_priority_4_xoff_tx: 0
     port.rx_priority_4_xon_rx: 0
     port.rx_priority_4_xoff_rx: 0
     port.rx_priority_4_xon_2_xoff: 0
     port.tx_priority_5_xon_tx: 0
     port.tx_priority_5_xoff_tx: 0
     port.rx_priority_5_xon_rx: 0
     port.rx_priority_5_xoff_rx: 0
     port.rx_priority_5_xon_2_xoff: 0
     port.tx_priority_6_xon_tx: 0
     port.tx_priority_6_xoff_tx: 0
     port.rx_priority_6_xon_rx: 0
     port.rx_priority_6_xoff_rx: 0
     port.rx_priority_6_xon_2_xoff: 0
     port.tx_priority_7_xon_tx: 0
     port.tx_priority_7_xoff_tx: 0
     port.rx_priority_7_xon_rx: 0
     port.rx_priority_7_xoff_rx: 0
     port.rx_priority_7_xon_2_xoff: 0
root@pve:~#
 
Yeah, that looks pretty error-free.

Did you check how much traffic there is on the interface? You can also try to capture some packets with tcpdump (tcpdump -i eno1 -w capture.pcap) and look at the capture in Wireshark; it highlights bad things in red or black (see "View/Coloring Rules" in Wireshark for what exactly gets which color).
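
If the capture gets big because of the regular traffic, you can also limit it to the iperf3 connection (assuming the default port 5201; adjust if you use another one):

Code:
tcpdump -i eno1 -w capture.pcap port 5201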
 
There is not much traffic; the LXC container hosts an ISPConfig setup with approx. 20-30 customers, so a little bit of HTTP traffic and a little bit of mail traffic.

I am working on your Wireshark idea.
 
So, this is the first time I have used Wireshark, so my analysis is not very professional:

I did an iperf3 test in both directions while capturing with the tcpdump command above.

Together with the daily traffic it captured 229,342 packets in total.

In total, 4,488 packets are marked black (Info: DUP ACK, Fast Retransmission, Out-Of-Order or Retransmission) and perhaps 200 packets are red (mainly [RST] packets for an SSH connection); of the roughly 16,000 packets belonging to the iperf3 transfer to my local PC, 98 were flagged. I think this ratio doesn't sound bad, or am I wrong?
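
As a cross-check, the same packet classes can also be counted directly from the capture with tshark (part of the Wireshark package); the filters below are the standard Wireshark analysis flags:

Code:
tshark -r capture.pcap -Y 'tcp.analysis.retransmission' | wc -l
tshark -r capture.pcap -Y 'tcp.analysis.duplicate_ack'  | wc -l
tshark -r capture.pcap -Y 'tcp.analysis.out_of_order'   | wc -l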
 
Did the hoster maybe throttle your download speed because you only have a limited monthly quota and you exceeded it by downloading the backups? I often see such things when looking at hoster products.
 
Did the hoster maybe throttle your download speed because you only have a limited monthly quota and you exceeded it by downloading the backups?
No, I already asked him. He said there is definitely no throttling.

I asked him again to have a look for possible routing problems, but I have no answer yet; I think it will take a little more time.
 
This is an overview during an scp upload, taken with iptraf-ng:

Code:
 iptraf-ng 1.2.1
┌ Iface ────────────────── Total ────────── IPv4 ───────── IPv6 ───────── NonIP ────── BadIP ─────────── Activity ────────────┐
│ eno1                     45724           45524            200               0            0          15373.73 kbps           │
│ fwbr101i0                   10              10              0               0            0              0.00 kbps           │
│ fwbr102i0                   10              10              0               0            0              0.00 kbps           │
│ fwln101i0                22033           21833            200               0            0          10530.37 kbps           │
│ fwln102i0                   23              23              0               0            0              0.06 kbps           │
│ fwpr101p0                22033           21833            200               0            0          10490.74 kbps           │
│ fwpr102p0                   23              23              0               0            0              0.06 kbps           │
│ lo                         230             230              0               0            0              8.34 kbps           │
│ tap101i0                 22034           21834            200               0            0          10530.37 kbps           │
│ tap102i0                     0               0              0               0            0              0.00 kbps           │
│ veth100i0                 3890            3890              0               0            0            104.83 kbps           │
│ vmbr0                    19806           19806              0               0            0           4757.40 kbps           │

While running a speedtest (it doesn't matter whether it runs on the PVE host, a Linux VM or a Windows 10 VM; I now have the credentials, so I can test there too), it shows a completely different picture:

Code:
 iptraf-ng 1.2.1
┌ Iface ────────────────── Total ────────── IPv4 ───────── IPv6 ───────── NonIP ────── BadIP ─────────── Activity ────────────┐
│ eno1                    306700          306055            645               0            0         729705.12 kbps           │
│ fwbr101i0                  232              51            181               0            0              0.53 kbps           │
│ fwbr102i0                  232              51            181               0            0              0.53 kbps           │
│ fwln101i0               181497          180852            645               0            0         724782.66 kbps           │
│ fwln102i0                  295             114            181               0            0              0.59 kbps           │
│ fwpr101p0               181498          180853            645               0            0         724782.16 kbps           │
│ fwpr102p0                  295             114            181               0            0              0.59 kbps           │
│ lo                        1178            1178              0               0            0             15.90 kbps           │
│ tap101i0                181452          180807            645               0            0         724782.66 kbps           │
│ tap102i0                   181               0            181               0            0              0.00 kbps           │
│ veth100i0                22377           22196            181               0            0             67.92 kbps           │
│ vmbr0                   103201          103020            181               0            0           4838.35 kbps           │

So, if I am right, a hardware defect is not plausible. Are there any other things besides a routing problem that could lead to such behavior?
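
(For reference, the views above are the general interface statistics screen of iptraf-ng, left running during the transfers; it was started with something like:)

Code:
iptraf-ng -g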
 
Two more things:
* Is this a dedicated server or some sort of VM?
* When you say "Switching the firewall on and off has no effect", do you mean: you turned the FW off, tested the speed and turned it back on again?
 
Two more things:
* Is this a dedicated server or some sort of VM?
* When you say "Switching the firewall on and off has no effect", do you mean: you turned the FW off, tested the speed and turned it back on again?

* Yes, it is a dedicated server:

(screenshot attachment: PVE-Server.png)

(the uptime should not be the problem, since the issue has existed for months)

* I ran the tests, then stopped the firewall ("pve-firewall stop") and ran the tests again with the same result, and then started it again ("pve-firewall start"), once more with the same result. I don't have any special rules in the firewall; I just blocked the incoming SSH port 22 for everyone except my home IP address and a few others. And that would be an "it works or it doesn't" situation anyway.
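
For completeness, the firewall toggling on the host boils down to roughly this (the datacenter firewall can also be switched in the GUI):

Code:
pve-firewall stop       # disable the host firewall
# ... repeat the scp/iperf3 tests ...
pve-firewall start      # enable it again
pve-firewall status     # confirm it is running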
 
I did the tests between the server and my home, and also to another location where I have an accessible Linux machine, with the same result. I didn't know that there are public iperf3 servers. I had a look at your link; most of them gave me an

Code:
iperf3: error - unable to send control message: Bad file descriptor

But with one of them it worked and gave me the following results: once sending from my server to the iperf3 server, once the other way round.

Code:
root@pve:~# iperf3 -p 5002 -c speedtest.serverius.net
Connecting to host speedtest.serverius.net, port 5002
[  5] local 91.198.238.163 port 43400 connected to 178.21.16.76 port 5002
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  2.01 MBytes  16.8 Mbits/sec   10   24.0 KBytes      
[  5]   1.00-2.00   sec  1.49 MBytes  12.5 Mbits/sec   11   14.1 KBytes      
[  5]   2.00-3.00   sec  1.37 MBytes  11.5 Mbits/sec   10   17.0 KBytes      
[  5]   3.00-4.00   sec  1.86 MBytes  15.6 Mbits/sec    7   21.2 KBytes      
[  5]   4.00-5.00   sec  1.86 MBytes  15.6 Mbits/sec    8   19.8 KBytes      
[  5]   5.00-6.00   sec  1.99 MBytes  16.7 Mbits/sec    6   22.6 KBytes      
[  5]   6.00-7.00   sec  1.49 MBytes  12.5 Mbits/sec   10   11.3 KBytes      
[  5]   7.00-8.00   sec  1.24 MBytes  10.4 Mbits/sec    9   17.0 KBytes      
[  5]   8.00-9.00   sec  1.62 MBytes  13.6 Mbits/sec    9   21.2 KBytes      
[  5]   9.00-10.00  sec  1.74 MBytes  14.6 Mbits/sec    9   11.3 KBytes      
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  16.7 MBytes  14.0 Mbits/sec   89             sender
[  5]   0.00-10.00  sec  16.6 MBytes  13.9 Mbits/sec                  receiver

iperf Done.
root@pve:~# iperf3 -R -p 5002 -c speedtest.serverius.net
Connecting to host speedtest.serverius.net, port 5002
Reverse mode, remote host speedtest.serverius.net is sending
[  5] local 91.198.238.163 port 43404 connected to 178.21.16.76 port 5002
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   106 MBytes   889 Mbits/sec                
[  5]   1.00-2.00   sec   112 MBytes   941 Mbits/sec                
[  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec                
[  5]   3.00-4.00   sec   112 MBytes   941 Mbits/sec                
[  5]   4.00-5.00   sec   112 MBytes   941 Mbits/sec                
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec                
[  5]   6.00-7.00   sec   112 MBytes   941 Mbits/sec                
[  5]   7.00-8.00   sec   112 MBytes   942 Mbits/sec                
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec                
[  5]   9.00-10.00  sec   112 MBytes   942 Mbits/sec                
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec    2             sender
[  5]   0.00-10.00  sec  1.09 GBytes   936 Mbits/sec                  receiver

The upload from the server to this iperf3 server is approx. double the rate I get to my home (I tested that again: still approx. 800 KB/s), but at 14 Mbit/s (roughly 1.75 MB/s) it is still miles away from the real download speed and the upload speed that should be possible.
 