Slow Internet Performance

gregg098

I'm having an issue on two different Proxmox machines where my internet speeds are very slow in containers and KVMs. My internet is 250/10 through Comcast. Both machines are connected directly to my switch, which is connected directly to my EdgeRouter 4. This switch is where all of my other non-Proxmox devices are connected.

The first machine is a Dell T30 with 40 GB of RAM, with the host and VMs installed on an SSD (LVM) and some bulk storage as ZFS RAID on a couple of 8 TB hard drives. The NIC is gigabit. I have 1 Ubuntu Server container, 1 Ubuntu Server KVM w/ Docker, and 1 Windows 10 KVM. The KVMs use VirtIO SCSI. The Windows KVM is optimized per the Proxmox guides.

The second machine is an Intel NUC (Celeron) with 3 or 4 Ubuntu containers and 1 Ubuntu Server KVM w/ Docker. Gigabit NIC. It has 8GB of RAM and a 250 GB SSD on it. KVM is Virtio SCSI.

All other computers, phones, etc. on my network get ~275 Mbps down and ~10 Mbps up.

On both Proxmox machines, I see mixed performance.

On the Proxmox hosts, via speedtest-cli, I get full upload and download. In every single Linux KVM or container, I get full download but max out at ~3.15 Mbps upload. It's almost exactly the same value on both machines, which makes it seem like something is limiting it. In the Windows 10 KVM, I still get full download speed, but I max out around 9 Mbps upload.
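For reference, this is roughly how I'm testing in each guest (the install method may vary per distro; pip works too):

Code:
# Debian/Ubuntu guests; the speedtest-cli package is also installable via pip
apt-get install -y speedtest-cli
# prints ping, download, and upload against an auto-selected server
speedtest-cli --simple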

I have read through a bunch of similar threads on here and tried a number of things including:
- Setting CPU to host
- More resources
- Tried different network card types and multiqueue (rough command in the sketch after this list)
- Made sure the firewall is completely disabled
- Disconnected everything else from the network and shutdown all other containers/KVMs to do a test
- etc.
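For the multiqueue attempt, this is roughly what I ran on the host (100 is just an example VM ID):

Code:
# enable 4 virtio queues on the VM's first NIC
# (re-specify the NIC's existing MAC if you want to keep it;
# the guest needs a reboot or NIC re-add for the change to apply)
qm set 100 -net0 virtio,bridge=vmbr0,queues=4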

As a side note, I tried gigabit service for a while a few months ago, but did not do thorough testing. On my regular computer, I got ~950 Mbps down and ~35 Mbps up, which is where it should be. On my Proxmox machine, the only guest I tested at the time was my Windows 10 KVM, and I got ~450 Mbps down and ~35 Mbps up. I did not test from the other KVMs and containers. I would like to go back to gigabit, but only if I can get my machines to take advantage of the bandwidth.

Any other thoughts here?
 
Hello there,

You should run the test at the modem (i.e. at the edge); 10 Mbps doesn't leave enough room for a lot of upstream traffic. Also, ISPs tend to traffic-shape based on routes, protocols, ports, etc., so use Google, Netflix, and some third-party speed test servers to get a more accurate reading. For internal networking, use iperf in TCP and UDP mode to test your internal throughput.
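For example, something like this, assuming iperf3 on both ends (the address is a placeholder; plain iperf works similarly):

Code:
# server side, on one machine
iperf3 -s
# client side: TCP test, then the reverse direction
iperf3 -c 192.168.20.30
iperf3 -c 192.168.20.30 -R
# UDP test at a fixed offered rate
iperf3 -c 192.168.20.30 -u -b 500M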

Cheers
 

I'll try out iperf, but as far as internet performance goes, there is almost no other network activity when I test with speedtest-cli. I get a very consistent 10 Mbps up from other computers and from both Proxmox hosts every time (+/- 0.5 Mbps). Every single test I run inside a container or KVM on either machine results in almost exactly 3.15 Mbps up, with the exception of my Windows 10 KVM, which gets just about the full 10 Mbps.

Is there anything else that would cut my upload speed to a third?
 
If you already verified that, do the iperf test between a CT/VM and other machines not on the same host, and report back the results.

Cheers
 
Run the test from the hypervisor as well.

All tests were between the two different Proxmox machines. I tried multiple combinations of KVMs and CTs. Results were very similar, so I'm giving the average result for each combination.

KVM to KVM: ~745 Mbps both ways
CT to KVM: ~940 Mbps both ways
Host to Host: ~940 Mbps both ways

After each test, I ran speedtest-cli and got the same results as in the OP (~250-275 Mbps down and almost exactly 3.15 Mbps up in the CTs and KVMs, and 8-10 Mbps up on each host).

CPU usage, IO, etc. all look fine when I run speed tests. I have no idea what's bottlenecking this on two very different machines. I can verify no other upload activity was taking place. I ran speed tests from my Chromebook on wifi around the same times and always got ~10 Mbps up.

Any other thoughts?
 
All tests were between the two different Proxmox machines. I tried multiple combinations of KVMs and CTs. Results were very similar, so I'm giving the average result for each combination.

With iperf?

I have no idea what's bottlenecking this

The problem lies elsewhere. I like Wireshark: monitor traffic, both UDP and TCP (TCP is slower than UDP because of the handshake), and look for irregularities while testing. By the way, speedtest-cli could simply be unreliable and you're trusting its result. How about putting a file on a server and downloading it over the internet?
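For example, something along these lines (the upload URL is a placeholder for any HTTP server you control; the capture can be opened in Wireshark afterwards):

Code:
# capture the test traffic on the bridge for later inspection
tcpdump -i vmbr0 -w speedtest.pcap &
# make a 50 MB test file and upload it, printing the achieved
# upload rate in bytes per second
dd if=/dev/zero of=testfile bs=1M count=50
curl -T testfile http://example.com/upload -o /dev/null -w '%{speed_upload}\n'
kill %1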
 
Run the test from the hypervisor as well.
On my one Proxmox machine, the one KVM is Ubuntu Server 16.04, which I use to run a few Docker containers. With speedtest-cli, I got what I posted before, ~3.15 Mbps upload. One of the Docker containers has a Private Internet Access (VPN) connection. I did a docker exec -it into that container, verified the public IP address was that of my VPN, installed speedtest-cli, and ran it. I got 8.3 Mbps up this time.

I immediately ran it on the same machine and got 3.18 Mbps up. Now I'm extra confused.
 
I think the problem is the age of the Ubuntu 16.04 kernel; can you try a more modern kernel? Did you assign VirtIO NICs to the VM? If so, replace them with Intel (e1000) ones. Does that make a difference?
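E.g., on the host (100 is just an example VM ID):

Code:
# switch the VM's first NIC from virtio to the emulated Intel e1000
qm set 100 -net0 e1000,bridge=vmbr0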
 
I'll give that a shot. I did try different types of NICs and didn't see a difference.
 
This is anecdotal; please paste top and uptime output from the hypervisor while you run the test. After some thought, the problem could just be the route. You said EdgeRouter 4?
 
I think the problem is the age of the Ubuntu 16.04 kernel; can you try a more modern kernel? Did you assign VirtIO NICs to the VM? If so, replace them with Intel (e1000) ones. Does that make a difference?

I went into my Plex container based on Ubuntu 18.04 and got 3.16 Mbps upload.
I downloaded the Ubuntu 19.04 container template, spun up a server, updated, ran speedtest and got 11+ Mbps.
Then I tried creating a new Ubuntu 16.04 container. I didn't even update anything; I just logged in, installed speedtest-cli, and got the same result (~11 Mbps).
Then I logged into random existing KVMs and CTs on each Proxmox host and got the 3.15ish result again.

So there is something weird with my existing containers and KVMs, but I don't know what.
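In case it helps anyone spot the difference, I'm planning to compare roughly these settings between an old (slow) guest and a new (fast) one (the interface name may differ per guest):

Code:
# run inside both a slow and a fast guest and diff the output
sysctl net.ipv4.tcp_congestion_control net.core.wmem_max
tc qdisc show
ethtool -k eth0 | grep -E 'segmentation|offload'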
 
Also, please post the output of:

Code:
ip l

Code:
root@t30:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether d8:9e:f3:32:b1:11 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d8:9e:f3:32:b1:11 brd ff:ff:ff:ff:ff:ff
inet 192.168.20.30/24 brd 192.168.20.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::da9e:f3ff:fe32:b111/64 scope link
valid_lft forever preferred_lft forever
6: veth102i0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:3e:a3:b8:0b:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: veth106i0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:c5:62:57:c8:bc brd ff:ff:ff:ff:ff:ff link-netnsid 1
10: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UNKNOWN group default qlen 1000
link/ether 9a:bf:55:27:d4:e3 brd ff:ff:ff:ff:ff:ff
11: tap300i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UNKNOWN group default qlen 1000
link/ether be:39:1c:26:da:e8 brd ff:ff:ff:ff:ff:ff

root@t30:~# ip r
default via 192.168.20.1 dev vmbr0 onlink
192.168.20.0/24 dev vmbr0 proto kernel scope link src 192.168.20.30


root@t30:~# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether d8:9e:f3:32:b1:11 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether d8:9e:f3:32:b1:11 brd ff:ff:ff:ff:ff:ff
6: veth102i0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:3e:a3:b8:0b:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: veth106i0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether fe:c5:62:57:c8:bc brd ff:ff:ff:ff:ff:ff link-netnsid 1
10: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 9a:bf:55:27:d4:e3 brd ff:ff:ff:ff:ff:ff
11: tap300i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether be:39:1c:26:da:e8 brd ff:ff:ff:ff:ff:ff
 
This is anecdotal; please paste top and uptime output from the hypervisor while you run the test. After some thought, the problem could just be the route. You said EdgeRouter 4?
Yes. Motorola MB8600 -> EdgeRouter 4 -> TP-Link TL-SG1016DE. My whole network is connected through that switch.

Here is the output:

Code:
root@t30:~# top
top - 19:50:49 up 10:20, 2 users, load average: 0.36, 0.37, 0.30
Tasks: 334 total, 1 running, 233 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.9 us, 1.3 sy, 0.0 ni, 95.6 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 41061908 total, 18851976 free, 20959304 used, 1250628 buff/cache
KiB Swap: 8388604 total, 8388604 free, 0 used. 19569008 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2126 root 20 0 10.957g 0.010t 9176 S 10.6 25.6 70:55.88 kvm
2877 root 20 0 12.906g 6.522g 9168 S 8.6 16.7 87:35.81 kvm
8 root 20 0 0 0 0 I 0.3 0.0 0:05.92 rcu_sched
2918 root 20 0 2664680 35316 5744 S 0.3 0.1 0:19.75 ganesha.nfsd
3986 110 35 15 1791224 51996 10472 S 0.3 0.1 0:53.07 Plex Script Hos
10859 root 20 0 0 0 0 S 0.3 0.0 0:02.52 vhost-2126
10862 root 20 0 0 0 0 S 0.3 0.0 0:02.32 vhost-2126
10981 root 20 0 0 0 0 S 0.3 0.0 0:13.67 vhost-2877
21913 www-data 20 0 577564 125228 12624 S 0.3 0.3 0:00.98 pveproxy worker
1 root 20 0 57532 7316 5320 S 0.0 0.0 0:02.79 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.33 kthreadd
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
7 root 20 0 0 0 0 S 0.0 0.0 0:00.53 ksoftirqd/0
9 root 20 0 0 0 0 I 0.0 0.0 0:00.00 rcu_bh
10 root rt 0 0 0 0 S 0.0 0.0 0:00.03 migration/0
11 root rt 0 0 0 0 S 0.0 0.0 0:00.08 watchdog/0
12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/1
14 root rt 0 0 0 0 S 0.0 0.0 0:00.08 watchdog/1
15 root rt 0 0 0 0 S 0.0 0.0 0:00.03 migration/1
16 root 20 0 0 0 0 S 0.0 0.0 0:00.56 ksoftirqd/1
18 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/1:0H
19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/2
20 root rt 0 0 0 0 S 0.0 0.0 0:00.08 watchdog/2
21 root rt 0 0 0 0 S 0.0 0.0 0:00.02 migration/2
22 root 20 0 0 0 0 S 0.0 0.0 0:02.95 ksoftirqd/2
24 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/2:0H
25 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/3
26 root rt 0 0 0 0 S 0.0 0.0 0:00.08 watchdog/3
27 root rt 0 0 0 0 S 0.0 0.0 0:00.02 migration/3
root@t30:~# uptime
19:50:51 up 10:20, 2 users, load average: 0.33, 0.36, 0.30
 
You might have hit a bug, but it needs to be reproducible with steps.
I don't know what the steps would be. On one machine, I reinstalled Proxmox a few months back and restored my CTs and KVMs. On the other machine, it's a fresh install from maybe two years ago. Everything is kept updated on at least a monthly basis via dist-upgrade.
 
You might have hit a bug, but it needs to be reproducible with steps.
I also just spun up an Ubuntu 16.04 container on my second Proxmox machine, logged in, installed speedtest-cli, ran it, and got ~11 Mbps. I didn't update anything.

So two separate machines with very different hardware are seeing this behavior. New containers work fine; old ones don't.
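Next I'll try diffing the config of a slow guest against a fresh one; a rough sketch (102 and 105 are just example container IDs):

Code:
# compare an old container's config against a freshly created one
diff <(pct config 102) <(pct config 105)
# same idea for a VM
qm config 100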
 
