LAN network speeds are fine, but Proxmox and the VMs have slow internet speeds.

So after all kinds of tests, updates, etc., I just switched ports on my switch and it worked. I then changed back to the original port and now the host is transferring at full gigabit speed. I don't get it: brand new cable, and the link had been reading gigabit on both sides for weeks...
 
Proxmox 7.3

I'm having a very similar (if not the same) problem.

On the host, using the physical interface, iperf connected to another external node delivers 1 Gbps (942 Mbps). Correct.

In the virtual machine, using a VirtIO interface connected to vmbr0, iperf delivers 400-500 Mbps, even though the traffic travels through the same physical interface on which the host delivers 1 Gbps. Processor utilization fluctuates around 10-20% while iperf is running.

Code:
root@ti-01:/home/adriano# iperf -c 172.28.1.21
------------------------------------------------------------
Client connecting to 172.28.1.21, TCP port 5001
TCP window size:  978 KByte (default)
------------------------------------------------------------
[  3] local 172.28.1.203 port 35522 connected with 172.28.1.21 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   942 Mbits/sec
root@ti-01:/home/adriano# iperf -c 192.168.2.101
------------------------------------------------------------
Client connecting to 192.168.2.101, TCP port 5001
TCP window size:  663 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.203 port 57428 connected with 192.168.2.101 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   517 MBytes   433 Mbits/sec
root@ti-01:/home/adriano# iperf -c 192.168.2.101
------------------------------------------------------------
Client connecting to 192.168.2.101, TCP port 5001
TCP window size:  654 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.203 port 41046 connected with 192.168.2.101 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   509 MBytes   427 Mbits/sec
root@ti-01:/home/adriano#

The physical processor is an Intel Xeon X5680 (there are actually two processors on the motherboard). It is an HP DL160 G6 server with 48 GB of RAM.

I've already tried changing the virtual processor model from the default KVM type to "host", and I also tried specifying the processor model manually, but nothing changed in performance.
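For reference, this CPU type change can also be made from the Proxmox CLI; a minimal sketch, assuming the VM ID is 100 (a placeholder, adjust to your own):

Code:
# set the virtual CPU model to "host" (100 is a placeholder VM ID)
qm set 100 --cpu host
# verify the setting; the VM must be fully stopped and started again for it to take effect
qm config 100 | grep ^cpu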

I also tried with another host here, an IBM x3550 M2 with two Intel X5670 processors. Identical setup.

The experience was the same: full performance when iperf points to the host's physical interface, but performance drops by half when iperf is directed to the virtual machine's address.

Only one virtual machine is currently running.

What is wrong?
 
To all who experience guest performance trouble: you need to provide relevant data if you want any hope of a diagnosis, namely:

1. Make and model of the NIC. If you're using a self-provided driver, note that as well.
2. /etc/network/interfaces
3. any sysctl tuning (or lack thereof)
4. guest NIC type, OS version, and guest driver version (especially if virtio)

The answer can usually be found in the above.
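For example, something along these lines collects most of that information; eno1 and eth0 are placeholder interface names, adjust to your own:

Code:
# 1. NIC model and driver/firmware in use on the host (eno1 is a placeholder)
lspci | grep -i ethernet
ethtool -i eno1
# 2. host network configuration
cat /etc/network/interfaces
# 3. sysctl tuning, if any
sysctl -a | grep -E 'net.ipv4.tcp|net.core'
# 4. inside a Linux guest: NIC driver and version (eth0 is a placeholder)
ethtool -i eth0
uname -a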
 
Same problem here. My download speed is limited to 50-100 Mbps on my Proxmox host and in the VMs.

I tried installing Ubuntu on the same server and the speed is around 1000 Mbps (so it's not hardware related).

- ethtool tells me that the negotiated speed for the bridge is 2500 Mbps (checked as shown below).
- wget to download a test file gives me 2.18 MB/s
- speedtest result: 50-100 Mbps
- I updated the BIOS
- Tried on another kernel
- Deactivated IPv6
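For reference, the negotiated link speed and the offload settings can be checked like this (enp100s0, from the config below, is the physical interface behind the bridge):

Code:
# negotiated speed/duplex of the physical NIC
ethtool enp100s0 | grep -E 'Speed|Duplex'
# offload features currently enabled on the interface
ethtool -k enp100s0 | grep -E 'segmentation|checksum'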

I use an Intel I225-V controller:
Code:
64:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 03)

/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface enp100s0 inet manual
        post-up ethtool -K enp100s0 tx-checksum-ipv6 off

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.100/24
        gateway 192.168.1.254
        bridge-ports enp100s0
        bridge-stp off
        bridge-fd 0

Code:
root@pve:~# sysctl -a | grep -E "net.ipv4.tcp|net.ipv4.tcp_syn|net.ipv4.tcp_fin_timeout|net.ipv4.tcp_tw_reuse"
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_allowed_congestion_control = reno cubic
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_autocorking = 1
net.ipv4.tcp_available_congestion_control = reno cubic
net.ipv4.tcp_available_ulp = espintcp mptcp tls
net.ipv4.tcp_base_mss = 1024
net.ipv4.tcp_challenge_ack_limit = 1000
net.ipv4.tcp_comp_sack_delay_ns = 1000000
net.ipv4.tcp_comp_sack_nr = 44
net.ipv4.tcp_comp_sack_slack_ns = 100000
net.ipv4.tcp_congestion_control = cubic
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_early_demux = 1
net.ipv4.tcp_early_retrans = 3
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_ecn_fallback = 1
net.ipv4.tcp_fack = 0
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_fastopen_blackhole_timeout_sec = 0
net.ipv4.tcp_fastopen_key = 00000000-00000000-00000000-00000000
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_frto = 2
net.ipv4.tcp_fwmark_accept = 0
net.ipv4.tcp_invalid_ratelimit = 500
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_l3mdev_accept = 0
net.ipv4.tcp_limit_output_bytes = 1048576
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_max_reordering = 300
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_max_tw_buckets = 65536
net.ipv4.tcp_mem = 184788       246387  369576
net.ipv4.tcp_migrate_req = 0
net.ipv4.tcp_min_rtt_wlen = 300
net.ipv4.tcp_min_snd_mss = 48
net.ipv4.tcp_min_tso_segs = 2
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_mtu_probe_floor = 48
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_no_ssthresh_metrics_save = 1
net.ipv4.tcp_notsent_lowat = 4294967295
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_pacing_ca_ratio = 120
net.ipv4.tcp_pacing_ss_ratio = 200
net.ipv4.tcp_probe_interval = 600
net.ipv4.tcp_probe_threshold = 8
net.ipv4.tcp_recovery = 1
net.ipv4.tcp_reflect_tos = 0
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_rmem = 4096        131072  6291456
net.ipv4.tcp_rx_skb_cache = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_syn_retries = 6
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_thin_linear_timeouts = 0
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_tw_reuse = 2
net.ipv4.tcp_tx_skb_cache = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_workaround_signed_windows = 0

Also, I'm using VirtIO for all my VMs, but the problem remains on the Proxmox host itself, so I don't think it's related.
Proxmox v7
BIOS Updated
 
It's been over a year, yet there is no resolution. I wonder if staff are looking at this and working towards a solution.
 
I had this same issue; I believe it was caused by something like this (I have the Z690):
https://www.reddit.com/r/intel/comments/hqtu3h/psa_do_not_buy_any_z490_boards_with_the_intel/

I fiddled with the NIC settings as suggested in that thread, but it was turning into a time-suck. I ultimately resolved it by plugging in a USB-C Ethernet adapter. I'll probably buy a PCI card eventually.

I don't love it as a solution but I had already wasted enough time on it.
 
*** Solution ***
* In my case the solution was to change the NIC from re0 (Realtek) to em0 (Intel):
* 1. I installed an Intel-chipset card in a PCIe x1 slot.
* 2. Added PCI passthrough for the Intel card (a CLI sketch of steps 2 and 3 follows after this list).
* 3. Removed PCI passthrough for the onboard RTL chipset (WAN).
* 4. Started OPNsense and activated the Intel card as WAN (em0).
* 5. Additionally, I read that the processor type "host" is better for OPNsense so it can use all CPU instructions (I did this, but I did not compare KVM64 + AES flag against host + AES flag).
* 6. Rebooted the whole PVE host (this was necessary).
* Single-stream speed 500 / multi-stream speed 500. Tip: for unknown reasons I could only run the network card via PCIe passthrough.
* This setup with the RTL NIC was more or less working under PVE 7, so I have no clue what changed. I am happy now with Intel on LAN / WAN.
*************
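A minimal CLI sketch of steps 2 and 3, assuming the OPNsense VM has ID 100 and the Intel card sits at PCI address 0000:02:00.0 (both placeholders; check with qm list and lspci):

Code:
# find the PCI address of the Intel NIC
lspci | grep -i ethernet
# step 2: pass the Intel card through to the OPNsense VM (100 is a placeholder VM ID)
qm set 100 --hostpci0 0000:02:00.0
# step 3: remove the previously passed-through onboard Realtek NIC
# (assumes it was configured as hostpci1; adjust to your actual config)
qm set 100 --delete hostpci1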

Here you can read what the issue was.

Whoever is really interested in solving this case, please let's share information about:
- PVE version
- Router firewall
- Net & NICs

So I'll make a start.
No problem with:
HW: Intel 3rd generation, 2 cores / 2 HT, 8 GB RAM
PVE: 6.x, 7.x
Router FW: OPNsense 22 (as VM)
VM: kvm64 + AES
Net and NICs:
WAN: rtl8169-compatible (PCIe passthrough)
LAN: vmbr0 / e1000 - Intel E1000-compatible chipset bound to the bridge
VLAN1: vmbr1

Internet speed: 250 Mbit single download / 250 Mbit multi download (old line)

Problem with:

HW: Intel 8th generation, 6 cores / no HT, 8 GB RAM
PVE: 8.x
Router FW: OPNsense 23 (as VM)
VM: kvm64 + AES
Net and NICs:
WAN: rtl8169-compatible (PCIe passthrough)
LAN: vmbr0 / e1000 - Intel E1000-compatible chipset bound to the bridge
VLAN1: vmbr1

Issues:
wget from HW client (Ubuntu 23.04): single HTTP download <= 200 Mbit/s; speedtest: 500 Mbit/s multi download
wget from LXC / VM client / PVE host: single download <= 200 Mbit/s; speedtest from LXC / VM / HW client / PVE: 40 Mbit/s, 60-100 Mbit/s

iperf3 HW client <-> PVE = 1 Gbit/s
iperf3 HW client <-> OPNsense = 1 Gbit/s
iperf3 LXC client <-> OPNsense = 2.1 Gbit/s
iperf3 LXC client <-> PVE = 45 Gbit/s
iperf3 PVE <-> OPNsense = 2.1 Gbit/s
iperf3 HW client <-> LXC = 1 Gbit/s

iperf3 -R from OPNsense <-> various public iperf3 servers: 500 Mbit/s

This German speed test: 80 Mbit - https://www.telekom.de/netz/speedtest
This speed test, single stream: 40 Mbit - https://www.speedtest.net
The same speed test, multi stream: 500 Mbit

Last test: random downloads (Arch, Debian, Ubuntu ISOs) from the HW client and from PVE: between 40 Mbit and 200 Mbit
 
Hello, same problem here with Proxmox 8: the node and VMs have very low speeds. I reinstalled the server with Debian 11 and the speed returned to normal.
 
Same problem here on Proxmox 7 and 8. No problems with network speed on an Ubuntu install on the same device.

1. Make and model of the NIC. If you're using a self-provided driver, note that as well.
Intel I219-V NIC, default driver

2. /etc/network/interfaces

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.250/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

iface wlp0s20f3 inet manual

3. any sysctl tuning (or lack thereof)
NO

4. guest NIC type, OS version, and guest driver version (especially if virtio)
Slow download speeds on the host, ~50 Mbps. Upload speed is good, ~500 Mbps. iperf is good, ~1000 Mbps.
 
There appear to be two solutions to this problem (on the I219-V at least): either disable ASPM in the BIOS for the device, or add "intel_idle.max_cstate=1" to GRUB_CMDLINE_LINUX_DEFAULT in your GRUB config, then update GRUB and reboot. I do not know whether the latter solution has any unexpected side effects.
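For the second option, a minimal sketch of the GRUB change (the "quiet" parameter is just an example of what may already be in the line):

Code:
# in /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1"
# then apply the change and reboot
update-grub
reboot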

-Tim
 