Severe Download Speed Limitation on Newly Provisioned VMs and CTs (Proxmox 8.4.1)

lynix

I’m currently running Proxmox VE 8.4.1 and recently started experiencing a major issue with network throughput—though I’m not sure exactly when it began.


Problem Summary:


  • All newly provisioned VMs and containers (CTs) see download speeds below 1 Mbps, regardless of OS or template used.
  • Upload speeds remain in the 30–100 Mbps range.
  • VM-to-VM traffic (on the same subnet and host) performs normally — reaching 15 Gbps as expected.
  • Existing/older VMs perform normally — I can stop and restart them, and they still show 10 Gbps+ in both directions via speedtest-cli.
  • iperf3 tests to my Arista 7280S switches (which I know have weak CPUs) show 100 Mbps, which is expected and consistent. (The rough test commands are sketched after this list.)
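
For context, those numbers come from quick tests along these lines, run from inside a guest (the 10.0.0.50 target is just a placeholder for a fast host on the same LAN):

Code:
# raw TCP throughput against another LAN host
iperf3 -c 10.0.0.50          # guest -> remote (upload direction)
iperf3 -c 10.0.0.50 -R       # remote -> guest (download direction)

# internet throughput
speedtest-cli --simple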

Network Overview:


  • I’m using Intel E810-C QSFP+ 100G NICs, bonded via cross-connected MLAG.
  • No tc rate limiting in place.
  • No shaping or firewalling beyond basic iptables default policies.
  • Bonding is LACP, and interfaces report clean links with expected negotiated speeds (checked roughly as sketched after this list).
  • Hypervisors and other networking infrastructure appear unaffected.
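
For what it’s worth, the “clean links” claim above is based on checks roughly like these (bond0 and the member interface name are assumptions; substitute your own):

Code:
# LACP state, aggregator IDs and per-slave link status
cat /proc/net/bonding/bond0

# negotiated speed/duplex on each bond member (enp1s0f0 is a placeholder)
ethtool enp1s0f0 | grep -E 'Speed|Duplex|Link detected'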

Observations:


  • The problem appears to affect only new VMs/CTs provisioned after a certain point.
  • I can move these affected VMs between nodes or storage — the issue persists.
  • I’ve verified that:
    • CPU pinning is not involved
    • The same templates used to work previously
    • No custom QoS, tc, or traffic control is being applied (checked along the lines sketched below)
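
Roughly, those checks looked like this (the VMID and bridge name are placeholders):

Code:
# a Proxmox-level cap would show up as rate=<MB/s> on the net line
qm config <vmid> | grep ^net

# qdiscs/classes on the guest's tap device and on the bridge
tc qdisc show dev tap<vmid>i0
tc class show dev tap<vmid>i0
tc qdisc show dev vmbr1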

Request:


Is there any known issue with recent Proxmox 8.4.x updates that could cause behavior like this? Or are there any configuration files or debug steps you’d recommend to trace what might be capping downstream throughput only on new guests?


Any insights would be appreciated.


Thanks,
Chris Ecklesdafer
 
Thanks @SteveITS — appreciate the suggestion.

We’ve tested with multiple VMs and containers using different IP addresses, VLANs, and nodes. The issue persists:

Download speed is capped around 1–2 Mbps, upload remains 200+ Mbps.

What We've Tried:
- Different IPs, MACs, VLAN tags, and bridges (tested on vmbr1, with and without VLAN 15 and 202)
- Disabled vhost in the VM config (vhost=off)
- Different nodes (tested across c1u1s1, c1u1s3)
- Fresh ISO builds, clones, and containers — all exhibit the same download cap
- Manually ran QEMU with the full command line — same performance
- Installed linux-modules-extra inside guest OS
- Verified TAP interfaces: both tap113i0 (good VM) and tap124i0 (bad VM) show state UNKNOWN
- Verified firewall is off (checked nft, iptables, and PVE-level firewall settings)
- No rate limiting or shaping in tc or host qdisc config
- Disabled GSO/GRO/TSO and other offload features — no impact (commands roughly as sketched below)
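
The offload changes were roughly along these lines (tap124i0 is the affected VM's tap device on the host; the guest-side interface name will differ):

Code:
# on the host, against the VM's tap device
ethtool -K tap124i0 gso off gro off tso off

# inside the guest (ens18 is just an example name)
ethtool -K ens18 gso off gro off tso off

# confirm what actually changed
ethtool -k tap124i0 | grep -E 'segmentation|receive-offload'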

Observations:
- VM-to-VM iperf3 (same bridge) gives full performance (10+ Gbps)
- Older VMs (created before recent updates) have full speed
- Cloning a good VM results in poor download performance
- The issue only impacts newly created VMs and containers

Example Speedtest Output:
Idle Latency: 0.72 ms
Download: 1.79 Mbps
Upload: 234.19 Mbps

tc -s qdisc show dev tap124i0:
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 20775496 bytes 186093 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0

ethtool -k tap124i0 (excerpt):
rx-checksumming: off
tx-checksumming: off
generic-segmentation-offload: on
generic-receive-offload: on

Sample qm config for slow VM (qm config 124):
boot: order=scsi0;ide2;net0
cores: 8
cpu: x86-64-v2-AES
memory: 8196
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr1,tag=202
scsi0: Det1-NVMe:vm-124-disk-0,iothread=1,size=32G

Conclusion:
This seems limited to newly created guests (VMs and CTs). Everything else is identical. No clear rate limiting, firewall, or bridge misconfig. Would appreciate any further suggestions — feels like something changed at the hypervisor or kernel level.
 
I've stumbled across this trying to debug a weird network problem.

Running the Ookla speedtest cli program on the host itself, I see:
Code:
$ ./speedtest  

   Speedtest by Ookla

      Server: Spintel - Sydney (id: 58437)
         ISP: Spintel
Idle Latency:     0.45 ms   (jitter: 0.09ms, low: 0.41ms, high: 0.54ms)
    Download:    13.20 Mbps (data used: 9.2 MB)                                                   
                  0.98 ms   (jitter: 14.88ms, low: 0.56ms, high: 213.28ms)
      Upload:    10.55 Mbps (data used: 11.3 MB)                                                   
                  3.91 ms   (jitter: 27.69ms, low: 0.65ms, high: 439.19ms)
 Packet Loss: Not available.
  Result URL: https://www.speedtest.net/result/c/4ee28dfd-4ac9-46a3-9001-c19a9fc87938

When people download files from VMs that this system hosts, they can get 20–50 MB/s. Yet when I go and download an ISO *to* that same VM from a fast host nearby, I get:
Code:
$ wget https://mirror.aarnet.edu.au/pub/fedora/linux/releases/42/KDE/x86_64/iso/Fedora-KDE-Desktop-Live-42-1.1.x86_64.iso
Saving 'Fedora-KDE-Desktop-Live-42-1.1.x86_64.iso'
Fedora-KDE-Desktop-L   0% [>                                             ]    1.54M  227.02KB/s
[Files: 0  Bytes: 0  [0 B/s] Redirects: 0  Todo]

We're kind of scratching our heads as to what has happened.

I have the following kernels installed:
Code:
$ proxmox-boot-tool kernel list
Manually selected kernels:
None.

Automatically selected kernels:
6.14.0-2-pve
6.14.5-1-bpo12-pve
6.8.12-11-pve

I have booted with each of these kernels, and the problems are exactly the same.

It doesn't seem to matter if I'm downloading to the proxmox host itself, or a VM running on it.

My host networking is:
bond0 = eno1 + eno3, which is 1 x 10 Gbit SFP+ port (currently no cable) + 1 x 1 GbE copper port (connected)
vmbr0 = public IP + most other VMs in my /24.

I do use the proxmox firewall, but don't have any rate limits set anywhere.
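
For what it's worth, this is roughly how I checked that nothing is rate limited (a per-NIC cap set in the GUI would show up as rate=... in the guest configs):

Code:
pve-firewall status

# any per-NIC limit ends up as rate=... on the net lines
grep -H 'rate=' /etc/pve/qemu-server/*.conf /etc/pve/lxc/*.conf 2>/dev/null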
 
Hmmmm, not sure. They specifically mention PCI ID 14e4:1752; mine shows:

Code:
$ lspci -nnd ::200
01:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet [14e4:168a] (rev 10)
01:00.1 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet [14e4:168a] (rev 10)
01:00.2 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet [14e4:168a] (rev 10)
01:00.3 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet [14e4:168a] (rev 10)
04:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe [14e4:1657] (rev 01)
04:00.1 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe [14e4:1657] (rev 01)
04:00.2 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe [14e4:1657] (rev 01)
04:00.3 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe [14e4:1657] (rev 01)

However, I tried disabling it anyway:
Code:
#  ethtool --show-offload eno3 | grep generic-receive-offload:
generic-receive-offload: on
#  ethtool --offload  eno3 generic-receive-offload off
#  ethtool --show-offload bond0 | grep generic-receive-offload:
generic-receive-offload: on
#  ethtool --offload  bond0 generic-receive-offload off
#  ethtool --show-offload bond0 | grep generic-receive-offload:
generic-receive-offload: off

Speed via wget doesn't seem to be any better:
Code:
# wget -4 https://mirror.aarnet.edu.au/pub/fedora/linux/releases/42/KDE/x86_64/iso/Fedora-KDE-Desktop-Live-42-1.1.x86_64.iso
--2025-06-12 18:01:36--  https://mirror.aarnet.edu.au/pub/fedora/linux/releases/42/KDE/x86_64/iso/Fedora-KDE-Desktop-Live-42-1.1.x86_64.iso
Resolving mirror.aarnet.edu.au (mirror.aarnet.edu.au)... 202.158.214.106
Connecting to mirror.aarnet.edu.au (mirror.aarnet.edu.au)|202.158.214.106|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2844538880 (2.6G) [application/octet-stream]
Saving to: ‘Fedora-KDE-Desktop-Live-42-1.1.x86_64.iso’

Fedora-KDE-Desktop-Live-42-1.1.x86_6   9%[=====>                                                              ] 256.65M  5.59MB/s    eta 5m 43s

Interestingly, getting someone to pull the same file off my server via IPv4 gets 100 MB/s, yet via IPv6 only about 500 KB/s.

Yet downloading *to* the server gets sub-12 MB/s. So uploading via IPv4 at least seems fine, downloading via IPv4 is slow, and IPv6 stays under 10 Mbit no matter which direction.
 
PVE is my gateway - it has a WAN subnet (a public /29), and the VMs all hang out on vmbrX which isn't bound to any network adapter.
 
1) Check with iperf3 in both directions; would love to see those numbers (rough commands sketched after this list)
2) Check for local storage issues
3) Also check for packet losses and reassembly in the slow direction (especially as it's the bigger packet sizes that could be causing the problems)
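
Something like this would cover points 1 and 3 (10.0.0.50 stands in for a fast host near the affected guest; tap124i0 is the slow VM's tap device from the earlier post):

Code:
# run from inside the slow guest
iperf3 -c 10.0.0.50          # guest -> remote
iperf3 -c 10.0.0.50 -R       # remote -> guest (the slow direction here)

# capture the slow direction on the hypervisor to look at retransmits/reassembly
tcpdump -ni tap124i0 -w slow-download.pcap port 5201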

Have you made sure of the MTU sizes? How do the packet captures compare for both directions and both protocols?
Double-check the fast vs. slow VM configs, and have you power-cycled the older/faster VMs?
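
A quick way to sanity-check the MTU question and the config comparison (the payload sizes assume a 1500-byte MTU; FASTID/SLOWID are whatever VM IDs you are comparing):

Code:
# largest unfragmented payloads for a 1500 MTU: 1472 (IPv4), 1452 (IPv6)
ping    -M do -s 1472 -c 3 mirror.aarnet.edu.au
ping -6 -M do -s 1452 -c 3 mirror.aarnet.edu.au

# diff a known-fast guest's config against a slow one
diff <(qm config FASTID) <(qm config SLOWID)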

If you full clone a "fast/old" VM, does that also run fast or slow?