Low throughput from Windows PC to PVE

logan893

I'm seeing annoyingly weird throughput behavior between my Windows PC, which has a Realtek 8156B 2.5Gb NIC attached via USB, and two separate PVE hosts.

Not the best scenario, to be fair, but I am able to reach 2.3 Gbps to some VMs, just not to the PVE hosts themselves.

Windows PC: Windows 10, Realtek 8156B, cat6 to switch
PVE Host 1: PVE 8.1.3, with an Intel X550, cat6 to switch and RJ45-to-SFP+ module for conversion
PVE Host 2: PVE 9.0.3, with a Mellanox ConnectX-2, DAC to switch
Switch: 2-port SFP+ and 8-port 2.5G RJ45

Systems are similar in terms of performance. PVE 1 is based on a Xeon E3-1220 v6, and PVE 2 is based on a Xeon E-2246G. Base load is higher for PVE 1, but typically less than 10%, while PVE 2 is a new system and only has the one Ubuntu VM currently running.

I am having slow uploads to both PVE hosts, especially host 1. During ISO uploads to PVE host 1 (using various browsers) I can reach only around 500-700 Mbps (sometimes as low as 300 Mbps) and to PVE host 2 I reach just below 1 Gbps.

I have confirmed these speeds using iperf3 in TCP mode; they are essentially the same or even lower. However, iperf3 in reverse mode (PVE hosts sending to the Windows PC) reaches over 2 Gbps.
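Roughly what I am running, with a placeholder for the PVE host address:

code_language.shell:
# on the PVE host: start an iperf3 server
iperf3 -s

# on the Windows PC: default direction (PC sends to the PVE host)
iperf3 -c <pve-host-ip>

# reverse mode (-R): the PVE host sends, the Windows PC receives
iperf3 -c <pve-host-ip> -R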

Where things get weird, however, is when testing with iperf3 between the Windows PC and an Ubuntu Desktop 24.04 VM on PVE host 2. This reaches over 2 Gbps in either direction.

Between PVE hosts 1 and 2, iperf3 reaches line rate at 9.4 Gbps.

Win10 --> PVE 1: 300-700 Mbps
Win10 <-- PVE 1: 2.3 Gbps

Win10 --> Ubuntu 25.10 @ PVE 1: 250-400 Mbps
Win10 <-- Ubuntu 25.10 @ PVE 1: 2.1 Gbps

Win10 --> PVE 2: 950 Mbps
Win10 <-- PVE 2: 2.3 Gbps

Win10 --> Ubuntu 24.04 @ PVE 2: 2.3 Gbps
Win10 <-- Ubuntu 24.04 @ PVE 2: 2.3 Gbps

PVE 1 --> PVE 2: 9.4 Gbps (both host-to-host and between Ubuntu VMs)
PVE 1 <-- PVE 2: 9.4 Gbps (both host-to-host and between Ubuntu VMs)

I am especially concerned about the TCP transfer speed to PVE host 1, since the slowness also affects transfers to its VMs.

Any ideas what's going on with PVE host 1, and how to improve these transfer speeds?

And, for PVE host 2, since I can reach 2.3 Gbps upload to a VM, is it reasonable to expect and achieve the same speeds to the host itself?
 
Hello,
The slow speed may be due to Windows and VirtIO NIC settings.
Please try setting Multiqueue on the Windows VM's VirtIO NIC to match the number of CPU cores, or half of them, and then check the network speed again.
For Linux-based systems, no additional configuration is usually needed as parallel processing is already built in.
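Roughly, from the PVE host shell (VM ID, bridge, and queue count here are only examples; the Multiqueue field is also available in the GUI on the VM's network device, under Advanced):

code_language.shell:
# give VM 100's first VirtIO NIC 4 multiqueue queues
# note: --net0 rewrites the whole net0 entry, so include the existing
# MAC (virtio=XX:XX:...) if you want to keep it
qm set 100 --net0 virtio,bridge=vmbr0,queues=4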
"In the case of the Windows servers I tested, the speed increased from 2–3 Gbps to over 8 Gbps after enabling Multi-Queue."
 
The Windows 10 PC is a physical machine, not a VM, and it is the side with the poor transfer speeds to the PVE hosts. I am testing single-stream throughput via iperf3 to mimic my typical use case.

With multiple parallel transfers in iperf3 from the Windows 10 PC to the PVE hosts, the behavior is inconsistent. I still only reach 950 Mbps total to PVE Host 2, as if it were capped at 1 Gbps physical speed. To PVE Host 1 I get higher speeds as I increase the number of streams, and with 5 or more parallel streams it can reach over 2 Gbps total.
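The parallel test, sketched with a placeholder address:

code_language.shell:
# 5 parallel TCP streams from the Windows PC to a PVE host
iperf3 -c <pve-host-ip> -P 5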

On PVE Host 1, two interfaces (one 1Gb + one 10Gb) are combined in a single Linux bridge, but the only network path from my Windows 10 PC to this host is via its 10Gb interface.

Focusing on the PVE Host 2 configuration for now:
I have two Linux bridges, each linked to a separate NIC and configured with unique IPs.
vmbr0 = 1Gb onboard Intel i210
vmbr1 = 10Gb Mellanox ConnectX-2

iperf3 runs in server mode on the host, listening on all interfaces. The Windows PC NIC has locally assigned IPs on both networks, so there should be no need to route anything.
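Sketched out, with x.x.x.x and y.y.y.y standing in for the vmbr0 and vmbr1 addresses:

code_language.shell:
# on PVE Host 2: a single iperf3 server, listening on all addresses
iperf3 -s

# on the Windows PC: test each bridge IP separately
iperf3 -c x.x.x.x    # vmbr0 address (1Gb Intel i210)
iperf3 -c y.y.y.y    # vmbr1 address (10Gb Mellanox ConnectX-2)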

With both NICs connected, I am limited to 1Gb speeds from the Windows 10 PC to PVE Host 2, regardless of which PVE Host 2 IP I am using as my destination in iperf3.

If I disconnect the 1Gb physical link, I can no longer reach the vmbr0 IP, but I can now achieve 2.3 Gbps speeds to vmbr1 IP.

Reconnecting vmbr0 link, and now performance is behaving as I would expect. Still able to achieve 2.3 Gbps to vmbr1 IP, and 950 Mbps to vmbr0 IP.

I then reboot the server... And, I am now limited to 950 Mbps again, to both vmbr0 and vmbr1.

It seems as though traffic from my Windows 10 PC towards the PVE host 2 vmbr1 IP is being forwarded through the PVE host 1 bridge and delivered to the PVE host 2 vmbr0 interface (which is connected to a different switch). When I disconnect the PVE host 1 10G link to the switch mentioned above, I am at first unable to reach the PVE host 2 vmbr1 IP, even though it should be local to the switch; after a few seconds it starts working again, and transfer speeds then reach 2.3 Gbps.

And "worse" yet, I am now also able to reach over 2 Gbps towards the vmbr0 IP, which is configured on a 1 Gbps physical interface. Clearly, Proxmox does not isolate the IPs configured on its Linux bridges.
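A quick sanity check I can run, to see which MAC each host IP resolves to and what the bridges have learned:

code_language.shell:
# on the Windows PC: which MAC answers for each PVE Host 2 IP?
arp -a

# on PVE Host 2: neighbour table and learned bridge entries
ip neigh show
bridge fdb show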

Why is Proxmox not restricting each IP to the vmbr and interface it is assigned to? Is this the expected out-of-the-box behavior? Can it be configured to honor the interface an address is assigned to?
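If this turns out to be the usual Linux ARP-flux behaviour (any interface answering ARP for any local address), a sketch of the sysctls I would expect to be involved, not yet verified on these hosts (the file name is arbitrary):

code_language.shell:
# arp_ignore=1: only answer ARP if the target IP is configured on the
#               interface the request arrived on
# arp_announce=2: always use the best local address as the ARP source
cat >/etc/sysctl.d/arp-isolation.conf <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p /etc/sysctl.d/arp-isolation.conf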
 
Does this internal cross-contamination of traffic somehow relate to the bridge config?

I would not expect the IP assigned to vmbr1 to be reachable via vmbr0 by default, but that is obviously what is happening.

d0:50:99:d3:10:78 = 1Gb NIC eno0, assigned to vmbr0
00:02:c9:53:fe:a4 = 10Gb NIC enp2s0, assigned to vmbr1

code_language.shell:
2: eno0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether d0:50:99:d3:10:78 brd ff:ff:ff:ff:ff:ff
    altname enxd05099d31078

4: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
    link/ether 00:02:c9:53:fe:a4 brd ff:ff:ff:ff:ff:ff
    altname enx0002c953fea4

(eno1 is physically disconnected)
code_language.shell:
auto vmbr0
iface vmbr0 inet static
        address x.x.x.x/24
        gateway x.x.x.1
        bridge-ports eno0 eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address y.y.y.y/24
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0

code_language.shell:
root@pve2:~# bridge fdb show | grep "00:02:c9:53:fe:a4"
00:02:c9:53:fe:a4 dev eno0 master vmbr0
00:02:c9:53:fe:a4 dev eno0 vlan 1 master vmbr0
00:02:c9:53:fe:a4 dev enp2s0 vlan 1 master vmbr1 permanent
00:02:c9:53:fe:a4 dev enp2s0 master vmbr1 permanent
root@pve2:~# bridge fdb show | grep "d0:50:99:d3:10:78"
d0:50:99:d3:10:78 dev eno0 vlan 1 master vmbr0 permanent
d0:50:99:d3:10:78 dev eno0 master vmbr0 permanent
d0:50:99:d3:10:78 dev enp2s0 vlan 1 master vmbr1
d0:50:99:d3:10:78 dev enp2s0 master vmbr1
root@pve2:~#
 