NIC Change - Poor Performance and Unexpected Config

greengolftee87

New Member
Feb 6, 2025
For the moment I'll hold off posting any configs, because at this point I'm assuming I'm missing something obvious, but maybe not.
In the quest for more speed I upgraded my box from 1G to 10G via a PCIe card. It shows up in the network config and all VMs are now attached to it, no problem.
The issue I'm seeing is performance. File transfers were unexpectedly slow, so I got out iperf (example commands below the list). WVM = Windows 10 VM, LVM = Linux VM. Here's what I'm seeing:
LVM to itself - 50 to 70 Gbit/s
WVM to itself - 2.1 Gbit/s
WVM to WVM - 2.75 Gbit/s
LVM to LVM - 2.3 Gbit/s
WVM to another PC on 10G - 600 Mbit/s
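
(For reference, these numbers came from plain iperf3 runs, roughly like the following; the target address is just an example:)

# on the receiving VM
iperf3 -s
# on the sending VM, pointed at the receiver's address
iperf3 -c 192.168.4.201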

So something is definitely wrong here. Everything is on VirtIO, and the Windows machines have the driver installed and report a 10G connection.
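
(For reference, each VM's NIC is attached along these lines in its config, as shown by qm config; the MAC address here is just a placeholder:)

net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1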
vmbr1 is the new bridge to enp5s0
auto lo
iface lo inet loopback

iface enp4s0 inet manual

auto enp5s0
iface enp5s0 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.4.199/24
gateway 192.168.4.1
bridge-ports enp4s0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

auto vmbr1
iface vmbr1 inet static
address 192.168.4.200/24
gateway 192.168.4.1
bridge-ports enp5s0
bridge-stp off
bridge-fd 0

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether d8:43:ae:6c:bf:12 brd ff:ff:ff:ff:ff:ff
3: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
link/ether 98:b7:85:21:67:16 brd ff:ff:ff:ff:ff:ff
4: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether f8:fe:5e:ab:69:cc brd ff:ff:ff:ff:ff:ff
altname wlp0s20f3
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d8:43:ae:6c:bf:12 brd ff:ff:ff:ff:ff:ff
inet 192.168.4.199/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::da43:aeff:fe6c:bf12/64 scope link
valid_lft forever preferred_lft forever
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 98:b7:85:21:67:16 brd ff:ff:ff:ff:ff:ff
inet 192.168.4.200/24 scope global vmbr1
valid_lft forever preferred_lft forever
inet6 fe80::9ab7:85ff:fe21:6716/64 scope link
valid_lft forever preferred_lft forever

The 2nd thing I noticed is that I can't access Proxmox VE over the new NIC, only the old one. Do I have to point the base OS towards the new NIC? This isn't critical, but I did make a note of it.
 
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 192.168.4.199/24 scope global vmbr0

6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 192.168.4.200/24 scope global vmbr1
Do not do that. At least not with additional magic.

You can have aliases with multiple addresses on one NIC or bridge. But you should have only one NIC/bridge per network.

The 2nd thing I noticed is that I can't access Proxmox VE over the new NIC, only the old one.
Yes that's one of the effects with this setup. And there are weirder ones...
 
I was afraid of losing access to the machine with an incorrect network config, so I was trying to "walk" the change in. I will split off one NIC onto a different network.
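
(Roughly what that could look like in /etc/network/interfaces, as a sketch: only one bridge keeps the default gateway, and the old 1G bridge moves to a separate subnet; the 192.168.5.x addressing is just an example.)

auto vmbr0
iface vmbr0 inet static
address 192.168.5.199/24
bridge-ports enp4s0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

auto vmbr1
iface vmbr1 inet static
address 192.168.4.200/24
gateway 192.168.4.1
bridge-ports enp5s0
bridge-stp off
bridge-fd 0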
 
I moved the original NIC off to another network and saw internal improvements within the Proxmox box, but external performance is still weak.

LVM to LVM - 40 Gbit/s
LVM to WVM - 10 Gbit/s
WVM to WVM - 7 Gbit/s
WVM to external Windows PC - 1 Gbit/s
LVM to external Windows PC - 1.4 Gbit/s

In reality I am getting faster speeds for file transfers than iperf shows, but not by much, around 2 Gbit/s.
 
The physical NIC isn't involved here.
Traffic between VMs stays on the Linux bridge, whose throughput is bound by your host CPU.
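
(A rough way to confirm that: watch the physical NIC's byte counters while an iperf test runs between two VMs on the same bridge; if they barely move, the traffic never touched the NIC. Interface name taken from the config above.)

ip -s link show enp5s0
# run the VM-to-VM iperf test, then compare the RX/TX byte counters
ip -s link show enp5s0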
I know, but I expect some consistency. Don't the wildly varying rates seem strange? I wanted to start there, before measuring over the network, to make sure the problem wasn't inside the box.
 
How did you get the varying rates?
Rates are tied to the guest VM's vCPU and the host's CPU usage.
On one particular Windows VM, just to see what would happen, I ran iperf against the loopback address and was only able to get 800 Mbit/s.
It has plenty of resources and the CPU utilization doesn't really even move. I'm going to fire up another Windows VM with a ton of power and check that across the bridge, but I'm not optimistic.
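
(That loopback test was just two iperf3 instances inside the same Windows VM, roughly like this:)

REM in one command prompt
iperf3.exe -s
REM in a second command prompt
iperf3.exe -c 127.0.0.1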
 
Check your Windows iperf3 version, because iperf3 used to be single-threaded (one CPU thread).
Only since version 3.16 (late 2023, IIRC) does the -P option enable multiple CPU threads in addition to multiple streams.
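
(A rough way to check, assuming iperf3 is on the PATH and using an example server address:)

iperf3 --version
# with 3.16 or newer, -P parallel streams also get their own worker threads
iperf3 -c 192.168.4.200 -P 4 -t 30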