[SOLVED] 1 Gbps Limit VM to/from Host (RTL8125)

j0nspm

New Member
Jan 2, 2025
Hi all,

I've spent days trying to figure this out to no avail.

Quick details: the NIC is a Realtek RTL8125.

iperf3 from my host to another machine on the same switch can hit 2.5 Gbps, no problem.

Going from a VM on the host to the host itself, I'm limited to 1 Gbps. I've tried setting the MTU to 9000 on all of my network interfaces, no change. Tried changing the VM CPU type to host, no change. My physical NIC detects 2500baseT/Full no problem, and my bridge reports 10000Mb/s (at least as displayed).
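For anyone wanting to reproduce, the tests below are plain iperf3 runs along these lines (IPs are the ones used throughout this post):

```shell
# Server side (e.g. on the host, 10.0.0.21)
iperf3 -s

# Client side (e.g. in the VM): default 10-second TCP test toward the host
iperf3 -c 10.0.0.21

# Same pair, but with the server side sending instead (-R = reverse mode)
iperf3 -c 10.0.0.21 -R
```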

04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)

Settings for enp4s0:
Supported ports: [ TP MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
2500baseT/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
2500baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Link partner advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
2500baseT/Full
Link partner advertised pause frame use: Symmetric Receive-only
Link partner advertised auto-negotiation: Yes
Link partner advertised FEC modes: Not reported
Speed: 2500Mb/s
Duplex: Full
Auto-negotiation: on
master-slave cfg: preferred slave
master-slave status: slave
Port: Twisted Pair
PHYAD: 0
Transceiver: external
MDI-X: Unknown
Supports Wake-on: pumbg
Wake-on: d
Link detected: yes

Settings for vmbr0:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Unknown! (255)
Auto-negotiation: off
Port: Other
PHYAD: 0
Transceiver: internal
Link detected: yes

IPERF3 from my host to a Windows box on the same switch:

Connecting to host 10.0.0.188, port 5201
[ 5] local 10.0.0.21 port 44012 connected to 10.0.0.188 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 282 MBytes 2.37 Gbits/sec 0 267 KBytes
[ 5] 1.00-2.00 sec 280 MBytes 2.35 Gbits/sec 0 267 KBytes
[ 5] 2.00-3.00 sec 282 MBytes 2.37 Gbits/sec 0 535 KBytes
[ 5] 3.00-4.00 sec 281 MBytes 2.36 Gbits/sec 0 535 KBytes
[ 5] 4.00-5.00 sec 280 MBytes 2.35 Gbits/sec 0 535 KBytes
[ 5] 5.00-6.00 sec 281 MBytes 2.35 Gbits/sec 0 535 KBytes
[ 5] 6.00-7.00 sec 281 MBytes 2.36 Gbits/sec 0 535 KBytes
[ 5] 7.00-8.00 sec 281 MBytes 2.36 Gbits/sec 0 535 KBytes
[ 5] 8.00-9.00 sec 280 MBytes 2.35 Gbits/sec 0 535 KBytes
[ 5] 9.00-10.00 sec 281 MBytes 2.35 Gbits/sec 0 535 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.74 GBytes 2.36 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 2.74 GBytes 2.35 Gbits/sec receiver


IPERF3 from a VM to the host (and vice versa):

Connecting to host 10.0.0.21, port 5201
[ 5] local 10.1.0.109 port 40430 connected to 10.0.0.21 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 112 MBytes 943 Mbits/sec 158 962 KBytes
[ 5] 1.00-2.00 sec 111 MBytes 935 Mbits/sec 0 1.02 MBytes
[ 5] 2.00-3.00 sec 110 MBytes 923 Mbits/sec 0 1.09 MBytes
[ 5] 3.00-4.00 sec 111 MBytes 929 Mbits/sec 0 1.15 MBytes
[ 5] 4.00-5.00 sec 112 MBytes 938 Mbits/sec 26 915 KBytes
[ 5] 5.00-6.00 sec 110 MBytes 924 Mbits/sec 0 1004 KBytes
[ 5] 6.00-7.00 sec 110 MBytes 921 Mbits/sec 0 1.06 MBytes
[ 5] 7.00-8.00 sec 111 MBytes 931 Mbits/sec 0 1.14 MBytes
[ 5] 8.00-9.00 sec 112 MBytes 940 Mbits/sec 35 898 KBytes
[ 5] 9.00-10.00 sec 111 MBytes 929 Mbits/sec 0 996 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.08 GBytes 931 Mbits/sec 219 sender
[ 5] 0.00-10.01 sec 1.08 GBytes 929 Mbits/sec receiver

Pretty much out of ideas; all my VMs on this host behave like this.

Any help would be greatly appreciated. I've found a bunch of threads but no real answers for onboard 2.5 Gbps NICs: either the network driver was wrong and 2500baseT/Full wasn't detected, or people ended up adding a secondary card.
 
Hello j0nspm! Could you please post the VM configuration of one of the VMs that are limited to 1 Gbps (the output of qm config <VMID>)?

I'm thinking that the wrong NIC might be configured - please check out our documentation on Network Devices. For example, the Intel E1000 emulates a 1 Gbps NIC. We recommend using the VirtIO paravirtualized NIC for the best performance. Of course, for Windows VMs, this means that you'll need to install the VirtIO drivers - please check the wiki page about the Windows VirtIO Drivers.
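For example, checking and switching the model from the CLI would look something like this (VMID 100 and bridge vmbr0 are placeholders for your own values):

```shell
# Show the currently configured network devices of the VM
qm config 100 | grep ^net

# Switch net0 to the paravirtualized VirtIO model
# (takes effect after a full VM stop/start)
qm set 100 --net0 virtio,bridge=vmbr0
```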
 
Hello j0nspm! Could you please post the VM configuration of one of the VMs that are limited to 1 Gbps (the output of qm config <VMID>)?

I'm thinking that the wrong NIC might be configured - please check out our documentation on Network Devices. For example, the Intel E1000 emulates a 1 Gbps NIC. We recommend using the VirtIO paravirtualized NIC for the best performance. Of course, for Windows VMs, this means that you'll need to install the VirtIO drivers - please check the wiki page about the Windows VirtIO Drivers.
I am using VirtIO for all my Windows and Linux VMs.

Here's one of the VMs I'm working on currently:

[screenshots: VM hardware and network device configuration]

Here's a Windows VM:

[screenshots: Windows VM hardware and network device configuration]

Here is my host networking:

[screenshot: host network configuration]
 
Thanks for the info. The paravirtualized VirtIO network driver should expose the full speed, even if it only reports 1000 Mbps. Just wondering, does it help if you use Multiqueue? Please note the additional steps required to use Multiqueue in Windows guests.
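As a sketch (VMID 100 and a 4-vCPU Linux guest assumed), Multiqueue is set on the NIC from the host and then enabled inside the guest:

```shell
# On the Proxmox host: give net0 as many queues as the VM has vCPUs
qm set 100 --net0 virtio,bridge=vmbr0,queues=4

# Inside the Linux guest: check, then raise, the number of combined channels
ethtool -l eth0
ethtool -L eth0 combined 4
```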
 
IPERF3 from a VM to the host (and vice versa):

Connecting to host 10.0.0.21, port 5201
[ 5] local 10.1.0.109 port 40430 connected to 10.0.0.21 port 5201
...
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.08 GBytes 931 Mbits/sec 219 sender
[ 5] 0.00-10.01 sec 1.08 GBytes 929 Mbits/sec receiver
Note that your VM is 10.1.0.109, which is not in the same 10.0.0.0/24 subnet that is shown in your host vmbr0.1 configuration.

What is the router between the 10.0.0.0/24 and (assumed) 10.1.0.0/24 networks?
 
Thanks for the info. The paravirtualized VirtIO network driver should expose the full speed, even if it only reports 1000 Mbps. Just wondering, does it help if you use Multiqueue? Please note the additional steps required to use Multiqueue in Windows guests.

I will test and follow up to see.

Is the host IP in the same subnet and the same VLAN as the VM?
The host is on 10.0.0.0, VLAN 0; the VM is on bridge VLAN 101, but I moved it out of VLAN 101, no change.

Note that your VM is 10.1.0.109, which is not in the same 10.0.0.0/24 subnet that is shown in your host vmbr0.1 configuration.

What is the router between the 10.0.0.0/24 and (assumed) 10.1.0.0/24 networks?

Good eye! 10.1.0.109 is in VLAN 101 and lives on the Proxmox host at 10.0.0.21. The switch the host connects to is a QNAP QSW-2104-2T-A-US, on a 2.5 Gbps port. Connecting from the VM (10.1.0.109) to another machine on the same switch (my Windows PC, which has a 2.5 Gbps NIC), I still get 1 Gbps. From the host (10.0.0.21) to my PC (10.0.0.188) I get 2.5 Gbps (first part of the original post).

So the switch doesn't seem to be the problem. I moved the host to a 10 Gbps port for giggles and still don't see 2.5 Gbps from the VM to my PC or to the host it lives on.
 
Good eye! 10.1.0.109 is in VLAN 101 and lives on the Proxmox host at 10.0.0.21. The switch the host connects to is a QNAP QSW-2104-2T-A-US, on a 2.5 Gbps port. Connecting from the VM (10.1.0.109) to another machine on the same switch (my Windows PC, which has a 2.5 Gbps NIC), I still get 1 Gbps. From the host (10.0.0.21) to my PC (10.0.0.188) I get 2.5 Gbps (first part of the original post).

So the switch doesn't seem to be the problem. I moved the host to a 10 Gbps port for giggles and still don't see 2.5 Gbps from the VM to my PC or to the host it lives on.

Based on a quick search, the QSW-2104-2T is an unmanaged L2 switch, so it is not the router.

So the question remains: what is the router between the 10.0.0.0/24 and (assumed) 10.1.0.0/24 subnets, and further, why/how is it limiting the bandwidth between the subnets? Maybe check with the network admins?
 
Based on a quick search, the QSW-2104-2T is an unmanaged L2 switch, so it is not the router.

So the question remains: what is the router between the 10.0.0.0/24 and (assumed) 10.1.0.0/24 subnets, and further, why/how is it limiting the bandwidth between the subnets? Maybe check with the network admins?

Apologies: the reason I replied about the switch is that I've taken the router out of the equation by unplugging it and hard-coding IPs, thus bypassing any routing requirements.

That said, the router is a Ubiquiti UDM Pro, connected to this switch over 10 Gbps. So I have 2.5 Gbps from the host to the switch, and 10 Gbps from the switch to the router.

There's no network limiting in place; I am the network admin.

The VM itself, on the same Proxmox host, cannot talk to the host at 2.5 Gbps (iperf3 shows 1 Gbps from the VM to the local host), so traffic isn't leaving the Proxmox ecosystem at anything other than 1 Gbps. This happens on all the VMs, Windows and Linux alike.

Thanks for the info. The paravirtualized VirtIO network driver should expose the full speed, even if it only reports 1000 Mbps. Just wondering, does it help if you use Multiqueue? Please note the additional steps required to use Multiqueue in Windows guests.

Following up: I set Multiqueue to 2 on the Linux VM I'm testing with, as it's assigned 2 cores (1 socket). No difference.

I'm going to flatten the network again in Proxmox and see what I can discover.
 
Quick update.

So when I just move the VM from VLAN 101 to VLAN 1 to match the management VLAN and hard code the IP on the VM, I now reach VM to host speeds I'd expect:

Server listening on 5201 (test #5)
-----------------------------------------------------------
Accepted connection from 10.0.0.114, port 34344
[ 5] local 10.0.0.21 port 5201 connected to 10.0.0.114 port 34346
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 6.13 GBytes 52.6 Gbits/sec
[ 5] 1.00-2.00 sec 5.23 GBytes 44.9 Gbits/sec
[ 5] 2.00-3.00 sec 5.45 GBytes 46.8 Gbits/sec
[ 5] 3.00-4.00 sec 5.79 GBytes 49.8 Gbits/sec
[ 5] 4.00-5.00 sec 5.65 GBytes 48.5 Gbits/sec
[ 5] 5.00-6.00 sec 6.09 GBytes 52.3 Gbits/sec
[ 5] 6.00-7.00 sec 5.60 GBytes 48.1 Gbits/sec
[ 5] 7.00-8.00 sec 5.68 GBytes 48.8 Gbits/sec
[ 5] 8.00-9.00 sec 5.86 GBytes 50.3 Gbits/sec
[ 5] 9.00-10.00 sec 5.77 GBytes 49.6 Gbits/sec
[ 5] 10.00-10.00 sec 7.21 MBytes 44.6 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 57.3 GBytes 49.2 Gbits/sec receiver

So it appears that as soon as I switch the VLAN on the VM from 1 to 101, it immediately drops to 1 Gbps:

-----------------------------------------------------------
Server listening on 5201 (test #6)
-----------------------------------------------------------
Accepted connection from 10.1.0.109, port 45766
[ 5] local 10.0.0.21 port 5201 connected to 10.1.0.109 port 45772
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 106 MBytes 885 Mbits/sec
[ 5] 1.00-2.00 sec 78.1 MBytes 655 Mbits/sec
[ 5] 2.00-3.00 sec 86.9 MBytes 729 Mbits/sec
[ 5] 3.00-4.00 sec 111 MBytes 929 Mbits/sec
[ 5] 4.00-5.00 sec 110 MBytes 923 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-5.00 sec 494 MBytes 829 Mbits/sec receiver
iperf3: the client has terminated


No changes on the hardware networking side. I even unplugged the network from the host, went in locally, and tested from the host to the VM: same issue with no network attached, VLAN 101 is 1 Gbps, VLAN 1 is ~50 Gbps.

Looking at the Proxmox networking, I can't find any reason why specifying a VLAN would drop the bandwidth.

[screenshots: Proxmox network configuration]
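Since the only variable between the fast and slow runs is the VLAN tag on the VM's NIC, it's quick to toggle from the CLI for testing (VMID 100 is a placeholder):

```shell
# Tag the VM's NIC into VLAN 101
qm set 100 --net0 virtio,bridge=vmbr0,tag=101

# Remove the tag again (back on the untagged/management VLAN)
qm set 100 --net0 virtio,bridge=vmbr0
```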
 

The thing is, your vmbr0 is just a switch, not a router. It does not route packets between VLANs 1 and 101.

What is the default gateway address of your 10.1.0.109 VM? On which device is that IP address configured?
 
The thing is, your vmbr0 is just a switch, not a router. It does not route packets between VLANs 1 and 101.

What is the default gateway address of your 10.1.0.109 VM? On which device is that IP address configured?

I understand that.

The gateway is 10.1.0.1, configured on the Ubuntu VM inside Proxmox, which has a VM IP of 10.1.0.109. I also have 10.1.0.1 on my UDM Pro for VLAN 101, so the network is aware of that gateway.

I have an ESXi host on this same L2 QNAP switch, running a DVS, and its VMs push 2.5 Gbps just fine to my Windows PC on the VLAN 1 network and to other machines on both VLAN 101 and VLAN 1. The limit appears to be coming from Proxmox. If you didn't catch my earlier reply, it occurs on the Proxmox side as soon as I move the VM in and out of VLAN 1.

This tells me the Proxmox host itself is configuring vmbr0.101 and vmbr0.1 with something that causes the limit. None of my other hypervisors (I have ESXi and XenServer on other boxes, all with a similar config) experience this on the same switch-to-router network flow. All have similar configurations: VLAN 1 for management, VLAN 101 for the lab network.

I've been digging around the forum and found posts here as well as reddit with similar issues but no resolutions.
 
I even unplugged the network from the host and locally went in and tested from the host to the VM, same issue with no network, vlan 101 is 1 Gbps
and
Gateway is 10.1.0.1, this is configured on the Ubuntu VM within proxmox with a VM IP of 10.1.0.109. I have a 10.1.0.1 on my UDM Pro as a VLAN 101 so the network is aware of that gateway for communications.
What's your best idea how the traffic was routed between the VLANs 1 and 101 when you had your physical network disconnected from the PVE host (= UDM Pro 10.1.0.1 was not reachable from the VM in VLAN 101)?
 
Maybe check the ARP table on 10.1.0.109 to see the MAC address of 10.1.0.1 to figure out what is the component routing between the VLANs.
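On a Linux guest that would be something like:

```shell
# Show the neighbour (ARP) entry the VM holds for its gateway
ip neigh show 10.1.0.1

# Legacy equivalent, if net-tools is installed
arp -n 10.1.0.1
```

The MAC shown there can then be matched against the candidate devices (UDM Pro, host bridge, another VM) to see which one is actually doing the inter-VLAN routing.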
 
Checked ARP; I can see which interface it's coming through on my UDM Pro (0/9), and validated it's the correct port coming in from VLAN 101 ('Learned'). This box is one port to the L2 switch, then the L2 switch to the UDM Pro (router) on 0/9.

VLAN ID MAC Address Interface IfIndex Status
------- ------------------ --------------------- ------- ------------
101 BC:24:11:F9:62:EB 0/9 1 Learned

[screenshot: UDM Pro MAC table]

On a whim, I just blew out the /etc/network/interfaces file (the original config is in the screenshot below).

[screenshot: original /etc/network/interfaces]

Went back in, reconfigured VLAN 1 so I could get into it (for management), re-set up vmbr0.101, saved everything, rebooted again, and now I'm getting full speed.

[screenshot: full-speed result after the rebuild]

No idea why.
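For anyone who lands here later, the rebuilt config ended up roughly along these lines (interface names and addresses are my setup, and the gateway shown is an assumption for the management VLAN; adjust to taste):

```
auto lo
iface lo inet loopback

iface enp4s0 inet manual

auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management IP on VLAN 1
auto vmbr0.1
iface vmbr0.1 inet static
    address 10.0.0.21/24
    gateway 10.0.0.1

# Lab VLAN interface
auto vmbr0.101
iface vmbr0.101 inet manual
```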
 
Additional food for thought: since your switch is unmanaged and does not support VLANs, there is no real separation between the VLANs you are using on the network. The switch will happily relay frames between all hosts, either by flooding (unknown unicast, broadcast, multicast) or based on learned MAC addresses. The IP configuration on each host still naturally affects how the hosts try to communicate, but it is weak separation when frame-level reachability exists anyway.