Slow networking between VMs

ThinGlizzy

New Member
Dec 27, 2025
I've had this issue for a while and have not been able to narrow it down to a specific cause. I have two VMs, one Windows 11 and the other UnRaid, and they share a 10Gb Mellanox CX4 virtual NIC. I can run a speed test and get above 1Gbps on each machine, but between them I am unable to get over 1Gbps. I don't think this is a VM configuration issue, since the individual VMs work as expected. Any help would be appreciated. I'm not sure which logs would be worth including, so if you have any ideas I'm all ears.

Additional info: the host and both VMs share the same interface, and the host is also getting the expected networking speed.
 
Welcome to the forum!

What do you mean by they "share a 10Gb Mellanox CX4 virtual NIC"? Does your Proxmox server have a 10G Mellanox NIC installed that is used by your PVE instance and your VMs, so that the VMs (probably) use a VirtIO NIC on top of that 10G hardware NIC?

Helpful logs (please include each output in a separate [CODE] [/CODE] tag; example commands below):
  • pveversion
  • qm config <VMID> --current for both VMs
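For reference, those can be collected on the PVE host roughly like this (a sketch; <VMID> is a placeholder for your actual VM IDs):

Code:
# run on the Proxmox host
pveversion
# list the VM IDs if unsure
qm list
# then, for each of the two VMs
qm config <VMID> --current
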
Are these VMs in the same L2 broadcast domain, or does the traffic have to travel outside of the server?
 
Network traffic between VMs uses the default vmbr0 bridge (a virtual software switch) handled by the CPU; the NIC hardware is not used.
That statement is not generally correct. It only applies if both VMs reside in the same L2 domain, which is why I explicitly asked about this scenario.
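A quick way to check that (a sketch, assuming both VMs run on the same PVE host) is to compare the bridge and VLAN tag of each VM's NIC and confirm both tap devices hang off the same bridge:

Code:
# on the Proxmox host: compare bridge (and VLAN tag, if any) for each VM's NICs
qm config 100 --current | grep '^net'
qm config 101 --current | grep '^net'
# list the interfaces attached to vmbr0; both VMs' tap devices should show up here
bridge link show | grep vmbr0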
 
Are you traversing VLANs, or doing something else that forces your traffic through the router? Within the same VLAN and on the same vmbr in Proxmox, the network interface doesn't matter. The traffic will go as fast as your motherboard can handle, since none of it needs to traverse the NIC or go out to the switch or router. If, on the other hand, the VMs are on different VLANs, then whatever device is doing your routing is now the bottleneck.
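One quick way to confirm whether the traffic is bridged or is being routed (a sketch; run from the Linux-side VM, with the other VM's address substituted in) is to ask the kernel which route it would take:

Code:
# inside the Linux VM: which route would be used to reach the other VM?
# <other-vm-ip> is a placeholder for the Windows VM's address
ip route get <other-vm-ip>
# "dev eth0" with no "via ..." means the traffic stays on the local L2 segment;
# "via <gateway>" means it is routed and the router becomes the bottleneck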
 
Thanks for all the fast responses. As requested:
Code:
pve-manager/9.1.2/9d436f37a0ac4172 (running kernel: 6.17.4-1-pve)


Code:
root@pve:~# qm config 100 --current
agent: 1
audio0: device=intel-hda,driver=none
balloon: 0
bios: ovmf
boot: order=ide0
cores: 6
cpu: host
description: ide1%3A PVE1_NVME_1TB_1%3Avm-100-disk-0,backup=0,size=500G,ssd=1
efidisk0: PVE1_ZFS_960GB:vm-100-disk-1,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:08:00
ide0: PVE1_ZFS_960GB:vm-100-disk-0,size=227G,ssd=1
machine: pc-q35-10.1,viommu=virtio
memory: 8192
meta: creation-qemu=7.1.0,ctime=1673192477
name: Win11
net0: virtio=BC:24:11:6F:BA:6C,bridge=vmbr0,queues=3
numa: 0
onboot: 1
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=814e3e86-df6a-4e0e-8caa-adcee7fee82d
sockets: 1
tags:
tpmstate0: PVE1_ZFS_960GB:vm-100-disk-2,size=4M,version=v2.0
vmgenid: 1d83e86d-ff52-439c-925d-85f0b752f229


Code:
root@pve:~# qm config 101 --current
allow-ksm: 0
balloon: 0
bios: ovmf
boot: order=usb0
cores: 10
cpu: host
efidisk0: PVE1_ZFS_960GB:vm-101-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:0c:00.0
hostpci1: 0000:0b:00.0
hostpci2: 0000:05:00.0
hostpci3: 0000:01:00.0
hotplug: usb
machine: q35
memory: 12288
meta: creation-qemu=10.0.2,ctime=1761318077
name: UnRaid
net0: virtio=BC:24:11:F7:C4:FE,bridge=vmbr0,queues=5
net1: virtio=BC:24:11:20:16:60,bridge=vmbr5
numa: 0
onboot: 1
ostype: l26
scsi0: PVE1_ZFS_960GB:vm-101-disk-1,iothread=1,size=60G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=c819e2ea-33a1-4dff-878b-3af09e2b5bc7
sockets: 1
tags: dmz-dockers
tpmstate0: PVE1_ZFS:vm-101-disk-1,size=4M,version=v2.0
usb0: host=1-12
vmgenid: 84d751b3-987b-4bcb-b8d7-164cbe1f1821

To clarify, traffic is on the same bridge and on the same subnet. Assuming I'm CPU-limited, is there any way to speed this up? My CPU is a 12600K, for reference. Additionally, I may try placing the VMs on different vmbrs on the same NIC.
 
Let's see the contents of your /etc/network/interfaces file, please, along with the output of ip a for both VMs. Also, can you share the iperf commands you used for the test?
 
Also, one of the interfaces appears to be missing an IP?
This VM has a separate connection for external access on a different VLAN.
Let's see the contents of your /etc/network/interfaces file, please, along with the output of ip a for both VMs. Also, can you share the iperf commands you used for the test?
Please see below. In the interest of being lazy I did not use iperf. I used speedtest.net for an external speed test to verify my greater-than-1Gbps connection (I have multi-gig WAN), and I used an OpenSpeedTest Docker container, which I verified from a separate system can exceed 1Gbps.

Code:
Ethernet adapter Ethernet 4:

   Connection-specific DNS Suffix  . : domain.internal
   IPv4 Address. . . . . . . . . . . : 10.10.50.16
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 10.10.50.1
Code:
5: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether bc:24:11:f7:c4:fe brd ff:ff:ff:ff:ff:ff
    inet 10.10.50.6/24 metric 1 scope global eth0
       valid_lft forever preferred_lft forever



Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto nic1
iface nic1 inet manual

auto nic2
iface nic2 inet manual

auto nic0
iface nic0 inet manual

iface nic3 inet manual

iface wlo1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.10.50.10/24
        gateway 10.10.50.1
        bridge-ports nic1
        bridge-stp off
        bridge-fd 0
#LAN, SFP10G

auto vmbr4
iface vmbr4 inet manual
        bridge-ports nic2
        bridge-stp off
        bridge-fd 0
#10G alt DMZ

auto vmbr5
iface vmbr5 inet manual
        bridge-ports nic0
        bridge-stp off
        bridge-fd 0
#DMZ (On board)
 
Traffic between those VMs should never leave the host; the Linux bridge should handle it entirely in memory, which should easily saturate 10Gbps+. The problem is likely your testing methodology. OpenSpeedTest is HTTP-based, with significant overhead. The 1Gbps cap is suspiciously exactly GbE speed, which suggests something specific is throttling it, but it's almost certainly not the virtio NICs or the bridge.

What I would do next is test properly with iperf3 (example run sketched below):
- Install iperf3 on both VMs
- On UnRaid, run "iperf3 -s"
- On Windows, run "iperf3 -c 10.10.50.6 -P 4" (4 parallel streams)
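Put together, a minimal run could look like this (10.10.50.6 is the UnRaid VM's address from the ip a output above):

Code:
# on the UnRaid VM (server side)
iperf3 -s
# on the Windows VM (client side), 4 parallel streams towards the UnRaid VM
iperf3 -c 10.10.50.6 -P 4
# also test the reverse direction, since a bottleneck can be asymmetric
iperf3 -c 10.10.50.6 -P 4 -R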

Question 1: How is the OpenSpeedTest container networked in UnRaid? If it's using Docker's default bridge with NAT (port mapping), that adds overhead. Host networking mode would be faster.
Question 2: Check Windows virtio drivers - Do you have the latest VirtIO drivers from Fedora/Red Hat installed? Old drivers can cap performance.
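On Question 1: if the container currently sits on Docker's default bridge with port mapping, running it with host networking instead would look roughly like this (a sketch; <openspeedtest-image> is a placeholder for whatever image the UnRaid template uses):

Code:
# hypothetical: run the speed test container on the host network to bypass Docker's NAT/port mapping
docker run -d --name openspeedtest --network host <openspeedtest-image>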

My guess is the 1Gbps cap is either the OpenSpeedTest container's Docker networking configuration or just HTTP overhead. iperf3 will confirm whether the underlying network path is actually fast.
 
As long as the OP is not running a multithreaded iperf test between said VMs, any further digging makes no sense, unfortunately.