Slow networking between VMs

ThinGlizzy

New Member
Dec 27, 2025
I've had this issue for a while and have not been able to narrow it down to a specific cause. I have two VMs, one Windows 11 and the other UnRaid, and they share a 10Gb Mellanox CX4 virtual NIC. Each machine gets above 1 Gbps in a speed test, but between them I can't get over 1 Gbps. I don't think this is a VM configuration issue, since each VM individually works as expected. Any help would be appreciated. I'm not sure which logs would be worth including, so if you have any ideas I'm all ears.

Additional info: the host and both VMs share the same interface, and the host also gets the expected network speed.
 
Welcome to the forum!

What do you mean by "share a 10Gb Mellanox CX4 virtual NIC"? Does your Proxmox server have a 10G Mellanox NIC installed that is used by your PVE instance and your VMs, so the VMs (probably) use a VirtIO NIC backed by that 10G hardware NIC?

Helpful logs (please include each output in a separate [CODE] [/CODE] tag):
  • pveversion
  • qm config <VMID> --current for both VMs
Are these VMs in the same L2 broadcast domain, or does the traffic have to travel outside of the server?
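For reference, the outputs requested above can be collected on the PVE host like this (the VMIDs 100 and 101 are examples; substitute your own):

```shell
# Proxmox version and running kernel
pveversion

# Current configuration of each VM, one run per VMID
qm config 100 --current
qm config 101 --current
```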
 
network between VMs uses the default vmbr0 bridge (a virtual software switch) handled by the CPU; the NIC hardware is not used.
That statement is incorrect as written: it only applies if both VMs reside in the same L2 domain, which is why I asked explicitly about this scenario.
 
Are you traversing VLANs, or doing something else that forces your traffic through the router? Within the same VLAN and on the same vmbr in Proxmox, the network interface doesn't matter: the traffic will go as fast as your motherboard can handle, since none of it needs to traverse the NIC or go out to the switch or router. If, on the other hand, the VMs are on different VLANs, then whatever device is doing your routing becomes the bottleneck.
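A direct way to check this is an iperf3 run between the two VMs across the bridge (a sketch; the IP is a placeholder for the UnRaid VM's actual address):

```shell
# On the UnRaid VM: start an iperf3 server
iperf3 -s

# On the Windows VM: run a 30-second test with 4 parallel streams
# (parallel streams help rule out a single-stream or single-queue limit)
iperf3 -c 192.168.1.50 -P 4 -t 30
```

If the parallel-stream result is much higher than a single stream, the limit is likely per-connection or per-queue rather than the bridge itself.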
 
Thanks for all the fast responses, as requested
Code:
pve-manager/9.1.2/9d436f37a0ac4172 (running kernel: 6.17.4-1-pve)


Code:
root@pve:~# qm config 100 --current
agent: 1
audio0: device=intel-hda,driver=none
balloon: 0
bios: ovmf
boot: order=ide0
cores: 6
cpu: host
description: ide1%3A PVE1_NVME_1TB_1%3Avm-100-disk-0,backup=0,size=500G,ssd=1
efidisk0: PVE1_ZFS_960GB:vm-100-disk-1,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:08:00
ide0: PVE1_ZFS_960GB:vm-100-disk-0,size=227G,ssd=1
machine: pc-q35-10.1,viommu=virtio
memory: 8192
meta: creation-qemu=7.1.0,ctime=1673192477
name: Win11
net0: virtio=BC:24:11:6F:BA:6C,bridge=vmbr0,queues=3
numa: 0
onboot: 1
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=814e3e86-df6a-4e0e-8caa-adcee7fee82d
sockets: 1
tags:
tpmstate0: PVE1_ZFS_960GB:vm-100-disk-2,size=4M,version=v2.0
vmgenid: 1d83e86d-ff52-439c-925d-85f0b752f229


Code:
root@pve:~# qm config 101 --current
allow-ksm: 0
balloon: 0
bios: ovmf
boot: order=usb0
cores: 10
cpu: host
efidisk0: PVE1_ZFS_960GB:vm-101-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:0c:00.0
hostpci1: 0000:0b:00.0
hostpci2: 0000:05:00.0
hostpci3: 0000:01:00.0
hotplug: usb
machine: q35
memory: 12288
meta: creation-qemu=10.0.2,ctime=1761318077
name: UnRaid
net0: virtio=BC:24:11:F7:C4:FE,bridge=vmbr0,queues=5
net1: virtio=BC:24:11:20:16:60,bridge=vmbr5
numa: 0
onboot: 1
ostype: l26
scsi0: PVE1_ZFS_960GB:vm-101-disk-1,iothread=1,size=60G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=c819e2ea-33a1-4dff-878b-3af09e2b5bc7
sockets: 1
tags: dmz-dockers
tpmstate0: PVE1_ZFS:vm-101-disk-1,size=4M,version=v2.0
usb0: host=1-12
vmgenid: 84d751b3-987b-4bcb-b8d7-164cbe1f1821

To clarify: traffic is on the same bridge and on the same subnet. Assuming I'm CPU-limited, is there any way to speed this up? My CPU is a 12600K, for reference. I may also try placing the VMs on different vmbrs on the same NIC.
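If a single vCPU servicing the VirtIO queue is the limit, it may be worth verifying that multiqueue is actually active inside the Linux guest (a sketch; `eth0` is a placeholder for the VirtIO interface name, and `net0` on VM 101 already has `queues=5` configured):

```shell
# Inside the UnRaid guest: show configured vs. currently active combined queues
ethtool -l eth0

# Activate all queues exposed by the VirtIO NIC (matches queues=5 on net0)
ethtool -L eth0 combined 5
```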
 
Let's see the contents of your /etc/network/interfaces file, please. Also the output of ip a for both VMs, and the exact iperf commands you used for the test.
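For comparison, a typical /etc/network/interfaces bridge stanza looks like this (an illustration only, not the poster's actual config; enp1s0 and the addresses are placeholders):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
```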