How to increase networking performance between 2 Linux VMs on the same host & bridge

iamspartacus

I'm trying to increase the networking performance between 2 Linux VMs on the same Proxmox host and bridge. I've tried both a Linux bridge and an OVS bridge, and I've tried increasing the MTU to 9000 and multiqueue to 4, but I can't seem to get above 25-30 Gbps between the VMs. The VMs run on an EPYC 7443P system, and when I monitor the CPU during testing it isn't even 25% utilized. I've tested with iperf (version 2), iperf3, NFS file transfers (very slow), and SMB file transfers (where I can get up to 2.5 GB/s).
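For context, a minimal sketch of how the multiqueue/MTU setup and a multi-stream test might look; the VMID 101, bridge vmbr0, guest interface ens18, and address 10.0.0.2 are placeholders, not values from this thread:

# On the Proxmox host: give the virtio NIC 4 queues and a 9000-byte MTU
# (the bridge itself also needs MTU 9000 for this to take effect end to end).
qm set 101 --net0 virtio,bridge=vmbr0,queues=4,mtu=9000

# Inside the guest: confirm the settings actually applied.
ip link show ens18      # should report "mtu 9000"
ethtool -l ens18        # "Combined" should show 4 queues

# Older iperf3 builds run all -P streams in a single thread, so it can also
# help to run several iperf3 processes on different ports in parallel.
iperf3 -s                          # on the receiving VM
iperf3 -c 10.0.0.2 -P 4 -t 30      # on the sending VM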

I'm just not sure where the bottleneck is.
 
No, you should try splitting your single CPU into 4 NUMA nodes in the BIOS, then assign CPU affinity manually to each VM based on the output of numactl -H.

Edit: btw, 30 Gb/s already seems fast.
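A rough sketch of what that could look like on this kind of EPYC host with NPS4 enabled in the BIOS; the VMID and CPU list below are assumptions for illustration, not values from the thread:

# Show the NUMA topology the BIOS exposes (with NPS4 a single socket appears as 4 nodes).
numactl -H

# Pin a VM's vCPU threads to the CPUs of one node. The --affinity option
# exists in Proxmox VE 7.3 and later; on older releases the same effect
# can be achieved with taskset on the VM's KVM process.
qm set 101 --affinity 0-5,24-29    # hypothetical CPU list for NUMA node 0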
 
No, you should try splitting your single CPU into 4 NUMA nodes in the BIOS, then assign CPU affinity manually to each VM based on the output of numactl -H.

Edit: btw, 30 Gb/s already seems fast.

How will that affect VMs that use more CPUs than a single NUMA node has?

As for 30 Gb/s being fast: sure, to some. But I'm trying to take advantage of some fast NVMe pools I have on another VM that are capable of 10 GB/s.
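As a rough sanity check on those numbers (plain arithmetic, ignoring protocol overhead): 10 GB/s of NVMe throughput is roughly 80 Gbit/s on the wire, so a 25-30 Gbit/s ceiling caps transfers at about 3-3.7 GB/s. One way to test the storage-over-network path directly is a parallel sequential read with fio against the NFS mount; the mount point and sizes below are placeholders:

# Run 4 parallel 1 MiB sequential readers against the NFS-mounted NVMe pool,
# bypassing the client page cache so the network path is what gets measured.
fio --name=seqread --directory=/mnt/nvme-nfs --rw=read --bs=1M \
    --size=4G --numjobs=4 --ioengine=libaio --direct=1 --group_reporting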
 