inter-VM traffic on same node

jayg30

Member
Nov 8, 2017
I'm curious, does Proxmox have default mechanisms that route traffic between VMs and containers that reside on the same host without the traffic leaving the host to a physical switch? Basically a virtual switch. And if so are there limitations on the speed?

Some quick iperf3 tests seem to indicate it does, as I don't see traffic leaving the host. But throughput seems to be capped at about 1 Gbit/s.
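For reference, these were just plain single-stream runs between the two VMs, along these lines (the address below is only a placeholder, not my actual addressing):

# on VM 1, acting as the server
iperf3 -s

# on VM 2, the client, pointing at VM 1's address
iperf3 -c 192.168.1.10 -t 30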

The only interface for these VMs is a Linux bridge, which is attached to bond0, which in turn is attached to 4 x 1GbE NICs (LACP).

If I created a bridge with no physical NIC attached, added it to both VMs, and manually set IP addresses so they could communicate over it, would I get greater speeds?
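In other words, something like this in /etc/network/interfaces on the host (vmbr1 and the addressing are just made-up examples, classic ifupdown/bridge-utils syntax):

auto vmbr1
iface vmbr1 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0

and then a second virtio NIC on each VM attached to vmbr1, with static addresses like 10.10.10.11/24 and 10.10.10.12/24 set inside the guests.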
 
I'm curious, does Proxmox have default mechanisms that route traffic between VMs and containers that reside on the same host without the traffic leaving the host to a physical switch?

Yes, we use a Linux bridge device.
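For reference, the VM tap interfaces plugged into that bridge can be listed on the host, e.g. (assuming the default vmbr0 bridge name):

# list the ports attached to the bridge
brctl show vmbr0
# or, without bridge-utils
ip link show master vmbr0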

Basically a virtual switch. And if so are there limitations on the speed?

Not really (only CPU/memory bandwidth).
 
Which interface type did you use for your VMs?
E1000 or VirtIO?

VirtIO

CPU and RAM usage didn't spike at all on the host or in the VMs when I ran iperf3. It effortlessly hits 1 Gbit/s, so I figured it had to be a limit of the bridge. I read somewhere on these forums (years ago) that the interface was limited to the speed of the physical NIC it's attached to even when the traffic stays between VMs like this, but obviously that could have changed over the years.
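A further check here would be to push several parallel streams and see whether the cap is per stream or applies to the link as a whole (same placeholder address as above):

# four parallel streams instead of one
iperf3 -c 192.168.1.10 -P 4 -t 30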
 
Not really (only CPU/memory bandwidth).

In my test case both Linux KVM VMs and the Proxmox host have plenty of free CPU and RAM while running iperf3, and the bandwidth is so close to exactly 1 Gbit/s that it clearly looks like a limit is being imposed somewhere. So I'm a bit at a loss as to why I'm not seeing more than 1 Gbit/s if that's the case.

As I mentioned above, the VMs' interfaces use the Linux bridge, which is tied to a Linux bond of 4 physical 1GbE NICs. I had read a post on here, from one of the Proxmox developers years ago, implying that the bridge is limited to the physical NIC speed even if the traffic stays on the host. They suggested adding a second bridge not attached to any physical NIC and manually setting IPs in the VMs for that traffic. That could easily have changed over the years, though. It would be nice if I didn't have to do that to get speeds over 1 Gbit/s.
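The only other thing I can think to check is whether some rate limit is set on the virtual NIC itself; Proxmox does have a per-interface rate= option on the netX lines (in megabytes per second, I believe), which would show up in the VM config:

# check the VM's network device lines for a rate=... entry (<vmid> is the VM's ID)
qm config <vmid> | grep ^net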
 
