[SOLVED] How to reduce latency of internal bridge connection between two VMs

piddy

Member
Feb 13, 2021
I have two VMs on the same internal bridge (post about it here). When I ping one VM from the other, the latency is in the hundreds of milliseconds. This really surprised me: the switch is virtual and the traffic never leaves the physical host, so I expected very low latency. Is there a way for me to improve/reduce the latency?

System:
Motherboard: ROMED8-2T
Processor: EPYC 7262
RAM: DDR4, 2400 MT/s RDIMM (64GB)
Boot disk: zpool of 2 x 128GB NVMe m.2 SSDs

The two VMs are:
VM100 - TrueNAS Scale
VM110 - Debian server (will be used to run/manage Docker containers)

VM100 has one of the ROMED8-2T's 10GbE NICs passed through to it. That NIC sits on a different subnet (192.168.X.X/24). VM100 is also connected to my internal bridge ('media').
VM110 has two bridges: vmbr0 (for access to the other subnets on my LAN, as well as the WAN via my router) and the 'media' internal bridge.
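For context, the 'media' bridge is a plain Linux bridge on the Proxmox host with no physical ports attached. A minimal sketch of what that looks like in /etc/network/interfaces (the bridge name matches my setup, but the exact options shown are illustrative, not copied from my host):

```
auto media
iface media inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

With bridge-ports set to none, traffic between VMs on 'media' stays entirely in host memory, which is why low latency is expected.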

Any help would be gratefully accepted.

Best wishes,

Philip
 
Apologies - I'd misread the output of ping. The latency is actually very low: below 1 ms. When I read the output, I missed the leading '0.' and thought I was seeing latencies of hundreds of milliseconds rather than hundreds of microseconds. :rolleyes:
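In case it helps anyone else avoid the same misreading: ping reports the time= field in milliseconds, so "time=0.123 ms" is 123 microseconds. A quick sketch of the conversion (the sample reply line below is made up for demonstration):

```python
# Parse the "time=" field from a ping reply line and convert to
# microseconds, to show that "time=0.123 ms" means 123 us, not 123 ms.
import re

# Illustrative sample line; not real output from my VMs.
sample = "64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms"

match = re.search(r"time=([\d.]+) ms", sample)
rtt_ms = float(match.group(1))      # round-trip time in milliseconds
rtt_us = rtt_ms * 1000              # same value in microseconds

print(f"{rtt_ms} ms = {rtt_us:.0f} microseconds")
```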
 
