High latency between guests

dewangga

Member
May 2, 2020
Hello!

I am having an issue with latency between guest VMs. Attached is my production topology using Proxmox 5.4-3. The Proxmox node itself handles routing, so we have 4 routers on 1 node (3 in VMs, 1 on the hypervisor).

The problem is that VM B receives all routes from the client, and the node interfaces are 10G, but the latency seems unstable even though the throughput is only approx. 300 Mbps (see the attached bmon screenshot).

Is there any throughput limitation for VM-to-VM communication? eth0, eth1, and eth2 are using VirtIO NICs.
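For context, each of those NICs is just a qm-managed VirtIO device; a minimal sketch of how such a NIC is attached and inspected, with VM ID 101 and bridge vmbr0 as placeholder values (not my actual ones):

Code:
# Placeholder VM ID (101) and bridge (vmbr0); adjust to the real setup.
# Attach a VirtIO NIC as net0:
qm set 101 --net0 virtio,bridge=vmbr0
# Show the resulting VM configuration, including the net0..net2 lines:
qm config 101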

My pveversion:

Code:
proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-35
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-50
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-41
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-25
pve-cluster: 5.0-36
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-19
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-3
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-50
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 

Attachments

  • Screen Shot 2020-05-02 at 15.18.46.png
  • px.png
From the host to VM B, here's the ping result.

Code:
--- 103.136.x.x ping statistics ---
88 packets transmitted, 52 received, 40% packet loss, time 88281ms
rtt min/avg/max/mdev = 0.085/3.699/20.573/5.542 ms

But iperf3 reaches only about 1 Gbps (from VM B to the PVE node).
iperf3 -c 103.136.x.x

Code:
Connecting to host 103.136.x.x, port 5201
[  4] local 103.136.x.x port 33222 connected to 103.136.x.x port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  3.47 MBytes  29.1 Mbits/sec    3   1.41 KBytes     
[  4]   1.00-2.00   sec  3.67 MBytes  30.8 Mbits/sec    1    175 KBytes     
[  4]   2.00-3.00   sec  64.7 MBytes   543 Mbits/sec   24   1.41 KBytes     
[  4]   3.00-4.00   sec   200 MBytes  1.68 Gbits/sec  258    652 KBytes     
[  4]   4.00-5.00   sec  31.2 MBytes   262 Mbits/sec    3   1.41 KBytes     
[  4]   5.00-6.00   sec  86.2 MBytes   724 Mbits/sec   33   1.21 MBytes     
[  4]   6.00-7.00   sec   135 MBytes  1.13 Gbits/sec    1   1.89 MBytes     
[  4]   7.00-8.00   sec   281 MBytes  2.36 Gbits/sec  135   2.77 MBytes     
[  4]   8.00-9.00   sec   248 MBytes  2.08 Gbits/sec   46   3.01 MBytes     
[  4]   9.00-10.00  sec  6.25 MBytes  52.4 Mbits/sec    2   1.41 KBytes     
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.03 GBytes   889 Mbits/sec  506             sender
[  4]   0.00-10.00  sec  1.02 GBytes   880 Mbits/sec                  receiver

Is this expected? The bandwidth should be more than 1 Gbps.
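For reference, one way to rule out a single TCP stream being the limit is to repeat the test with several parallel streams; a sketch, using the same masked target address:

Code:
# Run 4 parallel TCP streams for 30 seconds against the PVE node
# (103.136.x.x is the masked address from above).
iperf3 -c 103.136.x.x -P 4 -t 30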
 
I'm experiencing painfully slow transfers as well. Did you have any issues running iperf3? I set up the server and the client, but they never connect to each other. I can ping the machines and have turned off the firewall just in case. I'm trying to go from one Proxmox node to another.

Looking forward to seeing whether you are able to improve your speeds.
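In case it helps, the minimal iperf3 setup between two nodes is just a server on one side and a client on the other; a sketch with a placeholder address (192.0.2.10):

Code:
# On the first node: start the iperf3 server (listens on TCP 5201 by default).
iperf3 -s
# On the second node: connect to the first node's IP (placeholder address).
iperf3 -c 192.0.2.10
# If the client still cannot connect, check that TCP 5201 is reachable:
nc -vz 192.0.2.10 5201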
 
After some testing, I tried increasing the MTU from 1500 to 9000, and my iperf3 result now reaches 3-4 Gbit/s.
I also changed the multi-queue setting on the interface from the default to 8 (the total vCPU count of the VM).
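The changes were roughly of this form (VM ID 101, bridge vmbr0, and the interface names are placeholders, not my exact configuration):

Code:
# Raise the MTU on the host bridge and inside the guest (placeholder names).
ip link set dev vmbr0 mtu 9000   # on the PVE host
ip link set dev eth0 mtu 9000    # inside the guest
# Give the VirtIO NIC one queue per vCPU (placeholder VM ID 101).
qm set 101 --net0 virtio,bridge=vmbr0,queues=8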

I still see some packet loss at VM B from the client (blue line) to the internet.
The utilization at VM B is still low compared to the iperf3 result.

Code:
--- 8.8.8.8 ping statistics ---
86 packets transmitted, 85 received, 1% packet loss, time 85107ms
rtt min/avg/max/mdev = 13.278/15.785/53.266/6.982 ms
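Alongside the ping results, it may be worth checking whether the host bridge or guest interfaces report drops; a sketch with placeholder interface names:

Code:
# RX/TX errors and drops on the host bridge (placeholder name vmbr0):
ip -s link show dev vmbr0
# Same counters inside the guest (placeholder name eth0):
ip -s link show dev eth0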
 

Attachments

  • Screen Shot 2020-05-03 at 22.46.13.png
