Guest traffic speed issue

kev974

Hello everyone,

I'm doing some tests with PVE 5.2-1 and facing an issue.
I've configured a 3-node cluster with dual-port 10G Intel X520-T2 NICs, connected to an EMC Unity all-flash array.

I've run iperf tests between the nodes and the results were pretty good:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.198.215 port 5001 connected with 192.168.198.216 port 55658
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 11.5 GBytes 9.88 Gbits/sec
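
For reference, this was a plain iperf2 run between the nodes, roughly like this (IPs as in the output above):

# on node 1 (server side)
iperf -s
# on node 2 (client side)
iperf -c 192.168.198.215 -t 10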


I've configured 3 VMs (Windows Server 2016 and Windows Server 2012 R2) with VirtIO drivers 0.1.141, which seems to be the latest stable version (I tried 0.1.160 but had blue screens).

VM1 is on node 1 and VM2 is on node 2; their NICs are connected to vmbr198 as VirtIO devices, and the network cards inside the guest OS show a 10 Gbit/s link speed.
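
The VM network devices look roughly like this in the VM config (VM ID 101 and the MAC address are just examples):

qm config 101 | grep ^net
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr198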

When copying files from VM1-Disk1 to VM1-Disk2 the speed is correct (500 MB/s), but when copying from VM1:Disk1 to VM2:DiskX the speed is only about 80 MB/s,
even though iperf from VM1 to VM2 shows 820 Mbit/s (screenshot attached).

I've tried disabling LRO and LSO on the nodes and in the guest VMs, but I never got more than 120 MB/s.
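
On the nodes I toggled the offloads with ethtool, roughly like this (interface name from my setup; the exact feature names can differ per driver):

ethtool -K enp65s0f0 lro off tso off gso off
ethtool -k enp65s0f0 | grep -E 'large-receive-offload|segmentation-offload'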

I've tried with Linux VMs and it seems to be the same.

I've tried putting both VMs on the same node, but iperf still shows only 880 Mbit/s.
iptraf shows the traffic passing through enp65s0f0, which is the 10 Gb NIC.
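
To double-check which interfaces the VM tap devices are attached to, I list the bridge membership, for example:

brctl show vmbr198
ip -d link show vmbr198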

What am I doing wrong?

Has anyone ever faced this problem?

Here is my /etc/network/interfaces:

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet static

auto eno3
iface eno3 inet static
address 10.10.0.216
netmask 255.255.255.0

auto eno4
iface eno4 inet static
address 10.10.1.216
netmask 255.255.255.0

auto enp3s0f0
iface enp3s0f0 inet static
address 192.168.201.216
netmask 255.255.255.0
mtu 9000

auto enp3s0f1
iface enp3s0f1 inet static
address 192.168.202.216
netmask 255.255.255.0
mtu 9000

auto enp65s0f1
iface enp65s0f1 inet manual
mtu 9000

auto enp65s0f0
iface enp65s0f0 inet manual
mtu 9000

auto vlan1
iface vlan1 inet manual
vlan_raw_device enp65s0f0
mtu 9000

auto vlan198
iface vlan198 inet manual
vlan_raw_device enp65s0f0
mtu 9000

auto vlan213
iface vlan213 inet manual
vlan_raw_device enp65s0f0
mtu 9000

auto vmbr0
iface vmbr0 inet static
address 192.168.97.216
netmask 255.255.255.0
gateway 192.168.97.250
bridge_ports eno1
bridge_stp off
bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
bridge_ports vlan1
bridge_stp off
bridge_fd 0
mtu 9000

auto vmbr2
iface vmbr2 inet static
address 192.168.199.216
netmask 255.255.255.0
bridge_ports enp65s0f1
bridge_stp off
bridge_fd 0
mtu 9000

auto vmbr198
iface vmbr198 inet static
address 192.168.198.216
netmask 255.255.255.0
bridge_ports vlan198
bridge_stp off
bridge_fd 0
mtu 9000

auto vmbr213
iface vmbr213 inet manual
bridge_ports vlan213
bridge_stp off
bridge_fd 0
mtu 9000

Thank you for your help.
 

Attachments

  • iperf.PNG
  • iptraf.PNG

Disk access bandwidth does not depend only on network speed but also on the type of (physical) disk, type of storage, type of cache you use, etc. First it has to be figured out where the bottleneck is.
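
For example, raw disk throughput inside a Linux guest can be measured independently of the network with fio; a minimal sketch (file path and sizes are just examples):

fio --name=seqwrite --filename=/root/fio-test --rw=write --bs=1M --size=4G --direct=1 --ioengine=libaio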
 

Thanks for your answer, Richard.

I've tried all cache types and writeback seems to be the best, but from one VM to another the speed is still only about 1 Gbit/s. I tried IDE and VirtIO with the latest stable driver, 0.1.141.

So:
Speed from local VM disk to local VM disk is about 500 MB/s
Speed between VMs on the same host is about 80 MB/s
Speed between VMs on different nodes is about 80 MB/s

iperf between nodes is about 10 Gbit/s
A "Move disk" operation runs at about 300 MB/s

I think the issue is linked to the network emulation (VirtIO?) or to how the host handles VM network traffic.
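
One thing I still want to try is enabling multiqueue on the VirtIO NIC, roughly like this (VM ID 101 and 4 queues are just examples; keep the existing MAC in the option if needed, otherwise a new one is generated):

qm set 101 --net0 virtio,bridge=vmbr198,queues=4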

Do you see anything more to check?

Thank you
 
