Packet loss when using iperf3 (virtio)

phony12

New Member
Mar 30, 2018
Hello All,

We are experiencing packet loss when using iperf3 (TCP) from a VM using virtio.
We have used fedora/27 and ubuntu/16.04 as guest images.
Our Proxmox cluster version is 5.1-42. We are using Open vSwitch.

iperf3 server command: iperf3 -s
iperf3 client command: iperf3 -c <server> -t 60
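Since TCP retransmit counters only show that some segments were lost somewhere, a few extra iperf3 runs can help narrow down where. These are standard iperf3 flags; `<server>` is a placeholder as above, and the bandwidth/stream values are just illustrative choices:

```shell
# UDP test: reports actual datagram loss and jitter instead of TCP retransmits
iperf3 -c <server> -u -b 5G -t 60

# TCP with several parallel streams, to see whether loss scales with load
iperf3 -c <server> -P 4 -t 60

# Reverse direction (server sends to client), to check whether loss is asymmetric
iperf3 -c <server> -R -t 60
```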

vm: virtio, 2 sockets / 2 cores, 8 GB RAM
proxmox_host: 2 sockets / 48 cores (Xeon E5-2650L), 188 GB RAM, 2x Broadcom 10 Gbit
machine: 8 cores, 32 GB RAM, 2x 10 Gbit

[proxmox_host[vm]]->[machine]

iperf3 from proxmox_host to machine: small loss.
iperf3 from vm to machine: many retransmits (iperf stats).

When we change the NIC on the VM to e1000 (1 Gbit) there is no packet loss.
When we change the NIC on the VM to vmxnet3 (10 Gbit) there is little packet loss, but the speeds are lower.
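In case it helps to narrow this down: two things commonly checked when virtio shows heavy retransmits are multiqueue on the virtio NIC and segmentation offloads inside the guest. These are diagnostics, not a confirmed fix; the VM ID 100, interface name eth0, and queue count are placeholders:

```shell
# On the Proxmox host: give the virtio NIC one queue per vCPU (VM ID 100 is a placeholder)
qm set 100 -net0 virtio,bridge=vmbr0,queues=4

# Inside the guest: inspect, then temporarily disable, segmentation offloads
ethtool -k eth0                          # show current offload settings
ethtool -K eth0 tso off gso off gro off  # toggle off for a test run

# Inside the guest: check whether packets are being dropped at the interface
ip -s link show eth0
```

If disabling offloads changes the retransmit count noticeably, that points at the virtio/OVS offload path rather than the physical network.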


Code:
from proxmox_host to physical machine

[ ID] Interval        Transfer     Bandwidth         Retr
[  4] 0,00-60,00 sec  65,6 GBytes  9,39 Gbits/sec       0  sender
[  4] 0,00-60,00 sec  65,6 GBytes  9,39 Gbits/sec      43  sender
[  4] 0,00-60,00 sec  65,6 GBytes  9,39 Gbits/sec     176  sender
average               65,6 GBytes  9,39 Gbits/sec      73  sender

vm to vm on the same proxmox host

[ ID] Interval        Transfer     Bandwidth         Retr
[  5] 0,00-60,03 sec  62,3 GBytes  8,91 Gbits/sec   71102  sender
[  5] 0,00-60,03 sec  62,8 GBytes  8,98 Gbits/sec   75217  sender
[  4] 0,00-60,00 sec  50,5 GBytes  7,23 Gbits/sec   78157  sender
average               58,53 GBytes 8,37 Gbits/sec   74825  sender

vm to physical machine (e1000, vmxnet3, virtio)

e1000 (89 solo):
[ ID] Interval        Transfer     Bandwidth         Retr
[  4] 0,00-60,00 sec  11,7 GBytes  1,67 Gbits/sec       8  sender
[  4] 0,00-60,00 sec  12   GBytes  1,72 Gbits/sec       1  sender
[  4] 0,00-60,01 sec  11   GBytes  1,58 Gbits/sec       0  sender
average               11,57 GBytes 1,66 Gbits/sec       3  sender

vmxnet3 (89 solo):
[ ID] Interval        Transfer     Bandwidth         Retr
[  4] 0,00-60,00 sec  51,8 GBytes  7,42 Gbits/sec    4136  sender
[  4] 0,00-60,00 sec  58,2 GBytes  8,33 Gbits/sec   12487  sender
[  4] 0,00-60,00 sec  57,4 GBytes  8,22 Gbits/sec   12395  sender
average               55,8 GBytes  7,99 Gbits/sec    9673  sender

virtio (89 solo):
[ ID] Interval        Transfer     Bandwidth         Retr
[  4] 0,00-60,00 sec  63,1 GBytes  9,03 Gbits/sec  113888  sender
[  4] 0,00-60,00 sec  64,8 GBytes  9,28 Gbits/sec   56746  sender
[  4] 0,00-60,00 sec  64,6 GBytes  9,25 Gbits/sec   53055  sender
average               64,17 GBytes 9,19 Gbits/sec   74563  sender
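For scale: the virtio average (64,17 GBytes transferred with 74563 retransmits) works out to a retransmit rate of roughly 0,16% of segments, assuming iperf3's GBytes means 2^30 bytes and a typical 1448-byte MSS (1500 MTU minus IP/TCP headers; both are assumptions). A quick sketch of that arithmetic:

```shell
# Estimate the segment-level retransmit rate for the virtio average row.
# Assumes iperf3 GBytes = 2^30 bytes and an MSS of 1448 bytes (assumptions).
awk 'BEGIN {
  bytes    = 64.17 * 1024^3       # 64,17 GBytes transferred
  segments = bytes / 1448         # approximate TCP segments sent
  rate     = 74563 / segments     # fraction of segments retransmitted
  printf "retransmit rate: %.3f%%\n", rate * 100
}'
```

That prints a rate of about 0.157% — small in relative terms, but repeated loss events like this are enough to keep TCP's congestion window from staying at line rate.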


Are there any tests or suggestions on how to tackle this issue?


Thank you for your time,
 
