[SOLVED] Asymmetric network performance

samontetro

Hi,
I'm running some network bandwidth tests on an AlmaLinux VM (RHEL8-based) and I get asymmetric performance.
The test is run with netcat:
server: nc -l 1122 -v >/dev/null
client: dd if=/dev/zero bs=4M count=1024 |nc robin 1122
The server robin is not virtualized (a physical host running CentOS7). Same subnet, same VLAN, 10Gb Ethernet everywhere.
In the VM, I use virtio with one queue (the VM is small: 2 cores, 4GB RAM, minimal Linux install).
The firewall is stopped for the tests.

Writing from my CentOS7 server to the AlmaLinux VM shows an average of 6.4 Gbits/s.
Writing from the AlmaLinux VM to the CentOS7 server shows an average of 3.4 Gbits/s.

What could explain this?
Testing between two physical CentOS7 servers (including the robin server) always shows an average of 6.7 Gbits/s.
Proxmox VE is 4.4 (old, I know, but in production...)

Code:
# ethtool -k ens18
Features for ens18:
rx-checksumming: on [fixed]
tx-checksumming: on
    tx-checksum-ipv4: off [fixed]
    tx-checksum-ip-generic: on
    tx-checksum-ipv6: off [fixed]
    tx-checksum-fcoe-crc: off [fixed]
    tx-checksum-sctp: off [fixed]
scatter-gather: on
    tx-scatter-gather: on
    tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
    tx-tcp-segmentation: on
    tx-tcp-ecn-segmentation: on
    tx-tcp-mangleid-segmentation: off
    tx-tcp6-segmentation: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: on [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-gso-partial: off [fixed]
tx-tunnel-remcsum-segmentation: off [fixed]
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]
tx-gso-list: off [fixed]
rx-gro-list: off
tls-hw-rx-offload: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: off [fixed]
tls-hw-tx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]


Thanks
Patrick
 
I would test with iperf3 on both ends to rule out any limit related to your /dev/zero, dd, or netcat. Also, I would run the same tests (nc and iperf3) between the Proxmox host and "robin". That may help pinpoint where to look next for the bottleneck.

You should get higher numbers for a 10Gb network. These are the speeds I usually get:

Code:
9.90 Gbits/sec – 3556K/975 us

The commands used are:

Code:
client: iperf -c SERVER_IP -i 1 -P 1 -t 20 -e

server: iperf -s -i 1
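
For reference, those commands are for the original iperf; a rough iperf3 equivalent (assuming the same SERVER_IP placeholder) would be the following, where -R reverses the direction so both paths can be measured from the same client:

Code:
server: iperf3 -s -i 1

client: iperf3 -c SERVER_IP -i 1 -P 1 -t 20

client (reverse direction): iperf3 -c SERVER_IP -i 1 -P 1 -t 20 -R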
 
Hi VictorSTS.

Thanks for suggesting iperf3 for more tests. With iperf3, the difference between the two directions is much smaller.
Using only one queue on the VM interface:

VM → remote server: min 8.46 Gbits/sec, max 8.54 Gbits/sec, avg 8.51 Gbits/sec
remote server → VM: min 7.04 Gbits/sec, max 9.40 Gbits/sec, avg 8.21 Gbits/sec

And using 2 queues on the VM interface (2 cores allowed for the VM):

remote server → VM: min 7.04 Gbits/sec, max 9.37 Gbits/sec, avg 8.38 Gbits/sec
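
In case it helps someone, here is a sketch of one way to give the virtio NIC 2 queues, assuming a hypothetical VM ID 100 and the default bridge vmbr0 (re-specify the existing MAC address, otherwise qm set generates a new one):

Code:
# On the Proxmox host (VM ID, MAC and bridge are placeholders):
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=2
# Inside the guest, check and, if needed, raise the combined queue count:
ethtool -l ens18
ethtool -L ens18 combined 2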

Moreover, if from my server I run the iperf3 client with 2 parallel streams (iperf3 -c mostprovi -i 1 -P 2 -t 20), with 2 queues enabled on the VM interface:

remote server → VM: min 9.20 Gbits/sec, max 9.41 Gbits/sec, avg 9.36 Gbits/sec

So it looks like it was a /dev/zero or dd limitation in my initial test, not the network itself.
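
For what it's worth, one way to check that hypothesis is to measure the generator pipeline on its own, without the 10Gb link; a quick sketch (port 1123 is just an arbitrary free port, numbers will vary):

Code:
# How fast can dd alone produce zeros? (no network, no netcat)
dd if=/dev/zero of=/dev/null bs=4M count=1024
# How fast is the dd | nc pipeline over loopback?
nc -l 1123 >/dev/null &
sleep 1
dd if=/dev/zero bs=4M count=1024 | nc localhost 1123

If either of these tops out near the 3-4 Gbits/s seen earlier, the single dd/nc stream rather than the network is the limit.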
 
