Iperf3 @ 10Gb/s! However, SCP Speed at 1Gb/s ??!!

Zubin Singh Parihar

Well-Known Member
Nov 16, 2017
Hi,

I have a Debian 11 VM and a CentOS 7 VM.
The Proxmox PVE host has a 10Gb Ethernet NIC and I'm using Open vSwitch (OVS Bridge, OVS Port and OVS IntPort). All the VMs use VirtIO drivers for their NICs.

On my Debian and CentOS VMs, SCP transfers between the VMs only run at about 1 Gb/s, and ethtool does not show the link speed:
$ ethtool -i eth0
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:12.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

CentOS Output:
$ ethtool -i eth0
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:12.0
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no


However, iperf3 transfers run at roughly 10 Gb/s.

IPERF3
sparky@proxmox-vm:~$ sudo iperf3 -c 192.168.102.4 -t 20
Connecting to host 192.168.102.4, port 5201
[ 5] local 192.168.102.2 port 44324 connected to 192.168.102.4 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 862 MBytes 7.23 Gbits/sec 586 1.67 MBytes
[ 5] 1.00-2.00 sec 899 MBytes 7.54 Gbits/sec 181 1.67 MBytes
[ 5] 2.00-3.00 sec 1.09 GBytes 9.37 Gbits/sec 32 1.74 MBytes
[ 5] 3.00-4.00 sec 979 MBytes 8.22 Gbits/sec 1 1.77 MBytes
[ 5] 4.00-5.00 sec 881 MBytes 7.39 Gbits/sec 1 1.77 MBytes
[ 5] 5.00-6.00 sec 796 MBytes 6.68 Gbits/sec 135 1.77 MBytes
[ 5] 6.00-7.00 sec 1.38 GBytes 11.9 Gbits/sec 1430 2.00 MBytes
[ 5] 7.00-8.00 sec 966 MBytes 8.11 Gbits/sec 1 2.00 MBytes
[ 5] 8.00-9.00 sec 1.29 GBytes 11.1 Gbits/sec 86 2.22 MBytes
[ 5] 9.00-10.00 sec 1.70 GBytes 14.6 Gbits/sec 0 2.29 MBytes
[ 5] 10.00-11.00 sec 1.24 GBytes 10.6 Gbits/sec 0 2.29 MBytes
[ 5] 11.00-12.00 sec 1.93 GBytes 16.6 Gbits/sec 1 2.29 MBytes
[ 5] 12.00-13.00 sec 1.38 GBytes 11.9 Gbits/sec 0 2.29 MBytes
[ 5] 13.00-14.00 sec 910 MBytes 7.64 Gbits/sec 45 2.29 MBytes
[ 5] 14.00-15.00 sec 861 MBytes 7.22 Gbits/sec 0 2.29 MBytes
[ 5] 15.00-16.00 sec 1009 MBytes 8.46 Gbits/sec 45 2.29 MBytes
[ 5] 16.00-17.00 sec 979 MBytes 8.20 Gbits/sec 0 2.30 MBytes
[ 5] 17.00-18.00 sec 891 MBytes 7.48 Gbits/sec 0 2.30 MBytes
[ 5] 18.00-19.00 sec 869 MBytes 7.29 Gbits/sec 45 2.31 MBytes
[ 5] 19.00-20.00 sec 765 MBytes 6.42 Gbits/sec 46 2.31 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-20.00 sec 21.4 GBytes 9.19 Gbits/sec 2635 sender
[ 5] 0.00-20.00 sec 21.4 GBytes 9.19 Gbits/sec receiver

iperf Done.

(where 192.168.102.4 is the CentOS 7 VM)


SCP
sparky@proxmox-vm:~$ scp CentOS-7-x86_64-DVD-2207-02.iso root@192.168.102.4:/distros
Password:
CentOS-7-x86_64-DVD-2207-02.iso    10%  489MB  47.0MB/s   01:25 ETA


What do I need to do to:
  1. Display the speed capacity of eth0?
  2. Get SCP transfers between the VMs running at 10 Gb/s?
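For question 1, a couple of commands worth trying (this is a sketch and assumes a reasonably recent virtio_net driver; older ones simply don't advertise a link speed at all):

```shell
# `ethtool -i` only prints driver info; plain `ethtool` shows link settings.
# On virtio NICs the Speed line may read "Unknown!" unless the driver
# supports the VIRTIO_NET_F_SPEED_DUPLEX feature.
ethtool eth0 | grep -i speed

# The kernel also exposes the claimed speed in sysfs; it reads -1 (or
# errors out) when the driver doesn't report one.
cat /sys/class/net/eth0/speed
```

Either way, a virtio NIC is paravirtualized, so the number it reports (or doesn't) has no bearing on the real throughput you measured with iperf3.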
 
[ 5] 9.00-10.00 sec 1.70 GBytes 14.6 Gbits/sec 0 2.29 MBytes

Do you have link aggregation with dual 10Gb NICs? If you don't have that or something faster, those numbers aren't completely accurate. Nice speeds, though: an average of 9.19 Gb/s [1.14875 GB/s] between the two network interfaces.

This should help you with your first question and give you some information you will need for the following.

Now consider bus bandwidth, controller bandwidth, drive bandwidth, and CPU overhead for encryption (your SCP transfer is encrypted, which takes CPU time and slows things down). SATA drives top out at a theoretical 6 Gb/s with no seeking (SAS-3 at 12 Gb/s); faster NVMe drives connected via the PCIe bus should be able to keep up. There are lots of potential bottlenecks, and with thorough testing (google [thing] [bandwidth] benchmarking debian) you can probably figure out which one is the slowest link, but it takes time.
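One way to narrow it down, sketched below using the filename and IP from this thread (adjust to taste); each command isolates one link in the chain:

```shell
# 1. Disk read speed alone: drop the page cache, then read the ISO locally.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
dd if=CentOS-7-x86_64-DVD-2207-02.iso of=/dev/null bs=1M

# 2. SSH/cipher speed alone: stream zeros over ssh, no disk on either end.
dd if=/dev/zero bs=1M count=2048 | ssh root@192.168.102.4 'cat > /dev/null'

# 3. If the cipher is the bottleneck, try an AEAD cipher with lower CPU cost.
scp -c aes128-gcm@openssh.com CentOS-7-x86_64-DVD-2207-02.iso root@192.168.102.4:/distros
```

If step 2 is much slower than your iperf3 result, the ssh cipher/MAC is the bottleneck; if step 1 is slow, it's the disk.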