VMs not running at full Gigabit speeds?

jolebole

Renowned Member
Feb 7, 2016
Hello all,

I ran some iperf tests from my desktop to the Proxmox VMs, and between the VMs themselves, and the results are not what I expected.

An iperf test from the Fedora workstation to the Proxmox host's physical NIC is close to gigabit

Client connecting to 10.0.1.7, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[ 3] local 10.0.1.13 port 44882 connected with 10.0.1.7 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 114 MBytes 956 Mbits/sec
[ 3] 1.0- 2.0 sec 112 MBytes 942 Mbits/sec
[ 3] 2.0- 3.0 sec 112 MBytes 940 Mbits/sec
[ 3] 3.0- 4.0 sec 113 MBytes 945 Mbits/sec
[ 3] 4.0- 5.0 sec 112 MBytes 941 Mbits/sec
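
For reference, all of these runs use a plain iperf2 client against an iperf server on the target, roughly like this (the 5 x 1-second reporting matches the output above; exact options may have varied per run):

# on the target (Proxmox host or VM)
iperf -s

# on the Fedora workstation
iperf -c 10.0.1.7 -t 5 -i 1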


From the workstation to the pfSense VM it shows only about half a gigabit

Client connecting to 10.0.1.1, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[ 3] local 10.0.1.7 port 51204 connected with 10.0.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 52.8 MBytes 442 Mbits/sec
[ 3] 1.0- 2.0 sec 49.5 MBytes 415 Mbits/sec
[ 3] 2.0- 3.0 sec 50.6 MBytes 425 Mbits/sec
[ 3] 3.0- 4.0 sec 50.5 MBytes 424 Mbits/sec
[ 3] 4.0- 5.0 sec 49.2 MBytes 413 Mbits/sec

From a CentOS VM to the pfSense VM it is less than half a gigabit

Client connecting to 10.0.1.1, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[ 3] local 10.0.1.7 port 51204 connected with 10.0.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 52.8 MBytes 442 Mbits/sec
[ 3] 1.0- 2.0 sec 49.5 MBytes 415 Mbits/sec
[ 3] 2.0- 3.0 sec 50.6 MBytes 425 Mbits/sec
[ 3] 3.0- 4.0 sec 50.5 MBytes 424 Mbits/sec
[ 3] 4.0- 5.0 sec 49.2 MBytes 413 Mbits/sec


From the CentOS VM to the physical Proxmox NIC it is somewhat variable, but still far from full gigabit

Client connecting to 10.0.1.7, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[ 3] local 10.0.1.10 port 40900 connected with 10.0.1.7 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 83.4 MBytes 699 Mbits/sec
[ 3] 1.0- 2.0 sec 84.2 MBytes 707 Mbits/sec
[ 3] 2.0- 3.0 sec 65.5 MBytes 549 Mbits/sec
[ 3] 3.0- 4.0 sec 69.4 MBytes 582 Mbits/sec
[ 3] 4.0- 5.0 sec 61.0 MBytes 512 Mbits/sec

I changed the VM NIC model from VirtIO to Intel and got the same iperf results.
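
In case it matters, I switched the NIC model from the CLI rather than the GUI, with something along these lines (the VM ID 101 and bridge vmbr0 are just examples from my setup):

# switch net0 to the Intel e1000 model
qm set 101 --net0 e1000,bridge=vmbr0
# and back to VirtIO for comparison
qm set 101 --net0 virtio,bridge=vmbr0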

A 4 GB file copy from a Synology NAS to my Windows 2012 VM runs at 105 MB/s (which looks about right for gigabit).
The same file copy from my FreeNAS shows about the same speed. So why is iperf showing slower speeds?
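
Rough math on why the file copies still look like a full pipe: 105 MB/s x 8 = 840 Mbit/s of file payload, and the practical TCP payload ceiling on gigabit is only around 940 Mbit/s (~117 MB/s) once Ethernet/IP/TCP overhead is subtracted, so those copies are essentially saturating the link even though the single-stream iperf runs to the VMs are not.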

My hardware:
First Proxmox host is a Supermicro 1U Xeon server with 8 GB RAM and dual Intel gigabit NICs
- the pfSense VM has 1 CPU socket (4 cores) and 4 GB RAM
Second Proxmox host is a Dell R610 dual-Xeon 1U server with 64 GB RAM and 4 Broadcom gigabit NICs
- most VMs are running the Intel NIC model, some VirtIO

My physical network runs on a Ubiquiti 24-port Gigabit PoE EdgeSwitch. All ports have jumbo frames enabled (MTU 9000).
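
The bridges on the hosts are plain Linux bridges in /etc/network/interfaces; for jumbo frames to matter end to end, the bridge MTU has to match the switch. A sketch of what that looks like on the first host (the eth0 interface name and the gateway are assumptions about my setup):

auto vmbr0
iface vmbr0 inet static
        address 10.0.1.7
        netmask 255.255.255.0
        gateway 10.0.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        mtu 9000

# quick check of what the bridge is actually running with
ip link show vmbr0 | grep mtu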
 
I cannot reproduce those results: from a FreeBSD VM to a Linux VM I got

------------------------------------------------------------
[ 3] local 192.168.16.24 port 61552 connected with 192.168.16.75 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 180 GBytes 25.8 Gbits/sec

using virtio drivers

If you're in bridged mode, remember that the host CPU can be a bottleneck, as it has to inspect *every* frame arriving on the bridge to work out the destination.
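
An easy way to see that is to watch the host while the test runs; with VirtIO NICs the bridge/vhost work shows up as kvm and vhost-<pid> threads, and if one of them is pinned at 100% of a core you have found your ceiling:

# on the Proxmox host, during the iperf run
top -H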
 
More testing. I installed three test VMs: one with a SATA disk and the Intel NIC, one with VirtIO for both disk and NIC on the local RAID volume, and one with VirtIO for both on an NFS share on FreeNAS. So far the VirtIO VMs have the best network performance. I will run more tests today on a second server with dual Xeon E5620s. My servers are not the latest and greatest 8-core CPUs, but I am only testing network performance. How fast do I need to go to get gigabit throughput, let alone 10GbE?
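
For the record, the three test VMs were set up roughly like this (the VM IDs and the storage names local-lvm / freenas-nfs are placeholders for my local RAID volume and the FreeNAS NFS share):

# VM 201: SATA disk + Intel (e1000) NIC
qm set 201 --sata0 local-lvm:32 --net0 e1000,bridge=vmbr0
# VM 202: VirtIO disk + VirtIO NIC, local RAID volume
qm set 202 --virtio0 local-lvm:32 --net0 virtio,bridge=vmbr0
# VM 203: VirtIO disk + VirtIO NIC, NFS share on FreeNAS
qm set 203 --virtio0 freenas-nfs:32 --net0 virtio,bridge=vmbr0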

What hardware are you running? The results you are getting are insane! :eek:
 
The VirtIO NIC is a 10 Gbit NIC, so network traffic between two VMs on the same node should reach that speed.
No network hardware is involved when doing so; it is purely work for the host CPU.
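
You can confirm from inside a Linux guest that it is really on the paravirtualized NIC, for example (eth0 is whatever the guest calls its interface):

# the driver line should read virtio_net when the VM's NIC model is VirtIO
ethtool -i eth0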
 
I managed to get full gigabit speeds between two Windows Server VMs on the same host, but not faster, even though the NICs show as 10 Gbit. I wiped the server completely and installed Hyper-V with two Windows Server VMs, and between those I averaged 1.8 Gbit/s with a peak of 2.2 Gbit/s. Since, as you said, no network hardware is involved here, could I have a hardware limitation on the PCIe bus, or on the PERC card? The R610 has 6 SAS drives in RAID 5 and the R710 has 4 x 1 TB SATA in RAID 10. Something is definitely a bottleneck. I have two 10GbE fiber cards from my Proxmox to the FreeNAS, and between those physical NICs I get the full 10GbE speed.
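
A quick check I can still do is whether a single TCP stream is the limit; iperf can drive several streams in parallel, roughly like this (server IP is a placeholder):

# aggregate of 4 parallel streams over 30 seconds
iperf -c <server-ip> -P 4 -t 30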
 
