KVM network performance: latency vs throughput, do I have to decide?

hverbeek

I just did some tests on PVE 1.7, trying out combinations of kernels 2.6.32-4 and 2.6.35-1 with the virtio NIC vs. the e1000 NIC. The objective of the test was to find the best possible network performance between (Debian Lenny amd64) KVM guests.

Latency was tested with: ping -c 10000 -f <otherGuest>
Throughput was tested with: iperf --format m --time 60 --client <otherGuest>
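
For reference, each iperf run assumes the plain iperf server started on the target guest first; a minimal sketch of one test pair (hostnames are placeholders):

  # on the receiving guest
  iperf -s
  # on the sending guest: 60-second run, report in Mbit/s
  iperf --format m --time 60 --client <otherGuest>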
System under test:

  • 1x KVM host, running PVE 1.7, 4-core Intel i7-920, 12GB RAM
  • 2x KVM guest, running on the one KVM host, 1 CPU, 2GB RAM, virtio block device, cache=none

My conclusions from these very simple tests are:

  • The choice of kernel did not seem to make much difference (I only observed slightly better latency, about 10%, with 2.6.35)
  • Latency is ~3 times better with e1000
  • Throughput is ~3 times better with virtio
  • KVM guests were fully CPU-bound during iperf tests (as expected)
  • KVM guest --> KVM Host throughput is very bad with e1000 (~300Mbit/s), but excellent with virtio (~2.1Gbit/s)
Typical values:

  • e1000: latency 0.300 ms; throughput 600 Mbit/s
  • virtio: latency 0.850 ms; throughput 1.8 Gbit/s

Now my question is: do I have to choose between good latency and good throughput? Or can I somehow improve the latency of virtio, making it the network emulation of choice? Thanks for any advice!

PS: let me know if you want more details about the test setup
 
We are in the middle of KVM 0.14 testing and we see significant improvements here (virtio). Ping is still a bit better with e1000, but only a little, and that may not even be meaningful given how simple this testing is.

I suggest you go with virtio and re-test as soon as we release 0.14 to pvetest. We will release the 0.14 packages as soon as we are ready with them :)
 
Here are some simple benchmark results with KVM 0.14 (pvetest), done on an Intel Modular Server.

Tests are done from one Ubuntu KVM guest to another Ubuntu KVM guest, both running on the same blade with 2.6.35 and KVM 0.14 and using virtio NICs:

  • Latency/ping: 0.10 ms
  • iperf: 10.2 Gbit/s

Great results and improvements!
 
same tests with e1000:

  • Latency/ping: 0.19 ms
  • iperf: 1.27 Gbit/s

virtio is the winner.
 
yes, virtio. I just added it to the post.
 
Did the same but with 2.6.32.

e1000: ping 0.15 ms; iperf 905 Mbit/s
virtio: ping 0.10 ms; iperf 4.2 Gbit/s

so virtio is also the way to go for Linux KVM guests on 2.6.32.
 
If we already have e1000 NICs in our CentOS KVM guests, what is the best way to switch to virtio?
 
Hi,
I had trouble with virtio: one direction is fast, the other way very slow (measured with iperf):

host-a ------> host-b -> vm

All OSes up to date
host-a: proxmox 1.7
host-b: proxmox 1.7 pvetest
vm: squeeze kvm

e1000:
host-a -> vm: 1.32 Gbit/s
vm -> host-a: 536 Mbit/s

virtio:
host-a -> vm: 1.6 Gbit/s
vm -> host-a: 240 Kbit/s

host to host is fast (10 Gbit):
host-a -> host-b: 3.85 Gbit/s
host-b -> host-a: 4.39 Gbit/s

vm to the host (and vice versa) is also fast:
vm -> host-b: 5.34 Gbits/s
host-b -> vm: 6.36 Gbits/s

Since the VM must operate with many hosts in the network, I use e1000!
Or is there an issue that I haven't seen?

Udo
 
Yes, but do not forget to use the same MAC address for the new virtio NIC.
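
For example, a minimal sketch of the change in the VM config file, with an illustrative MAC address (the file is /etc/qemu-server/<vmid>.conf on PVE 1.x and /etc/pve/qemu-server/<vmid>.conf on newer releases; exact key names can differ between versions, so check your existing entry). Shut the guest down before editing:

  # before: e1000 model
  net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0
  # after: only the model changes, the MAC stays the same
  net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0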
 
Well, that is only necessary if you don't want to wait 10 minutes or so for the ARP caches to expire... Or if the host has a static DHCP lease, no?
 
True, but most guests have only one NIC. Or would the udev stuff leave a stale and dead eth0 and give the guest eth1? I have to admit I don't know the udev stuff as well as I'd like...
 
Hi,
Yes, udev renames the NIC, so after the first MAC address change you get eth1 instead of eth0, the next time eth2, and so on. And if /etc/network/interfaces still references eth0, you get no network connection. But you can simply remove the entry in the udev rule, and after a reboot you get eth0 back.
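
For illustration, on Debian/Ubuntu guests of that era the renaming comes from /etc/udev/rules.d/70-persistent-net.rules; a stale entry looks roughly like this (placeholder MAC), and deleting the line for the old MAC (or the whole file) before rebooting gives eth0 back to the new virtio NIC:

  SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="de:ad:be:ef:00:01", KERNEL=="eth*", NAME="eth0"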

Udo
 
Quoting Udo's virtio results from above (host-a -> vm: 1.6 Gbit/s; vm -> host-a: 240 Kbit/s):

I just did more tests with my KVM guests, just to make sure that I have the same setup.

I have two servers (IMS blades) running 2.6.32 (pvetest with KVM 0.14) and KVM virtio networking. On the second blade I run a KVM guest, e.g. Ubuntu 10.10. Now I am doing iperf tests from this guest to the first blade.

Results in both directions: 940 Mbit/s (I have only Gbit LAN in my IMS, so that is just perfect).

so where is the difference in our tests?
 
Sorry to be asking this again, but what are your guests running?

I just tested my setup here again after upgrading to pvetest (pve-kernel-2.6.35-1-pve: 2.6.35-10, pve-qemu-kvm: 0.14.0-2). My guests are running Debian 5 Lenny amd64 (2.6.26-2-amd64). Now I see no difference between virtio and e1000; in both cases latency is about 0.280 ms and iperf measures about 670 Mbit/s.
Is it a case of too-old virtio modules in the guest?

Edit: after upgrading my guests to Squeeze (2.6.32-5-amd64), network performance is excellent: latency of 0.180 ms and iperf bandwidth of 5.9 Gbit/s. I'm happy!
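
In case it helps anyone else: a quick way to check which driver the guest NIC is actually using (standard tools; eth0 assumed to be the virtio interface):

  # should report driver: virtio_net for a virtio NIC
  ethtool -i eth0
  # or check that the module is loaded at all
  lsmod | grep virtio_net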
 
