Two-day benchmark: KVM, VMware, Xen, Hyper-V

vkeven

Active Member
Dec 22, 2009
Hi, I'm still benchmarking KVM. This time I tried all four usable products, and I discovered that the major limiting factor for KVM is probably network I/O.

Server config: ML150 G5, Xeon 5405, 2 GB RAM, 250 GB SATA RAID1 on an Intel LSI1068E with write cache enabled

I installed each product on this server with a Windows 2008 32-bit VM to run my network and storage tests. I used iperf to measure network I/O capacity, plus HD Tach and HD Tune, and a file write/read against a file server to test real-life performance.
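For reference, an iperf throughput test like the one described runs with one end as a server and the other as a client. This is a sketch; the hostname and durations are placeholders, not from the original post:

```shell
# On the file server, start an iperf server (TCP, default port 5001):
iperf -s

# On the host or inside the VM, run a 60-second throughput test against it,
# reporting every 10 seconds (replace 'fileserver' with the real address):
iperf -c fileserver -t 60 -i 10
```

iperf reports results in Mbit/s by default, which is why the numbers below are read as megabits, not megabytes.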

Network I/O (iperf)

Host native to file server = 940 Mbit/s, almost wire speed

Xen = 680 Mbit/s
Hyper-V = 620 Mbit/s
VMware = 580 Mbit/s
KVM/Proxmox = 370 Mbit/s with e1000 and 300 Mbit/s with virtio-net

Disk I/O (hdparm)

Host native with a quick hdparm gives me 73 MB/s

Xen = 65 MB/s
Hyper-V = 55 MB/s
VMware = 50 MB/s
KVM = 37 MB/s (stuck at roughly the network I/O speed)

(I checked and rechecked with a file larger than my RAM to be sure.)
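Reading a file larger than RAM is the usual way to defeat the page cache. A minimal sketch of such a check (file name and size are placeholders; dropping the cache requires root):

```shell
# Create a 4 GB test file, larger than the 2 GB of RAM in this server:
dd if=/dev/zero of=/tmp/bigfile bs=1M count=4096

# Optionally drop the page cache first so the read hits the disk:
sync; echo 3 > /proc/sys/vm/drop_caches

# Time a sequential read of the whole file; dd prints the throughput:
dd if=/tmp/bigfile of=/dev/null bs=1M
```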

Has anyone reached numbers like 500-600 Mbit/s network speed with KVM/Proxmox?
 
So does everyone just install KVM and not care about performance?
Hi!
No, performance is important - if you use the search function you will find many posts (depending on the keywords).
You are right that KVM "eats" performance - this is one of the reasons why many people love Proxmox: you can also use OpenVZ. But with KVM many cases are easier (external storage, different kernels, migration from/to a physical host...).

My network I/O tests are better than yours: http://forum.proxmox.com/threads/2940-Bad-Network-Performance-with-KVM-Guests?p=16907#post16907
KVM I/O needs (at this time) a lot of CPU power and uses only one core. If your CPU does not have enough power (on one core), that is the limiting factor.

Udo
 
I must confess that I love Proxmox, but purely because of this I have moved to XenServer. I really miss Proxmox's features like easy backup, moving between hosts without needing shared storage, the web-based GUI, etc.

But, to be honest, my virtualised DB performs significantly better on XenServer, and the ability to template these machines so I can quickly create new ones is very compelling.

I want to come 'home' to Proxmox, but until the KVM I/O performance improves, I have no choice.
 


Exactly my situation. I want the power of Debian/Proxmox, like managing a UPS correctly and doing backups, but the performance bottleneck is a real problem for me here. I'm even ready to buy a Red Hat Virtualization licence just to obtain the 64-bit signed drivers, but the performance is killing me. So for now I will keep one Proxmox for my personal use alongside a Xen 5.5, and time will tell whether this gets installed at my customers' sites.

Proxmox, please keep up the good work - this product will be a real VMware Essentials killer for SMBs.
 
I don't think the Red Hat drivers are that different from the open-source ones, are they? I did all my testing on 32-bit OSes.
 
They are not really different, but Windows x64, either 2008 or 7, needs signed drivers or it will not survive a reboot. You can't use the freely available drivers to install virtio block/net on Windows x64 because it will simply refuse to use them and you will get a boot failure at the next reboot. You could activate test mode and self-sign the drivers yourself, but trust me, you don't want to do this. As a last resort, or for testing, you could press F8 at boot and choose "do not verify if drivers are signed" or something like that, but you would need to do this at each boot.
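For completeness, the "test mode" mentioned above is toggled on Windows Vista/2008 and later with bcdedit from an elevated command prompt (a reboot is required, and the desktop then shows a Test Mode watermark - exactly the setup the poster advises against for production):

```bat
REM Enable test-signing so self-signed drivers are allowed to load:
bcdedit /set testsigning on

REM To turn it back off later:
bcdedit /set testsigning off
```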
 
Local file server to host = 940 Mbit/s on a brand-new 1000Base-T managed HP ProCurve switch, so there are no network problems. And we are talking here about a Xeon 5405, so there is nothing slow about CPU processing power. Also, this is no el-cheapo "everything on one IRQ" desktop motherboard with a Realtek or Marvell chip - it's an HP motherboard with a server chipset and a Broadcom network chip. Maybe an Intel chip could give better performance? If time permits I will put in an Intel card just to find out.
 
vkeven,

I think the problem is not with KVM, but rather with the availability of signed paravirtualized drivers for Windows 2008/7.

You don't say, but I assume you are using paravirtualized drivers on Xen, ESX or Hyper-V for these tests?

Such network and block device drivers exist. They were developed by Red Hat and released last September, and the drivers were signed by Microsoft last November. It is the virtio-win package. Unfortunately, it is not in the main repository but in an optional one, which requires a license from Red Hat to access, and the drivers are not available from CentOS, for example.

People who have tested these drivers have reported a 2x improvement in network and block device I/O (see threads on this list). So I think the performance with KVM would not be very different from Xen, VMware or Hyper-V if such drivers were used.

I agree with you: the fact that these drivers are not easily available will prevent some people with Windows machines to virtualize from using KVM. I thought myself of buying a RHEL 5.4 license just to get the ISO of these drivers...

But there is perhaps some hope. These drivers are GPLed, and there was recently an interesting discussion on the Fedora mailing list concerning their availability:
http://lists.fedoraproject.org/pipermail/advisory-board/2010-January/007887.html

So perhaps the next release of Fedora (May?) will make these drivers available for download, and one possibility would be to install a Fedora machine to get them?

Alain
 
Yes, I tested with signed drivers for Xen, VMware and Hyper-V. Red Hat closed access to those drivers and you can't even download a trial version of their virtualization products, so you need to pay $500 just to find out. You can try VMware fully functional, Xen (they even give XenDesktop for 10 users), and Hyper-V is free - but not Red Hat. Anyway, Proxmox is more functional than a Red Hat KVM managed with a Windows console (yes, their interface is Windows-only). The only difference between signed and unsigned drivers should be the "approved by Windows" part - how many companies sell Windows drivers? Anyway, open source is not like the old days, and KVM is slow, that's it.
 
anyway open source is not like the old days and KVM is slow, that's it.

vkeven,

You cannot say that KVM per se is slow. I am sure that if you reproduce your tests with a Linux machine and virtio network and storage, you will have much better results.
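For reference, on a plain qemu-kvm command line of that era, the virtio devices Alain refers to are selected roughly as follows. This is a sketch: the memory size, disk path, MAC address and tap name are hypothetical, and Proxmox normally generates these options itself from the VM configuration file:

```shell
# Sketch: boot a Linux guest with a virtio block device and a virtio NIC
kvm -m 768 \
    -drive file=/var/lib/vz/images/101/vm-101-disk.raw,if=virtio \
    -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
    -net tap,ifname=tap101i0
```

Inside the guest, the disk then appears as /dev/vda and the NIC uses the virtio_net driver instead of an emulated e1000.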

But I agree that the virtio-win drivers should be made available, as they are under the GPL. And they could be improved: one thing I miss, for example, is being able to correctly shut down the VM from the interface. Video graphics could be better too...

Alain
 

You're right, I will do my homework. I will test with a Linux VM and retest with an Intel network chip. Once done I will post my results.
 
OK, some new tests with a Debian Lenny 32-bit VM. First, good news: virtio-net is WAY better on Linux than on Windows. Here are the results.

Network Speed:

proxmox host -> file server = 940 Mbit/s (wire speed)

debian lenny 32 VM -> file server = 700 Mbit/s !!! Twice the speed of the Windows virtio-net!

Disk speed (raw file disk):

hdparm on proxmox host = 73 MB/s
hdparm on lenny VM = 32 MB/s

To be sure I checked with bonnie++ - still about 1/3 of the host's performance, so similar to Windows.

Code:
proxmox host: proxmox 4G 42096 64 57380 14 25059  3 49341 65  52070  4 160.6  0
lenny vm:     debian  1G 19082 42 17565  5 28044  7 49025 95 342213 53 11995 62

If someone has the Red Hat signed drivers I would be more than happy to test them - PM me.
 
The network speed is very good, but the disk I/O is disappointing. Could you post the Proxmox VE configuration you used, particularly the kernel (Proxmox 1.5, kernel 2.6.18?)

pveversion -v ?

Could you also post the results of Proxmox's own test:
pveperf -v ?

Alain
 
Hi,
I made a test for another purpose, and my disk I/O doesn't differ that much:

On the proxmox-host:
Code:
proxmox1:/var/lib/vz/images# bonnie++ -u root -f -n 0 -r 8000 -d test
Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
proxmox1        16G           222398  26 117693  10           390945  19 786.4   0
proxmox1,16G,,,222398,26,117693,10,,,390945,19,786.4,0,,,,,,,,,,,,,

On a Linux VM (1 CPU + 768 MB RAM) with the virtio driver:
Code:
root@www:/var/mail # bonnie++ -u root -f -n 0 -r 8000 -d test
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
www             16G           73689   9 104462  16           474612  43  1382   3
www,16G,,,73689,9,104462,16,,,474612,43,1382.1,3,,,,,,,,,,,,,

Udo
 
proxmox:~# pveversion -v
pve-manager: 1.5-8 (pve-manager/1.5/4674)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
qemu-server: 1.1-11
pve-firmware: 1.0-3
libpve-storage-perl: 1.0-10
vncterm: 0.9-2
vzctl: 3.0.23-1pve8
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5

I don't know why, but I get this from pveperf - apt-get update/upgrade, same problem. Anyway, I ran it just after installation and the FSYNCS/SECOND was around 1800-2000, so yes, write-back is enabled on my RAID card.

proxmox:~# pveperf -v
CPU BOGOMIPS: 18619.97
REGEX/SECOND: 737748
HD SIZE: 36.42 GB (/dev/pve/root)
BUFFERED READS: 75.43 MB/sec
AVERAGE SEEK TIME: 10.80 ms
rm: invalid option -- /
Try `rm --help' for more information.
open failed at /usr/bin/pveperf line 82.
proxmox:~#

There is only one thing left that I need to test: changing the raw file to an LVM volume for the VM hard disk.
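A sketch of what that LVM test could look like. The volume group name, VM id and size are assumptions based on the default "pve" VG; in practice Proxmox would be told about the volume via its storage configuration rather than by hand:

```shell
# Create a logical volume on the default Proxmox volume group
# to use as a VM disk (size and names are hypothetical):
lvcreate -L 32G -n vm-101-disk-1 pve

# The VM can then use /dev/pve/vm-101-disk-1 instead of a raw file,
# which avoids the extra filesystem layer on the host.
lvs pve
```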
 
Hi,
do you have an alias assigned to rm?
pveperf simply runs rm:
Code:
system ("rm -rf $dir");

Udo
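A quick way to check Udo's suggestion (a sketch; run it interactively in the same shell where pveperf was started, since aliases are per-shell):

```shell
# Check whether 'rm' has been shadowed by an alias or shell function,
# which would break pveperf's internal `system("rm -rf $dir")` call:
type rm

# 'command -v' bypasses aliases/functions and shows the real binary:
command -v rm
```

If `type rm` reports an alias such as `rm -i` or `rm --preserve-root/`, removing it with `unalias rm` before running pveperf should clear the "invalid option" error.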
 
