Disk I/O Benchmarks of Proxmox / KVM Disk Types

Charlie
Which disk type is faster: QCOW2, RAW, or VMDK? IDE or VirtIO? Cache or no cache?

In order to identify the best disk type and bus method, we needed some rudimentary benchmarks. We previously posted some of this data in another thread; however, this version has additional information and details and is KVM (Proxmox) specific. Additional benchmarks on CPU usage will follow soon.

Here are the KVM disk types utilized by Proxmox VE - http://www.imagehousing.com/image/813009. In the right column for each disk type you will see a NO CACHE option, which means the flag cache=none was used for that particular configuration. To our surprise, the VMDK format with VirtIO was the overall "fastest" setup in this test. This thread has more details on the subject of caching - http://arstechnica.com/civis/viewtopic.php?f=16&t=1143694
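For anyone who wants to map a row of the chart back to actual settings, the format / bus / cache combinations correspond roughly to KVM drive options like the ones below. This is only an illustrative sketch; the VM ID, paths, and file names are examples, not our exact configuration:

  -drive file=/var/lib/vz/images/101/vm-101-disk-1.qcow2,if=virtio,cache=none   # QCOW2 on VirtIO, NO CACHE
  -drive file=/var/lib/vz/images/101/vm-101-disk-1.raw,if=ide,cache=writeback   # RAW on IDE, with write caching
  -drive file=/var/lib/vz/images/101/vm-101-disk-1.vmdk,if=virtio,cache=none    # VMDK on VirtIO, NO CACHE

In Proxmox these options are normally set through the VM configuration rather than typed on the command line, but the same format / bus / cache triplet is what changes between the test cases.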

The tests were performed on guest machines running Windows 2008 R2 SP1 x64. Each test configuration was performed on a clean, unattended setup (standard configuration) of the operating system. Each guest had 1256 MB of RAM allocated. The hardware used was a low-end Dell Precision with a 2.0 GHz dual-core CPU and a 7200 RPM 500 GB Seagate Barracuda drive. We intend to repeat these tests on a newer Intel server-class machine when a spare becomes available. We would also like to run these tests concurrently; however, that requires more resources.

Optional link to the test results - http://i51.tinypic.com/158bcl4.gif

ProxmoxVEDisks.jpg
 
How did you make sure that your images are at the same disk location? Disk speed is highly dependent on sector location.
 
The best way we could ensure this was to place each image / test on a dedicated 40 GB /VM test partition. Even taking that into consideration, it will not be exact bit for bit, but it is close enough for our test. This was done for our own benefit, and we are simply sharing the results of our independent findings.
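For what it's worth, the layout is easy to double-check from the host. /VM is the test partition mentioned above; the device name below is only an example:

  fdisk -l /dev/sda   # shows the start/end sectors of each partition
  df -h /VM           # confirms the size and mount point of the test partition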
 
Which disk type is faster: QCOW2, RAW, or VMDK? IDE or VirtIO? Cache or no cache?
... The hardware used was a low-end Dell Precision with a 2.0 GHz dual-core CPU and a 7200 RPM 500 GB Seagate Barracuda drive.
Hi,
sorry to say this, but why waste your time on such a benchmark? In your setup the bottleneck is the SATA drive. All of the good values depend on caching and are not very useful (IMHO).
For good I/O power in a virtualization setup you need good I/O performance, and that can't be reached with a single SATA drive...
The fastest way should be LVM storage (raw, directly to the disk LV), with no filesystem layer in between.
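As a rough sketch of the idea (volume group name pve is the Proxmox default; the VM ID and size are only placeholders):

  lvcreate -L 32G -n vm-101-disk-1 pve   # carve a logical volume out of the volume group
  # then hand the raw LV to the guest, e.g.
  #   -drive file=/dev/pve/vm-101-disk-1,if=virtio,cache=none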

Udo
 
True; however, we have two types of client: the SMB market and larger firms. In a lot of instances, budget is a factor, so with a limited budget we still need to find the best value (including configuration) for the client. As another example, we may use a pfSense firewall solution for the SMB market versus a Juniper/Check Point appliance for another client. We try to find the best solution based on budget and product feature requirements.

If we had our choice, it would be enterprise-grade solid-state PCI-Express drives, but not all businesses can afford that. We purposely tested the lowest-end, most limited machine we could in order to see the "base" we could work with at that level. Thus, for our purposes, it was not wasted time.
 
...
If we had our choice, it would be enterprise-grade solid-state PCI-Express drives, but not all businesses can afford that...
Hi Charlie,
perhaps you will have the chance to test an OCZ Vertex in your setup (120 GB is not very large, of course). It's not enterprise grade, but I have had good performance experiences with such a disk.
So far I have only tested the Vertex 2, but next week I can do some tests with the Vertex 3 model.

You should also try LVM storage on the single disk as a test. In a single-disk setup you can only get there with some manual work - the standard installer uses the full disk for the LVM volume group pve.
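The manual part is basically making sure the volume group has free extents before you can use it for VM disks. A rough sketch of what to check:

  vgs pve   # free space in the volume group
  lvs pve   # the default install creates root / swap / data LVs on the same disk

If vgs shows no free space, you first have to shrink the large data LV - that is exactly the handwork mentioned above - and only then can the volume group be added as LVM storage for raw VM disks.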

Udo
 
Which disk type is faster: QCOW2, RAW, or VMDK? IDE or VirtIO? Cache or no cache?

Here are the KVM disk types utilized by Proxmox VE - http://www.imagehousing.com/image/813009. In the right column for each disk type you will see a NO CACHE option, which means the flag cache=none was used for that particular configuration. To our surprise, the VMDK format with VirtIO was the overall "fastest" setup in this test.


Confirmed VMDK performed best.
 
Hi Charlie,
Unfortunately I can't get such high values even on better hardware (i5-2400 @ 3.1 GHz, SATA 6G, WD5002AALX).
-----------------------------------------------------------------------
CrystalDiskMark 3.0 B5 x64 (C) 2007-2010 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/sec [SATA/300 = 300,000,000 bytes/sec]

Sequential Read : 105.236 MB/s
Sequential Write : 12.530 MB/s
Random Read 512KB : 44.229 MB/s
Random Write 512KB : 12.552 MB/s
Random Read 4KB (QD=1) : 0.629 MB/s [ 153.6 IOPS]
Random Write 4KB (QD=1) : 0.455 MB/s [ 111.2 IOPS]
Random Read 4KB (QD=32) : 0.719 MB/s [ 175.6 IOPS]
Random Write 4KB (QD=32) : 0.503 MB/s [ 122.7 IOPS]

Test : 1000 MB [C: 32.7% (10.4/31.9 GB)] (x5)
Date : 2011/07/14 16:42:49
OS : Windows Server 2008 R2 Server Standard Edition (full installation) [6.1 Build 7600] (x64)
cache=none, ide

-----------------------------------------------------------------------
Sequential Read : 109.730 MB/s
Sequential Write : 7.031 MB/s
Random Read 512KB : 39.069 MB/s
Random Write 512KB : 7.048 MB/s
Random Read 4KB (QD=1) : 0.643 MB/s [ 157.0 IOPS]
Random Write 4KB (QD=1) : 0.473 MB/s [ 115.4 IOPS]
Random Read 4KB (QD=32) : 1.758 MB/s [ 429.2 IOPS]
Random Write 4KB (QD=32) : 0.466 MB/s [ 113.7 IOPS]

Test : 1000 MB [D: 0.8% (69.0/8189.0 MB)] (x5)
Date : 2011/07/14 17:21:49
OS : Windows Server 2008 R2 Server Standard Edition (full installation) [6.1 Build 7600] (x64)
cache=none, virtio

-----------------------------------------------------------------------
Sequential Read : 563.751 MB/s
Sequential Write : 100.757 MB/s
Random Read 512KB : 571.188 MB/s
Random Write 512KB : 85.092 MB/s
Random Read 4KB (QD=1) : 23.016 MB/s [ 5619.2 IOPS]
Random Write 4KB (QD=1) : 1.067 MB/s [ 260.5 IOPS]
Random Read 4KB (QD=32) : 29.012 MB/s [ 7083.1 IOPS]
Random Write 4KB (QD=32) : 1.073 MB/s [ 262.0 IOPS]

Test : 1000 MB [C: 32.6% (10.4/31.9 GB)] (x5)
Date : 2011/07/14 18:04:47
OS : Windows Server 2008 R2 Server Standard Edition (full installation) [6.1 Build 7600] (x64)
cache=writeback, ide


Every time I try to test the speed of the VirtIO disk with 'cache=writeback', the virtual machine hangs.
DiskPerf-Proxmox.PNG

And what about CPU load? When I test the speed of the VirtIO HDD, I see a very high CPU load, about 30-60%.
 

After further testing, we need to state that the disk I/O test results above are ONLY for Proxmox 1.8 using kernel 2.6.32-4-pve.

Results can change significantly with a different kernel or version. For example, in the 1.8 release with kernel 2.6.32-4-pve we found VMDK / IDE had excellent results; however, that is not the case with the current Proxmox release kernel 2.6.35-1-pve. See below.

http://accends.files.wordpress.com/2011/08/vmdkperf-kerneldifference.gif

Whenever you upgrade and/or go into production, make sure you re-test for the change, as this was a significant one.
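A simple habit that helps when comparing runs like this is to record the exact kernel and package versions next to each result, for example:

  uname -r        # running kernel, e.g. 2.6.32-4-pve vs. 2.6.35-1-pve
  pveversion -v   # full list of Proxmox VE package versions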
 
I am in the midst of I/O testing various settings for Proxmox VM options, and even though this thread is a bit old, I thought I should contribute some of what I have done.

I am testing on a Dell PowerEdge 2950 with 2x quad-core Intel CPUs @ 2.67 GHz, 24 GB of 667 MHz RAM, and 6x 10k SAS drives in RAID-5 on a PERC 5 card, running Proxmox 1.9 with CentOS 5 VM guests.

I initially found Proxmox as an alternative to a strictly OpenVZ setup (which I really liked for its insanely low overhead), so I was excited to see a pre-built option which included fully virtualized guests as well as OpenVZ.

I became even more excited when I read about the automatic fail-over of VMs in a cluster with DRBD, but became disheartened when I realised it was only for KVM guests. Still, I was interested enough that I wanted to see how much performance is lost when using KVM instead of OpenVZ.

I ran 5 sets of bonnie++ 1.03e on a (roughly) similar location on the hard drive for the physical host and for each of the KVM I/O options (IDE, SCSI, and virtio), as well as OpenVZ.
To eliminate possible memory caching (which I think skews your data, Charlie), I turned off swap and created a 22.75 GB memory file system and filled it, leaving 1.25 GB of effective memory (I set each VM to have 1 GB of memory).
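The memory-pinning part looked roughly like the following; the mount point and exact size here are illustrative, chosen to leave about 1.25 GB free:

  swapoff -a
  mkdir -p /mnt/ramfill
  mount -t tmpfs -o size=22750m tmpfs /mnt/ramfill
  dd if=/dev/zero of=/mnt/ramfill/fill bs=1M   # dd stops when the tmpfs is full, pinning that memory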

Here is basically what I found:

bonnie++ 1.03e results (values are K/sec and %CP as reported by bonnie++; each "ratio vs. bare" row is that machine's value divided by the bare-machine value):

                        ------Sequential Output------        --Sequential Input--        --Random--
                        -Per Chr-     --Block--    -Rewrite-    -Per Chr-     --Block--    --Seeks--
Machine           Size  K/sec   %CP   K/sec    %CP  K/sec    %CP  K/sec   %CP   K/sec    %CP  /sec    %CP
bare machine      4G    65268.2 96.6  298549.4 48    110636.6 17.6  64705.6 88.2  268427.2 22.6  851.5   1.2
virtio            4G    63825.4 98    200290.2 64.4  51144.4  15.6  64732.8 91    207052.2 30.2  683.96  3.2
  ratio vs. bare        0.98    1.01  0.67     1.34  0.46     0.89  1.00    1.03  0.77     1.34  0.80    2.67
SCSI              4G    60039.8 97.8  197433.2 64.8  52513    12.4  64079.8 92.2  214331.6 23.6  628.52  4
  ratio vs. bare        0.92    1.01  0.66     1.35  0.47     0.70  0.99    1.05  0.80     1.04  0.74    3.33
IDE               4G    59577.2 97.2  210090.6 74    48560.8  10    60905   90.6  123826.2 8.4   270.54  3.2
  ratio vs. bare        0.91    1.01  0.70     1.54  0.44     0.57  0.94    1.03  0.46     0.37  0.32    2.67
OpenVZ            4G    71408.6 95.6  298347.2 48.2  92573.2  14.4  64182.4 81.4  292892.4 21.4  816.92  1.2
  ratio vs. bare        1.09    0.99  1.00     1.00  0.84     0.82  0.99    0.92  1.09     0.95  0.96    1.00

OpenVZ does great with I/O; at worst it was 84% as fast as bare metal (rewriting files), and its CPU usage is about the same as bare metal.
Virtio does a pretty good job at reading files, but it is only about half as fast at rewriting files as OpenVZ, and it uses a lot more CPU doing everything, especially random seeks.
The SCSI emulation is pretty comparable to virtio, but IDE is definitely much farther behind.

This isn't a huge dataset, and it's by no means done, but I just thought I'd toss this out in case it was of interest.
I will be writing a blog post on my website about this soon & can link to it when I'm done if there is interest.

Thank you!
 
I ran 5 sets of bonnie++ 1.03e on a (roughly) similar location on the hard drive for the physical host and for each of the KVM I/O options (IDE, SCSI, and virtio), as well as OpenVZ.
...

alchemycs,

what was your command line for bonnie++ in these benchmarks?
 
