KVM performance on different operating systems

I've run a quick test with Sandra out of curiosity. My system is Proxmox 2.1, at the state when the 2.6.32-12-pve kernel was released:

running kernel: 2.6.32-12-pve
QEMU emulator version 1.0.50 (qemu-kvm-devel), Copyright (c) 2003-2008 Fabrice Bellard

My benchmark results are very comparable to the CentOS ones you posted. Memory bandwidth is generally higher; the CPU is a previous-generation Xeon with a higher clock but weaker overall performance, so its figures are around 5 to 15% lower. I'd say that's normal and about the performance I'd expect. The host was near idle at the time of testing. The guest is Windows 7 x64 SP1. I can post my report if you want. Something's off with your setup that makes (only) Proxmox bork.
 
ok, so we are sure it's not disk related.

Maybe it's CPU or RAM.
- Can you try using "cpu64-rhel6" as the CPU model, to match CentOS? (A sketch of how to set it follows below.)
- For memory, it could be related to transparent hugepages, so try the Debian 3.2 kernel from backports.

add

"deb http://backports.debian.org/debian-backports squeeze-backports main"

to /etc/apt/sources.list

then
# apt-get update
# apt-get install -t squeeze-backports linux-image-3.2.0-0.bpo.3-amd64
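
For the CPU model suggestion, a minimal sketch (VMID 100 is just a placeholder, adjust to your VM; the change takes effect the next time the VM is started):

Code:
# set the guest CPU model for VM 100
qm set 100 -cpu cpu64-rhel6
# the same can be done by adding the line "cpu: cpu64-rhel6" to /etc/pve/qemu-server/100.conf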

YES!!!

With this kernel I got exactly the same result as in CentOS.
The small area inside the graph is Proxmox with kernel 2.6.32-14-pve.

Thanks! :D
 

Attachments

  • Proxmox2-CentOS.png (81 KB)
Interesting. While I'm glad to see you have found a solution in this particular case, it would be great to find out what exactly causes the slowdown. Using a non-Proxmox, unsupported kernel is a no-go in the long run. But I guess it's up to their developers now, given repeatable test cases. If it doesn't get solved, it could easily become a deterrent for potential future users.

Other than that, would it be possible to try your tests with the 2.6.32-12-pve kernel? If it runs fine for you, it might help with regression hunting.
 

ok, so I think it must be the disabling of transparent hugepages (this is the only big difference from the RHEL 6.3 kernel).

It was committed in OpenVZ release 042stab052.3,
so the last Proxmox kernel version with transparent hugepages is pve-kernel-2.6.32-10-pve.

Could you try it to be sure?

# apt-get install pve-kernel-2.6.32-10-pve
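
(After installing, reboot and pick that kernel in GRUB, then confirm it's the one actually running, for example:)

Code:
# should report 2.6.32-10-pve once the older kernel is booted
uname -r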
 

You are wrong.
I've tested the kernels 10 and 12 and I got exactly the same result as with kernel 14.
 

Damn,

can you provide

# cat /proc/meminfo | grep Huge
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled

for the Debian kernel and the Proxmox kernel while a VM is running?
 
Code:
# uname -a
Linux hq-sr-v1 3.2.0-0.bpo.3-amd64 #1 SMP Thu Aug 23 07:41:30 UTC 2012 x86_64 GNU/Linux
# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
cat: /sys/kernel/mm/redhat_transparent_hugepage/enabled: No such file or directory
# cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
Code:
# uname -a
Linux hq-sr-v1 2.6.32-14-pve #1 SMP Tue Aug 21 08:24:37 CEST 2012 x86_64 GNU/Linux
root@hq-sr-v1:~# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
root@hq-sr-v1:~# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
always [never]
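
(For what it's worth: on the Debian 3.2 kernel the THP policy can also be changed at runtime through the mainline sysfs path, so it could be A/B-tested on the same kernel, e.g.:)

Code:
# enable THP for all mappings instead of only madvise()'d ones
echo always > /sys/kernel/mm/transparent_hugepage/enabled
# verify; the active policy is shown in brackets
cat /sys/kernel/mm/transparent_hugepage/enabled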
 

Damn, so it's not related to hugepages...

I'm out of ideas for now...
Maybe you can try changing the I/O scheduler (deadline, for example); maybe the CFQ scheduler on Proxmox uses more CPU (there are some patches on it).
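
(A minimal sketch, assuming the VM images live on /dev/sda — adjust the device name; the setting does not survive a reboot:)

Code:
# the active scheduler is shown in brackets
cat /sys/block/sda/queue/scheduler
# switch this device to deadline
echo deadline > /sys/block/sda/queue/scheduler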
 
Looking at the first post, you use Intel barebones. Could it be that you aren't running the latest firmware? Maybe that makes a difference. (I had odd behaviour in one case that went away after upgrading.) Have you tried with HT off/on? Might not be related, but who knows.
 

First of all, I've updated all the firmware.
What is 'HT'?
 

I'm also trying that 3.2 backports kernel now, since I use KVM only, and I also noticed a little performance degradation since I migrated from 1.9 to 2.1. We will see...
 
I can't really reproduce the behavior (PVE is always fast here). Can anyone reproduce the slow behavior (besides docent)?


Test with Iometer (Windows 2008 R2 Standard, no updates / 2 GB RAM / 1 socket, 2 cores, cpu-rhel6 / virtio disk, raw, 32 GB):

Maximum disk size: 8388608 sectors
# of outstanding I/Os: 64 per target
Run time: 20 s

--------------------------------------------------------------------------------------
RANDOM TEST: Access spec.: 16 KB, Random 100%, Read 60%, Write 40%
--------------------------------------------------------------------------------------

a) 2.6.32-14-pve ~= linux-image-3.2.0-0.bpo.3-amd64, cache: none
Total I/O per second: 1164
Read MB/s: 11.89
Average I/O resp.: 54ms
Maximum I/O resp: 377ms
CPU % total: 6
Write MB/s: 7.7

b1) linux-image-3.2.0-0.bpo.3-amd64, cache: write-through
Total I/O per second: 7216
Read MB/s: 67
Average I/O resp.: 9ms
Maximum I/O resp: 73ms
CPU % total: 20
Write MB/s: 45

b2) 2.6.32-14-pve, cache: write-through
Total I/O per second: 395
Read MB/s: 3.7
Average I/O resp.: 161ms
Maximum I/O resp: 339ms
CPU % total: 2
Write MB/s: 2.5

--------------------------------------------------------------------------------------
SEQ. TEST: Access spec.: 4 KB, 100% sequential, Read 100%, Write 0%
--------------------------------------------------------------------------------------

a) 2.6.32-14-pve ~= linux-image-3.2.0-0.bpo.3-amd64, cache: none
Total I/O per second: 13k
Read MB/s: 51
Average I/O resp.: 4.8ms
Maximum I/O resp: 192ms
CPU % total: 29

b1) linux-image-3.2.0-0.bpo.3-amd64, cache: write-through
Total I/O per second: 38k
Read MB/s: 155
Average I/O resp.: 1.6ms
Maximum I/O resp: 108ms
CPU % total: 33

b2) 2.6.32-14-pve, cache: write-through
Total I/O per second: 29k
Read MB/s: 117
Average I/O resp.: 2.1ms
Maximum I/O resp: 24ms
CPU % total: 51

Write-through seems to be better with kernel linux-image-3.2.0-0.bpo.3-amd64.

Edit: I ran the tests again today and I can't reproduce the above results (b1, random test). I guess there was some glitch in my testing procedure.
I also ran SiSoftware Sandra as docent did, and both kernels perform the same.
 
I have run tests with Linux guests and the pve kernel. My tests also showed that write-through gives a considerable performance boost over the recommended cache policy, none.
This is normal; writethrough uses the host cache for reads.
What we need to compare is two different kernels with the same cache parameter.
 
Yes, but the official recommendation for Proxmox is still to use no cache, and that is clearly only true if the images live on storage backed by hardware RAID.
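
(A minimal sketch of switching the cache mode on an existing virtio disk — VMID 100 and the volume name are placeholders for a raw image on the "local" directory storage; adjust to your setup. The VM has to be stopped and started again for the change to apply.)

Code:
# use the host page cache for reads
qm set 100 -virtio0 local:100/vm-100-disk-1.raw,cache=writethrough
# back to the recommended default
qm set 100 -virtio0 local:100/vm-100-disk-1.raw,cache=none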
 
