Issue with Kernel 2.6.35-1-pve

Udo,

It is the exact same host and guest VM I am using. The only change is a reboot to switch between 2.6.32 and 2.6.35 before running the test again. Same results every time. It is a very simple test, and the screenshots say it all.
 
Hi Charlie,
I disagree with you about this performance test (I would not say that screenshots say it all).
I have tried CDM (with 2.6.35) and the behavior of this program is strange: if I just start the program (without running a test), it takes 96% CPU power...
If I compare the values with h2benchw (which I use on Windows), CDM shows different values.
Again - I can only trust good performance figures if I also see them on the pve host. If I measure more performance than the underlying IO system supports, I am measuring caching. That can be nice, but it has nothing to do with measuring disk performance!
And of course IO costs CPU power with KVM - but that is no different from ESX.
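
For example, you can reproduce this caching effect with a simple dd on the host (a rough sketch; /tmp/testfile is just a placeholder for any large file on the datastore):

Code:
# first run may hit the disk; the repeat is served from the page cache
dd if=/tmp/testfile of=/dev/null bs=1M
dd if=/tmp/testfile of=/dev/null bs=1M
# O_DIRECT bypasses the page cache and shows what the disk itself delivers
dd if=/tmp/testfile of=/dev/null bs=1M iflag=direct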

(Attachment: cdm.jpg)
I can't run the same test with 2.6.32 (the host is in production), but it shows that I don't have a performance problem with 2.6.35.
In any case, I don't trust CDM very much.

Udo
 
Charlie, do you get the same results/findings with hdtune?
 
Again, as Udo posted, you are benchmarking caches. In the first screenshot of your hdtune results you show a read performance of 270 MB/s.

But your hardware, a single SATA disk, can only do around 100 MB/s. Therefore your benchmark setup is measuring cache and not real disk performance.

Inside a KVM guest you lose some performance; you can never get higher rates inside a guest than on the host unless there is caching somewhere - you understand what I mean?
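
You can sanity-check what the raw disk really delivers on the host with hdparm (a sketch; /dev/sda is assumed to be your SATA disk):

Code:
# -t: buffered disk reads (close to the real disk speed)
# -T: cache reads (shows how much faster the cache is by comparison)
hdparm -tT /dev/sda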

Here is one result from a small test box. My host is an HP Microserver (very slow CPU) with an Adaptec 6805 and 2 x 2TB WD RE4 SATA disks in RAID 0.

Code:
pveperf
CPU BOGOMIPS:      5155.78
REGEX/SECOND:      204559
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    254.34 MB/sec
AVERAGE SEEK TIME: 7.03 ms
FSYNCS/SECOND:     885.92

And as a guest, win2008r2-sp1 with a virtio disk - see attachment hdtune-hp-microserver.png.

On the host I got around 254 MB/s read performance, on the guest 170 MB/s, which is expected - lower in the guest.

All of this was done with 2.6.32 (2.6.35 does not support this RAID controller yet).
 
1 - Yes, we are benchmarking caches; however, both kernels were benchmarked with caching (caching is enabled by default for Proxmox / KVM). So why is there still a huge disparity in performance, especially WRITE? We could not even complete the CrystalDiskMark test on 2.6.35-1-pve, but after rebooting into 2.6.32-4-pve the result was completely different and efficient.

2 - READ caching is optimized / enabled by default in Proxmox 1.8 (KVM), but even with caching you cannot change the WRITE performance that much. There is still a huge discrepancy in WRITE between the kernels.

3 - We are using a VMDK / IDE setup, as we found it faster. Our previous test results show varying speed differences depending on the disk type chosen, so perhaps we get different results based on the disk type used. Will test.

4 - Our system uses SATA II, not plain SATA, which is slightly faster, with a sustained rate of ~140 MB/s.

5 - We have tested all other major hypervisors and posted their results; based on those, even with the vendors' default cache settings enabled, a version change does not produce a large discrepancy in WRITE.

6 - The point is, we find there is still a negative side effect to using 2.6.35-1-pve, at least with KVM. On the same exact system (HOST and GUEST), the only change is the kernel version, and performance drops with the later kernel.


We briefly tested CACHE=NONE and the results were still very similar; again, this is VMDK / IDE. Any other tests you recommend (currently building a QCOW2)?
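
For what it's worth, this is roughly how the cache mode is pinned per disk in the VM config (a sketch; VMID 101 and the volume names are made up - check your own /etc/qemu-server/<vmid>.conf):

Code:
# /etc/qemu-server/101.conf (hypothetical VMID)
ide0: local:101/vm-101-disk-1.vmdk,cache=none
# the same option applies to other bus types, e.g.:
# virtio0: local:101/vm-101-disk-2.qcow2,cache=none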
 
I am using 2.6.35, so this interested me.
I use LVM on top of DRBD for storage with cache=none and virtio drivers.
We have Areca 1880i RAID controllers with 4GB cache and battery, and 12 disks in RAID 6.
DRBD replicates over two bonded 1Gb Ethernet ports, which is a bottleneck on writes (two bonded gigabit links top out at roughly 250 MB/s in theory, less after DRBD overhead).

I benchmarked one virtual machine and got 826.7 MB/s read and 142.5 MB/s write on the sequential test.
I want to live-migrate the VM to the other host, change the kernel to 2.6.32, live-migrate back, and re-run the benchmark.
Does anyone know if it is OK to live-migrate KVM guests between 2.6.35 and 2.6.32?
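
If the migration between kernels does work, the round trip itself would look something like this (a sketch; the node names and VMID are made up):

Code:
# move VM 101 to the host running the other kernel, while it stays online
qm migrate 101 node2 --online
# reboot this host into 2.6.32, then bring the VM back:
qm migrate 101 node1 --online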
 
OK, so the disk type is VirtIO. What image format are you using, e100?
 
Thanks e100. Somehow missed that detail of your setup. For our test, we are only using the default local LVM (/dev/pve/data) setup with an assortment of disk types and image formats.

We've narrowed the performance issue down to a combination of image format and kernel version. VMDK with 2.6.32-4-pve has a negative outcome, CACHE or NO CACHE.

http://accends.files.wordpress.com/2011/08/proxmoxkernelanddisktypes.gif

This will be our only and final test - VMDK and QCOW2 - for the suggested kernel issue. For now, we will stay with the original Proxmox 1.8 release kernel, based on our own study of performance between disk types - http://accends.files.wordpress.com/2011/08/proxmoxvedisks1.gif
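
For anyone repeating the format comparison, qemu-img can convert one test image between the formats so the same data is benchmarked each time (a sketch; the file names are made up):

Code:
# convert the VMDK test disk to qcow2 and raw for side-by-side runs
qemu-img convert -f vmdk -O qcow2 vm-101-disk-1.vmdk vm-101-disk-1.qcow2
qemu-img convert -f vmdk -O raw   vm-101-disk-1.vmdk vm-101-disk-1.raw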

It goes without saying: test, test, test before you deploy into production.
 
Here is one result from a small test box. My host is an HP Microserver (very slow CPU) with an Adaptec 6805 and 2 x 2TB WD RE4 SATA disks in RAID 0.

All of this was done with 2.6.32 (2.6.35 does not support this RAID controller yet).

A 2.6.35 kernel and driver for the 6805 -> http://forum.proxmox.com/threads/6816-kernel-2.6.35-pve-and-6805

I have a system in production today on 2.6.35-2-pve with an Adaptec 6805, RAID 0 of 8 x 1TB SAS HDDs, and can test WinXP guest performance with the virtio HDD driver if needed...
 
Very, very bad test results with kernel-2.6.35-2-pve :(

IDE (qcow2, raw, vmdk)
 
VirtIO (qcow2, raw, vmdk)



Results:
kernel-2.6.32-x-pve - bug with AMD CPU usage
kernel-2.6.35-x-pve - poor guest HDD performance
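
If anyone wants to reproduce this matrix, test images in each format can be created with qemu-img (a sketch; the paths and size are made up):

Code:
# one test disk per image format, attached in turn as IDE and as VirtIO
qemu-img create -f qcow2 /var/lib/vz/images/101/vm-101-disk-1.qcow2 32G
qemu-img create -f raw   /var/lib/vz/images/101/vm-101-disk-1.raw   32G
qemu-img create -f vmdk  /var/lib/vz/images/101/vm-101-disk-1.vmdk  32G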

Which kernel should I use for an AMD server to get maximum performance?
 
