PVE (host) vs Ubuntu performance

shutyaev
Hi all!

I've run into something I can't explain. I ran some performance tests on PVE itself (the host, latest version as of yesterday), freshly installed with no VMs. Then I compared the results with ones obtained from Ubuntu 12.04.2 installed on the same hardware. There are some significant differences. Can anyone explain them? The utility used to run these tests is phoronix-test-suite.
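For reference, the runs can be reproduced roughly like this (a sketch; phoronix-test-suite is packaged in the standard Debian/Ubuntu repositories, and exact test-profile versions may differ):

    # install the Phoronix Test Suite
    apt-get install phoronix-test-suite
    # run the same profiles on each OS installed on the same hardware
    phoronix-test-suite benchmark pts/hdparm-read pts/iozone pts/java-scimark2 pts/stream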

Test results

pts/hdparm-read (disk read speed)
  • pve: 373.63 MB/s, ubuntu: 368.64 MB/s

pts/iozone (disk write speed), Record Size: 4Kb, File Size: 2Gb, Write Performance
  • pve: 181.03 MB/s, ubuntu: 310.74 MB/s

pts/java-scimark2 (cpu speed)
  • Composite: pve 528.08 Mflops, ubuntu 1227.99 Mflops
  • Fast Fourier Transform: pve 327.53 Mflops, ubuntu 772.73 Mflops
  • Jacobi Successive Over-Relaxation: pve 502.43 Mflops, ubuntu 1173.10 Mflops
  • Monte Carlo: pve 258.98 Mflops, ubuntu 602.60 Mflops
  • Sparse Matrix Multiply: pve 550.20 Mflops, ubuntu 1284.71 Mflops
  • Dense LU Matrix Factorization: pve 1001.23 Mflops, ubuntu 2306.83 Mflops

pts/stream (ram speed)
  • Copy: pve 65493.22 MB/s, ubuntu 84897.37 MB/s
  • Scale: pve 55851.08 MB/s, ubuntu 69430.74 MB/s
  • Add: pve 46489.54 MB/s, ubuntu 57197.37 MB/s
  • Triad: pve 44076.67 MB/s, ubuntu 54838.75 MB/s

Hardware configuration

  • Processor: 2 x Intel Xeon E5-2650 (Socket 2011, 2.0 GHz, 20MB cache, 8.0 GT/s, 8 cores)
  • RAM: 12 x 16GB DDR3-1600 (PC3-12800) Kingston ECC Registered (KVR16R11D4/16)
  • Platform: Intel R2312GL4GS (2U, Grizzly Pass)
  • Storage: RAID10 consisting of
      • Controller: LSI 9260-16i SGL (LSI00208)
      • Battery: LSI Logic LSIiBBU07 Battery Backup Unit for 8880EM2, 9260-xx and 9280-xx
      • Drives: 4 x 600GB SAS Hitachi Ultrastar 15K600 (HUS156060VLS600, 15000rpm, 64MB cache)
  • Power supply: Intel FXX750PCRPS 750W Common Redundant Power Supply
 

For pts/iozone, maybe it's because of the deadline I/O scheduler?

For the pts/java-scimark2 CPU test, this is strange... Is the Java version the same?
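Both are quick to check on each host (a sketch; replace sda with whatever device the RAID volume shows up as):

    # the scheduler shown in brackets is the one currently in use
    cat /sys/block/sda/queue/scheduler
    # the Java runtime picked up by the scimark2 profile
    java -version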
 
UPDATE - more operating systems tested

It looks like the problem is related to the installed hardware and the Proxmox kernel.

We have done performance tests on installations of Proxmox 2.3, Proxmox 3.0, Ubuntu 12.04 and Debian 7.1.

We used the pts/java-scimark2 "Computational Test: Composite".

With Proxmox 2.3 and 3.0 the performance result is about 520 Mflops.
With Ubuntu 12.04 and Debian 7.1 the performance result is about 1200 Mflops.

We also tested other servers based on Intel i7, and there is no difference in performance between Proxmox, Ubuntu, and Debian. The results are practically identical, which makes us think that it is some compatibility issue.

So, to answer the previous post: the Java version is the same.


Could it be some problem with how Proxmox works with the two logical cores per physical core of the Intel Xeon E5-2650 CPU?

What else could it be? What is the difference in the Linux kernel between Proxmox 3.0 and Debian 7.1?
 
I think the biggest difference can be blamed on the kernel.

Proxmox 2.3 and 3.0 use an old RHEL 6.3/6.4 kernel (2.6.32) with backported drivers.
Ubuntu 12.04 and Debian 7.1 both use a 3.2 kernel. Even though the Proxmox kernel contains backported drivers, there is a huge difference in support for recent chipsets; Sandy Bridge in particular is not performing at its best on 2.6.32.

Apart from that, virtualization and Hyper-Threading used to be a bad combination, something that has improved a lot with Sandy Bridge and that generation's NUMA architecture. But I guess this improvement is not fully available in 2.6.32. I would therefore suggest that you run the tests once more, but this time with Hyper-Threading disabled.
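A quick way to confirm from the running OS whether Hyper-Threading is actually in effect (a sketch using standard procfs/lscpu output):

    # "Thread(s) per core" > 1 means HT is enabled and the siblings are online
    lscpu | egrep 'Thread|Core|Socket'
    # total logical CPUs the kernel sees
    grep -c ^processor /proc/cpuinfo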
 
Without knowing the exact configuration of the kvm command line involved and the kvm version, your numbers aren't really interpretable or repeatable. Try posting the command line used on both systems, so that the PVE devs can look at it and derive something more meaningful. As others mentioned, the kernel scheduler is also important (among other things).
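If VMs are involved, the exact command line is easy to capture (a sketch; <vmid> is just a placeholder for the VM's numeric ID on the PVE side):

    # on Proxmox VE: print the full kvm command line for a given VM
    qm showcmd <vmid>
    # on any host: show the kvm/qemu processes actually running
    ps -ef | grep '[k]vm'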
 
I would therefore suggest that you run the tests once more, but this time with Hyper-Threading disabled.

I performed the test in both modes, with HT disabled and enabled. The results were the same: Proxmox uses only one of the logical cores in each physical core, and the test results were the same too.
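Something like the following shows which logical CPUs the kernel has actually brought online (a sketch using standard sysfs/procfs paths):

    # ranges of logical CPUs that are online, e.g. 0-31 for 2 x 8 cores with HT
    cat /sys/devices/system/cpu/online
    # which physical core and socket each logical CPU belongs to
    grep -E 'processor|core id|physical id' /proc/cpuinfo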

Without knowing the exact configuration of the kvm command line involved and the kvm version

We have not started any VMs; all tests were performed on the host. In the case of Ubuntu and Debian, it is a clean installation in place of Proxmox.
 
Just to verify the kernel, could you then try installing CentOS 6.4 and run the tests again?
CentOS 6.4 uses the same kernel version as Proxmox and Red Hat 6.4.
 
We just tested CentOS 6.4, and it seems that CentOS uses both logical cores normally; the speed is about 1200 Mflops.

OS: CentOS 6.4, kernel 2.6.32-358.el6.x86_64

Looking at Proxmox 3.0, we see it uses an older kernel, pve-kernel-2.6.32-22-pve. Is it possible that the issue was fixed in the newer kernel that CentOS uses?

Is it possible to upgrade Proxmox to kernel 2.6.32-358.el6.x86_64 and test?
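One way to watch whether the benchmark really spreads across all logical CPUs while it runs (a sketch; mpstat comes from the sysstat package):

    # per-CPU utilization refreshed every second, run in a second terminal
    # while the scimark2 benchmark is in progress
    mpstat -P ALL 1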
 
"pve-kernel-2.6.32-22-pve" based on OpenVZ's kernel "vzkernel-2.6.32-042stab078.28.src.rpm", which is based on RHEL kernel 2.6.32-358.6.2.el6.
PVE kernel =~ RHEL kernel + OpenVZ patch + update drivers

For those who use KVM only, is it possible ProxmoxVE provide choice between RHEL kernel and the PVE kernel ?
 
Did you run the test with the latest stable Proxmox VE kernel (2.6.32-22-pve_2.6.32-107)?
 
Did you run the test with the latest stable Proxmox VE kernel (2.6.32-22-pve_2.6.32-107)?

Yes, I ran the tests with the latest kernel:

running kernel: 2.6.32-22-pve
proxmox-ve-2.6.32: 3.0-107
pve-kernel-2.6.32-22-pve: 2.6.32-107
 
Any ideas?

By the way, I tried using 2.6.32-19-pve and 2.6.32-23-pve on a clean Debian install and the results were the same.
 
I can report that we also have the same performance discrepancies between Proxmox 3.0 and Ubuntu 12.04
(we originally benchmarked the same VMs, and the memory and CPU scores were lower with the Proxmox kernel).

I assumed it was due to the 2.6.x kernel.

Our hardware is similar: a Dell R720 with 2x Xeon E5-2670 and 12x 16GB DDR3-1600 ECC RAM.
 
Since CentOS 6.4 performs as expected, it is fair to assume that the performance degradation must be caused by the OpenVZ patches applied to the Red Hat 6.4 kernel source tree.
 
If I use KVM only, how can I disable OpenVZ in the PVE kernel?
Is that possible?
 
So can I do something to increase Proxmox performance on the Xeon E5-2600 series?
It is one of the top CPUs, and it is a pity that the Proxmox kernel cannot work with its logical cores properly...
 
You could install the default Debian Wheezy 3.2 kernel, or compile your own up to 3.10 (3.9 is supposed to introduce some good SSD improvements). Of course, in the process you lose all OpenVZ capability and all support from the kind devs on the Proxmox team :D
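Something like this would get the stock Wheezy 3.2 kernel booted for a test run (a sketch; linux-image-amd64 is the standard Debian metapackage, and the PVE kernel stays installed so you can switch back in GRUB):

    # install the default Debian Wheezy kernel alongside the PVE kernel
    apt-get install linux-image-amd64
    # reboot and select the 3.2 kernel from the GRUB menu
    reboot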
 
