Phoronix Benchmark Results (build-linux-kernel)

martin

Proxmox Staff Member
Here are some benchmark results with the latest Proxmox VE 1.9.

Hardware: Intel Modular Server, 2 x Intel Xeon E5540 @ 2.53GHz (HT disabled in BIOS)
OS: Debian Squeeze in the container and the KVM guest (Lenny on the host)
Benchmark: Phoronix-test-suite (build-linux-kernel) V 3.4.0

Proxmox VE Kernel: pve-kernel-2.6.32-6-pve: 2.6.32-47

> phoronix-test-suite benchmark build-linux-kernel
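For those unfamiliar with the profile: it downloads a Linux kernel source tree and simply times a full parallel build. A rough manual equivalent is sketched below (the kernel version, URL and -j value are only placeholders; the exact version and config used by PTS 3.4.0 may differ):

[CODE]
# Rough manual equivalent of pts/build-linux-kernel (illustrative only)
wget https://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.39.tar.gz   # placeholder version
tar xzf linux-2.6.39.tar.gz
cd linux-2.6.39
make defconfig
time make -j8    # -j = number of CPUs visible in the host/guest/container
[/CODE]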

Results (lower is better):

[TABLE="class: grid"]
[TR]
[TD]Proxmox VE Host (2 Sockets, 4 Cores = 8 CPUs)[/TD]
[TD]244 Seconds[/TD]
[/TR]
[TR]
[TD]KVM Guest (2 Sockets, 4 Cores = 8 CPUs)[/TD]
[TD]270 Seconds[/TD]
[/TR]
[TR]
[TD]OpenVZ Container (CPU=8)[/TD]
[TD]277 Seconds[/TD]
[/TR]
[TR]
[TD][/TD]
[TD][/TD]
[/TR]
[TR]
[TD]KVM Guest (1 Socket, 1 Core = 1 CPU)[/TD]
[TD]1901 Seconds[/TD]
[/TR]
[TR]
[TD]OpenVZ Container (CPU=1)[/TD]
[TD]2024 Seconds[/TD]
[/TR]
[/TABLE]


Conclusion:

Both virtualization technologies perform well and quite similarly on this CPU-intensive task. CPU assignment also works as expected on both OpenVZ and KVM. Yes, KVM is a bit faster here, but the difference is not really significant.
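For reference, a rough sketch of how CPU counts like those in the table above are assigned on PVE 1.x is below (VMID 101 and CTID 102 are just placeholders; double-check the option names against the qm and vzctl versions shipped with your install):

[CODE]
# KVM guest: 2 sockets x 4 cores = 8 vCPUs (VMID 101 is a placeholder)
qm set 101 -sockets 2 -cores 4

# OpenVZ container: allow the container to use 8 CPUs (CTID 102 is a placeholder)
vzctl set 102 --cpus 8 --save
[/CODE]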

Martin
 
Here are some results with the old kernel (2.6.32-4).

[TABLE="class: grid"]
[TR]
[TD]Proxmox VE Host (2 Sockets, 4 Cores = 8 CPUs)[/TD]
[TD] 251 Seconds[/TD]
[/TR]
[TR]
[TD]OpenVZ Container (using all CPUs)[/TD]
[TD] 287 Seconds[/TD]
[/TR]
[/TABLE]

2.6.32-4 is slightly slower than 2.6.32-6.

Martin
 
Did some tests with HT enabled; the results are even better (still with 2.6.32-6-47 on the IMS).

[TABLE="class: grid"]
[TR]
[TD]Proxmox VE Host (2 Sockets, 4 Cores, HT = 16 CPUs)[/TD]
[TD]193 Seconds[/TD]
[/TR]
[TR]
[TD]KVM Guest (2 Sockets, 8 Cores = 16 CPUs)[/TD]
[TD]229 Seconds[/TD]
[/TR]
[TR]
[TD]OpenVZ Container (CPU=16) [/TD]
[TD]224 Seconds[/TD]
[/TR]
[/TABLE]


Martin
 
I just ran this test with a third-party virtualization solution and also got acceptable results: 300 seconds.
 
Thanks for these interesting numbers; here are some from our test pilot:

Node 1:
host: 2 x L5520 @ 2.27GHz (2 sockets, 4 cores, 16 threads (HT)), X8DTN, Adaptec 5805 (SAS 15k/RAID-1)
kernel: 2.6.32-6-47

Proxmox VE 1.9, Host: 219 seconds

Node 2:
- same hardware -

kernel: 2.6.32-5-36

Proxmox VE 1.9, Host: 225 seconds
 
... and also with another one (based on Xen): 304 seconds.
 
I do not fully understand what this test measures, but does this mean that with kernel 2.6.32-47 KVM virtualization is now, in general, faster than OpenVZ virtualization?

It was my understanding that OpenVZ was much faster because it is OS-level (container-based) virtualization. And based on previous tests, OpenVZ would indeed perform faster than others like Xen, KVM, etc. (for those who only care about Linux virtualization).

Can someone please clarify this for me?

Thanks.
 
What is unclear? This very CPU-intensive, very simple test shows that KVM and OpenVZ are both very fast. Nothing more and nothing less.

And yes, KVM is getting better and better, but OpenVZ is still fine and in some situations quite useful.
 
Maybe this test does not show the full picture.
I just don't see, logically, how KVM would be faster than OpenVZ when you take overall server performance into account.
 
OpenVZ is still significantly faster than KVM in disk and network IO, since IO is not virtualized in containers.
Most internet-server workloads are bound more by IO than by CPU.
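If you want to see that IO difference yourself, a very rough check is to time the same write once in a container and once in a KVM guest (path and size are arbitrary; this is only an indicator, not a proper benchmark):

[CODE]
# Simple sequential write, flushed to disk before dd reports the throughput
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm /tmp/ddtest
[/CODE]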

It's also important to know that OpenVZ is much lighter on memory resources: even with many containers there is only one kernel/driver/system memory space, and only one kernel running at a time, while with KVM there can be many different kernels running at the same time (and the system has to switch between them).

So it is possible that a single KVM virtual machine has performance similar to a single OpenVZ container, but when you run many on the same server, performance under OpenVZ will be much higher. The benchmarks linked by Tom reinforce this picture.
 
Yes, I agree. Most people run just single benchmarks and do not test with real-life workloads. OpenVZ is still not as widely used as it should be, mainly because it's not in the mainline kernel. But for running multiple web servers it's just unbeatable.
 
