Is my UnixBench result on Proxmox good or not?

sicute

New Member
Oct 25, 2010
Hi,
I tried using UnixBench to test my Proxmox host.
My machine is an IBM System x3250 M2 with 1 GB RAM, a 1 TB hard disk, dual gigabit LAN, and 2 x Intel(R) Core(TM)2 Duo CPU E7400 @ 2.80GHz.
Here is my pveversion:
cendol:~# pveversion -v
pve-manager: 1.6-5 (pve-manager/1.6/5261)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.6-25
pve-kernel-2.6.32-4-pve: 2.6.32-25
qemu-server: 1.1-22
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-2
ksm-control-daemon: 1.0-4
cendol:~#
Here is my UnixBench result, run in one guest on Proxmox:

========================================================================
BYTE UNIX Benchmarks (Version 5.1.2)

System: vp01: GNU/Linux
OS: GNU/Linux -- 2.6.35-22-generic-pae -- #35-Ubuntu SMP Sat Oct 16 22:16:51 UTC 2010
Machine: i686 (unknown)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0: QEMU Virtual CPU version 0.12.5 (5601.2 bogomips)
x86-64, MMX, Physical Address Ext, SYSCALL/SYSRET
15:52:13 up 38 min, 1 user, load average: 0.13, 0.09, 0.03; runlevel 2

------------------------------------------------------------------------
Benchmark Run: Tue Nov 23 2010 15:52:13 - 16:20:20
1 CPU in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       14542621.4 lps    (10.0 s, 7 samples)
Double-Precision Whetstone                     2737.9 MWIPS  (10.1 s, 7 samples)
Execl Throughput                               1154.8 lps    (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        532966.7 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          172564.0 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks       1087802.0 KBps  (30.0 s, 2 samples)
Pipe Throughput                              962475.2 lps    (10.0 s, 7 samples)
Pipe-based Context Switching                 113284.7 lps    (10.0 s, 7 samples)
Process Creation                               2388.7 lps    (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   2260.3 lpm    (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                    290.8 lpm    (60.1 s, 2 samples)
System Call Overhead                         988280.7 lps    (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   14542621.4   1246.2
Double-Precision Whetstone                       55.0       2737.9    497.8
Execl Throughput                                 43.0       1154.8    268.6
File Copy 1024 bufsize 2000 maxblocks          3960.0     532966.7   1345.9
File Copy 256 bufsize 500 maxblocks            1655.0     172564.0   1042.7
File Copy 4096 bufsize 8000 maxblocks          5800.0    1087802.0   1875.5
Pipe Throughput                               12440.0     962475.2    773.7
Pipe-based Context Switching                   4000.0     113284.7    283.2
Process Creation                                126.0       2388.7    189.6
Shell Scripts (1 concurrent)                     42.4       2260.3    533.1
Shell Scripts (8 concurrent)                      6.0        290.8    484.7
System Call Overhead                          15000.0     988280.7    658.9
                                                                   ========
System Benchmarks Index Score                                         618.0

root@vp01:~/unixbench-5.1.2#
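For what it's worth, the final score UnixBench prints is just the geometric mean of the twelve INDEX values above (each index being 10 × result / baseline). A quick sketch to reproduce it from this run's numbers:

Code:
```python
import math

# INDEX column from the UnixBench run above
indexes = [1246.2, 497.8, 268.6, 1345.9, 1042.7, 1875.5,
           773.7, 283.2, 189.6, 533.1, 484.7, 658.9]

# Final score = geometric mean of the per-test index values
score = math.exp(sum(math.log(i) for i in indexes) / len(indexes))
print(round(score, 1))  # ~618, matching the reported 618.0
```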
I also ran an I/O test:

root@vp01:~/unixbench-5.1.2# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 18.0992 s, 59.3 MB/s
root@vp01:~/unixbench-5.1.2#
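As a side note, dd's reported figure is easy to sanity-check by hand; dd uses decimal units, so "MB/s" means 10^6 bytes per second:

Code:
```python
# Sanity check of dd's reported throughput from the run above
bytes_copied = 1073741824   # 1 GiB written by dd
seconds = 18.0992           # elapsed time dd reported
throughput_mb_s = bytes_copied / seconds / 10**6
print(round(throughput_mb_s, 1))  # ~59.3 MB/s, as dd printed
```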
So, is Proxmox's performance on my IBM server good or not? :confused:
 
Also post the result of pveperf.
 

Hi,
about the I/O: with only one disk you can't expect good values!
If I run the same dd inside a KVM guest, I get these values (on an FC RAID):
Code:
# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 4.6237 s, 232 MB/s
So, you have room for optimization ;)

A good RAID controller with fast disks in RAID 10 is best.

I don't know UnixBench - is there a Debian package for it?

Udo
 
Wow, that is great. How do I get good I/O like that? I don't use RAID because I have only one disk, and this is just for testing.
Hi,
Like I wrote before: a good RAID controller (I use Areca), good disks (e.g. 4 x Hitachi SAS disks), and RAID 10. This isn't cheap, but self-built it's less expensive than a bought system with less performance.
And first look for good I/O on the host, then inside the VM. To optimize I/O inside a KVM guest you also need CPU power and the right driver (virtio).
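On the virtio point: on Proxmox VE 1.x a KVM guest's disk bus is set in the VM's config file under /etc/qemu-server/. A hypothetical fragment (VMID 101 and the disk/storage names are assumptions; adjust to your setup, and make sure the guest has virtio drivers before switching):

Code:
```
# /etc/qemu-server/101.conf (hypothetical VMID)
# before, disk attached via IDE:
#   ide0: local:101/vm-101-disk-1.raw
# after, same disk on the virtio bus:
virtio0: local:101/vm-101-disk-1.raw
```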
Like Tom wrote: use pveperf (e.g. pveperf /var/lib/vz) as a reference point.

Udo
 