Performance so-so

vkeven

Dec 22, 2009
ML350 G6, Xeon 5506, 4GB RAM, 146GB SAS 15K RAID1 on a RAID card (P410, 512MB BBU), write cache enabled; FSYNCs around 1450, and throughput as follows.

Native speed with bonnie++ 2G:512k gives me almost 600MB/s.

A Debian Lenny 32-bit KVM guest with an IDE drive gives me 117MB/s, and various tests with different sizes look similar: 1/6 of native performance :(
 
Please also post the details of your software setup (pveversion -v and the config file of the KVM guest, /etc/qemu-server/VMID.conf).

Also try with virtio disks.
 

Hi,
you should use at least double your RAM size as the test file with bonnie, i.e. 8G:512k.
Otherwise I think you are also testing RAM speed ;)
600MB/s is very high; how many disks do you have, and in which RAID level?
What is the output of
Code:
pveperf /var/lib/vz

Udo
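Udo's rule of thumb (test file at least 2x RAM, so the page cache can't hold the whole file) can be derived directly from /proc/meminfo. A minimal sketch; the /var/lib/vz path comes from this thread, the variable names are illustrative:

```shell
#!/bin/sh
# Sketch: derive a bonnie++ file size of at least 2x installed RAM,
# so the benchmark measures the disks rather than the page cache.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
size_mb=$(( mem_kb / 1024 * 2 ))     # double the RAM, in MiB
echo "bonnie++ -s ${size_mb}M:512k -q -d /var/lib/vz -u root"
```

bonnie++ also accepts the size in MiB directly (e.g. -s 8192M:512k), which avoids rounding to whole gigabytes.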
 

4G:512k gives me 175MB/s native and 25MB/s in KVM (still with the IDE drive).
 
Code:
pve-manager: 1.5-5 (pve-manager/1.5/4627)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
qemu-server: 1.1-11
pve-firmware: 1.0-3
libpve-storage-perl: 1.0-8
vncterm: 0.9-2
vzctl: 3.0.23-1pve8
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5
 
Code:
serveur:~# pveperf /var/lib/vz
CPU BOGOMIPS: 17067.36
REGEX/SECOND: 688970
HD SIZE: 92.72 GB (/dev/mapper/pve-data)
BUFFERED READS: 132.39 MB/sec
AVERAGE SEEK TIME: 4.68 ms
FSYNCS/SECOND: 2854.42
DNS EXT: 42.90 ms
DNS INT: 77.87 ms (kapta.ca)

Code:
name: Debian_Lenny
vlan0: e1000=42:06:F5:8A:A3:CE
bootdisk: ide0
ostype: l26
memory: 1024
onboot: 1
sockets: 1
boot: cad
freeze: 0
cpuunits: 1000
acpi: 1
kvm: 1
cores: 1
virtio1: lvm_data:vm-101-disk-1 ( /home/archive ) 250GB SATA 7.2K RAID1
ide0: local:101/vm-101-disk-2.raw ( / ) can't boot lenny on virtio disk
virtio0: local:101/vm-101-disk-1.raw ( /home ) 146GB SAS 15K RAID 1
ide1: local:iso/debian-504-i386-netinst.iso,media=cdrom

I benchmarked all 3 volumes and the results are simply bad.

bonnie++ -s 2G:512k -q -d /tmp -u root ( ide volume )
Version 1.03d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
servlinux2 2G:512k 39822 97 46923 9 74319 12 39977 97 354329 52 720.8 23

bonnie++ -s 2G:512k -q -d /home -u root ( raw file virtio )
Version 1.03d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
servlinux2 2G:512k 26531 71 25524 7 82715 20 32316 97 399141 53 1074 74

bonnie++ -s 2G:512k -q -d /home/archive -u root ( lvm direct SATA )
Version 1.03d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
servlinux2 2G:512k 34054 95 45525 9 57998 9 32899 98 374823 67 1034 74

So 132MB/s divided by 6 = 22MB/s.
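That "1/6 of native" figure can be written as a quick ratio check. The numbers are the ones quoted above (host buffered reads ~132 MB/s, guest block throughput ~22 MB/s); the script itself is just arithmetic:

```shell
#!/bin/sh
# Rough guest-vs-host ratio using the figures from the posts above:
# host ~132 MB/s, guest ~22 MB/s.
host=132
guest=22
pct=$(( guest * 100 / host ))                 # integer percentage
echo "guest reaches ${pct}% of native throughput"
```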
 
So is this just the way it is? I tried different VM settings with various RAM/CPU/core counts, and 30 to 40 MB/s is the maximum I could get from a VM. Everywhere I read that KVM is WAY better than VMware for speed. Has anyone here really reached 80% of native performance with KVM, or is that just a dream?
 

Hi,
I made some tests and can't reproduce your issue. Of course KVM costs some performance, but not that much.
Here are my results.

On a kvm-vm with one core, ubuntu and a 50GB virtio-disk:
Code:
# bonnie++ -u root -f -n 0 -r 8000 -d testdir                         
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bonnie-virtio  16G           149804  27 49167  11           182380  19  1441   9
bonnie-virtio,16G,,,149804,27,49167,11,,,182380,19,1441.3,9,,,,,,,,,,,,,

# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=bigfile bs=1024k count=8192 conv=fdatasync
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 153.525 s, 56.0 MB/s

# echo 3 > /proc/sys/vm/drop_caches # also on the host!
# dd if=bigfile of=/dev/null bs=1024k
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 47.7465 s, 180 MB/s

On the host:
Code:
proxmox1:~# pveperf /var/lib/vz
CPU BOGOMIPS:      27293.44
REGEX/SECOND:      1070643
HD SIZE:           543.34 GB (/dev/mapper/pve-data)
BUFFERED READS:    442.47 MB/sec
AVERAGE SEEK TIME: 5.68 ms
FSYNCS/SECOND:     5131.07
DNS EXT:           104.81 ms
DNS INT:           0.58 ms

proxmox1:/var/lib/vz# bonnie++ -u root -f -n 0 -r 8000 -d testdir
Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
proxmox1        16G           240423  28 121809  11           501508  20 855.3   0
proxmox1,16G,,,240423,28,121809,11,,,501508,20,855.3,0,,,,,,,,,,,,,

240MB/s without virtualisation and around 150MB/s with virtualisation; I think that's OK.
Normally I should use a 32GB test file for this host, but the time...

Udo
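Udo's drop_caches + dd sequence can be wrapped in a small standalone script. A sketch only: the /tmp path and the 64 MiB size are shrunk for illustration; for real numbers use a file of at least 2x RAM, as above, and for a guest benchmark drop caches on the host too:

```shell
#!/bin/sh
# Sketch of the cache-cold dd test from the post above.
# COUNT is shrunk here for illustration; use >= 2x RAM (in MiB) for real runs.
TESTFILE=/tmp/ddtest.bin
COUNT=64                       # 64 x 1 MiB = 64 MiB

# Drop the page cache so reads hit the disk, not RAM (needs root).
if [ "$(id -u)" -eq 0 ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches
fi

# Write test: conv=fdatasync makes dd wait until data reaches the disk.
dd if=/dev/zero of="$TESTFILE" bs=1024k count="$COUNT" conv=fdatasync

# Read test.
dd if="$TESTFILE" of=/dev/null bs=1024k

written=$(wc -c < "$TESTFILE")
rm -f "$TESTFILE"
```

Without the drop_caches step the read test mostly measures RAM, which is the same trap as the undersized bonnie++ file.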
 
Could you post your vmid.conf? Raw files or an LVM group?

Of course!
I use raw files to test the local storage.
Code:
# more /etc/qemu-server/119.conf 
name: bonnie-virtio
sockets: 1
vlan4: virtio=8A:B2:BF:39:03:89
ostype: l26
memory: 1500
virtio0: local:119/vm-119-disk-1.raw
boot: c
freeze: 0
cpuunits: 1000
acpi: 1
kvm: 1
ide2: none,media=cdrom
bootdisk: virtio0
onboot: 0
cores: 1

Udo
 
