Hard Disk Performance Problems with Updated 1.7 / 1.8 Version?

Torra
New Member
Jun 17, 2011
Stuttgart, Germany
Hi,

we are seeing performance problems with the updated Proxmox 1.7 and with the 1.8 version.
We tested the same VMs (backup & restore of a Win7 64-bit SP1 VM and a Small Business Server 2011 VM)
on a Proxmox host with an updated 1.7 and with a freshly installed 1.8 (original CD version), and the disk speed (read + write) inside the VMs was terrible.
On 1.7 with kernel 2.6.32-28 and no updates it is quite okay, but since we updated the 1.7 installation or upgraded it to 1.8 it is terrible.
It is also strange that in one scenario the Win7 VM writes faster than it reads!


The VMs:

SBS 2011 VM:
8GB RAM
4 CPU sockets
1 core/socket
1x 80GB virtio HDD with virtio 1.1.16 drivers

Win7-64bit-SP1 VM:
4GB RAM
4 CPU sockets
1 core/socket
1x 64GB IDE HDD

Both guests are configured as "Windows 2008" guests.


Our system:
IBM x3650 M3:
2x Xeon E5620 CPUs
24GB RAM
4x 500GB SAS HDDs in a RAID5 array on an LSI M5014 controller


Performance values (each test repeated at least 5 times):


version 1.7 installed from cd - kernel 2.6.32-28:

SBS 2011-VM:
read: 350-400 MB/s
write: 60-75 MB/s

Win7-64bit-SP1 VM:
read: 110-140 MB/s
write: 60 - 75MB/s


version 1.7 with kernel 2.6.32-33, and 1.8 installed from CD with its current kernel:

SBS 2011-VM:
read: 50-80 MB/s
write: 20-40 MB/s

Win7-64bit-SP1 VM:
read: 40-70 MB/s
write: 50-90 MB/s



A downgrade to the 2.6.18 kernel (recommended in another thread) caused even more problems, such as a total network failure.

Can anyone help us with that problem?
 
We changed the default cache setting from cache=writethrough to cache=none; see the release notes of 1.8. This explains the difference here (your benchmark seems to measure the cache).
If you still want cache=writethrough, just add it to each disk line in your /etc/qemu-server/VMID.conf file.
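For reference, an explicit cache mode on a disk line might look roughly like this (a sketch only; the VMID 101, the storage name "local" and the image filename are made-up examples):

```
# hypothetical /etc/qemu-server/101.conf disk line
virtio0: local:101/vm-101-disk-1.raw,cache=writethrough
```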

- use the latest stable version (run apt-get update/upgrade)
- do not use RAID5 if you want best read/write performance (RAID10 is recommended)
- post the results of 'pveperf'
- where are the disk image files? locally stored under /var/lib/vz?
- how do you benchmark inside Windows?
 
We changed the default cache setting from cache=writethrough to cache=none; see the release notes of 1.8. This explains the difference here (your benchmark seems to measure the cache).
If you still want cache=writethrough, just add it to each disk line in your /etc/qemu-server/VMID.conf file.
We tried this last week, without success. The values are in the same range as in the 1.8 version.

- use the latest stable version (run apt-get update/upgrade)
We have tried the latest stable version = the updated 1.8 version.

- do not use RAID5 if you want best read/write performance (RAID10 is recommended)
This does not change the situation: in version 1.7 the speed/performance of the RAID5 is okay.
Additionally, a RAID10 here would only allow 4 HDDs, and we want to add some more HDDs in the near future.

- post the results of 'pveperf'

CPU BOGOMIPS: 38401.74
REGEX/SECOND: 946938
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 263.58 MB/sec
AVERAGE SEEK TIME: 7.00 ms
FSYNCS/SECOND: 2396.95
DNS EXT: 134.36 ms
DNS INT: 63.42 ms

the speed of the host is ok.

- where are the disk image files? locally stored under /var/lib/vz?
yes. nothing changed.

- how do you benchmark inside windows?

We use a program called "PassMark PerformanceTest":
http://www.passmark.com/products/pt.htm
 
So you got identical values with cache=writethrough? Just to be sure: you need to power off the VM to apply such a change to the setting.
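As a sketch of that full power-off cycle on the host, using the qm management CLI (the VMID 101 is a hypothetical example):

```shell
#!/bin/sh
# A suspend/resume keeps the old KVM process (and its cache mode) alive;
# only a full stop/start picks up the new cache= setting.
VMID=101   # hypothetical VM ID
if command -v qm >/dev/null 2>&1; then
    qm stop "$VMID"    # hard poweroff: terminates the running KVM process
    qm start "$VMID"   # starts a new KVM process with the new disk options
else
    echo "qm not found - run this on the Proxmox VE host"
fi
```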
 
Oh sorry... I rebooted the server, and Proxmox only suspended the VMs.
The VMs have now been properly restarted.

the new values:

SBS 2011-VM:
read: ~76MB/s
write: ~93 MB/s

Win7-64bit-SP1 VM:
read: ~ 52 MB/s
write: ~59 MB/s


This is a little better than without that option, but compared to the 140 MB/s in 1.7 it is still much slower.
 
Can you do some tests with HD Tune (there is a trial edition)? Just add another disk, e.g. 10 GB, and test on that; you do not need to create a partition. (The HD Tune write test destroys the data on the disk, so you need the extra disk.)

And post the output of 'pveversion -v'.
 
Can you do some tests with HD Tune (there is a trial edition)? Just add another disk, e.g. 10 GB, and test on that; you do not need to create a partition. (The HD Tune write test destroys the data on the disk, so you need the extra disk.)

And post the output of 'pveversion -v'.

pveversion -v:

pve-manager: 1.8-18 (pve-manager/1.8/6070)
running kernel: 2.6.35-1-pve
proxmox-ve-2.6.35: 1.8-11
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.35-1-pve: 2.6.35-11
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.27-1pve1
vzdump: 1.2-13
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.1-1
ksm-control-daemon: 1.0-6


Results of the erase (write) test with HD Tune:

SBS 2011 VM: 86.8 MB/s

Win7-VM: 40.2 MB/s
 
SBS 2011 and Win7 use the same basis, so I do not get why you see such a difference - it looks like you are doing something different on these two VMs.
 
The SBS has a virtio HDD and the Win7 VM an IDE HDD.
But that cannot be the whole reason, because both systems are freshly installed.
Additionally, the performance values on the SBS are even worse (only 5-10 MB/s write speed)
when both VMs are working at the same time (the values posted above were measured separately).


We are trying the RAID10 configuration at the moment...
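For comparison, the two disk types come down to the disk line in the VM config; a hypothetical sketch (the VMIDs, storage name and filenames are made up):

```
# hypothetical /etc/qemu-server/<VMID>.conf disk lines
virtio0: local:101/vm-101-disk-1.raw   # paravirtualized virtio - needs guest drivers
ide0: local:102/vm-102-disk-1.raw      # emulated IDE - works without drivers, more overhead
```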
 
I've read the roadmap for Proxmox VE 2.x, and there is a feature called "IO limits for VMs".
Could it be that this feature is already integrated in version 1.8 (but does not work correctly)?
 
no.
 
Hi,

we have been testing the whole week (also on a freshly installed IDE and a freshly installed virtio Windows 7 VM) and we have a new possible cause:

could the RAM manager (which distributes the RAM to each single process / VM) be the problem?

our results:

VM with 4GB RAM:
only ~650MB are used
write speed: ~90 MB/s
read speed: ~25 MB/s

VM with 10GB RAM:
about the same values as with 4GB

Afterwards we used a tool named "memtest" (Link) to occupy additional RAM.

For example on the 4GB VM:
Windows uses 650MB, and memtest occupies an additional 3GB, so 3722MB of 4096MB are in use.
While the tool is running, we get better performance values, around 150 MB/s for both write and read speed!
Same effect on the 10GB VM!

Then we also tested the VMs with only 2GB RAM and without the tool:
the performance values are again around 150 MB/s for both write and read speed!
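To separate real disk throughput from RAM caching effects, one can also run a quick sequential write test on the host with a forced flush; a minimal sketch (the temp-file location and size are arbitrary):

```shell
#!/bin/sh
# conv=fdatasync forces the data to be flushed to disk before dd reports
# its rate, so the figure is not inflated by the page cache.
f=$(mktemp /tmp/ddtest.XXXXXX)
dd if=/dev/zero of="$f" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$f"
```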
 
