Poor disk write performance in KVM Windows guests

micromedic

Hi all,

we have run several tests of Proxmox 1.5 on different machines with different hardware.
Regardless of whether we use virtio or IDE drivers, disk write performance in KVM Windows guests is really bad compared to native speed.
We measured native performance with dd (direct I/O) on a raw file, and Windows performance with HD Tune and/or IOMeter using the same block size. Write speed in the Windows guest is only about 10% of native speed.
Any advice or clues? Thx in advance.
 
Me too...
I tried installing Windows XP, and even 2003, and installation took more than 2 hours, with no other workload on the same node...
 

XP or Win2003 installs here in about 5 to 10 minutes, so something is really wrong ...
 
Dear Tom,
right, but our problem is the write speed inside the VM. Any advice?

what storage do you use for your KVM guests?
local storage? qcow2 or raw files? also post your /etc/qemu-server/VMID.conf file!

also give all details about your setup and how you benchmark, so that others can easily reproduce it and give their figures.
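
(For reference, a VMID.conf for a Windows guest looks roughly like this - name, VMID, storage and ISO are just placeholders, your values will differ:)

Code:
name: win2008
memory: 2048
ostype: w2k8
bootdisk: virtio0
virtio0: local:101/vm-101-disk-1.raw
ide2: local:iso/win2008.iso,media=cdrom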
 
Hello Tom,

we've tried a lot of different configurations. The local storage was configured once as an ext3 device and of course as LVM devices (with write cache enabled and disabled), with no effect on the write performance. We've always chosen the raw file format.

We've also set up a DRBD device with heartbeat (LVM/ext3) ... it worked just fine. We thought the poor performance might be caused by an old DRBD version or the PVE kernel, so everything was updated & upgraded to the newest version.

For the performance tests on the native machine we used dd with "oflag=direct". dd was run on:

- the block device (in our case /dev/sda3) > write speed about 250 MB/s
- the DRBD device > about 240 MB/s
- raw files > same write speed as DRBD

Read speeds have always been above 300 MB/s.
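
Concretely, the runs looked something like the following (device and image paths are just examples from our box; the block-device run of course destroys the data on that partition):

Code:
# raw block device (partition gets overwritten!)
dd if=/dev/zero of=/dev/sda3 oflag=direct bs=1G count=10
# standalone DRBD device
dd if=/dev/zero of=/dev/drbd0 oflag=direct bs=1G count=10
# raw image file on the local storage
dd if=/dev/zero of=/var/lib/vz/images/101/test.raw oflag=direct bs=1G count=10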

Inside the VM, the read and write tests were done with HD Tune (http://www.hdtune.com/) and IOMeter, with the same result. For these tests we attached two types of emulated disks to the VM (mainly Windows 2008 Server x64), IDE and virtio:

- IDE write speed > 20 - 40 MB/s
- IDE read speed > 200 - 220 MB/s
- Virtio write speed > 15 - 25 MB/s
- Virtio read speed > same as IDE

All write speed results were very noisy, with many positive and negative spikes.

But you do not really need benchmark tools to see it: a simple copy process within the VM takes a really long time.

To exclude a hardware problem, Citrix XenServer 5.5U1 was installed on the same machine:
constant write speed of about 110 - 130 MB/s

Perhaps this information gives you enough to offer some advice.

As for the VMID.conf file, hopefully you do not need it right away, because we've just installed other distributions to help us narrow down the problem.

thx

patryk
 
...
For the performance tests on the native machine we used dd with "oflag=direct". dd was run on:

- the block device (in our case /dev/sda3) > write speed about 250 MB/s
- the DRBD device > about 240 MB/s
- raw files > same write speed as DRBD

Read speeds have always been above 300 MB/s.
Hi,
how do you get a write speed of 240 MB/s on a DRBD device?
I have a DRBD device on a fast RAID, connected to the second node over a gigabit line (primary/primary as described in the Proxmox wiki), and I get something like 30 MB/s - I thought that was normal, because I selected 38 MB/s as the sync speed.
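
(The 38 MB/s is simply the limit from the syncer section of my drbd.conf, something like this:)

Code:
syncer {
    rate 38M;
}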

Udo
 
I assume you did not test 1.5 with the 2.6.18 kernel?
 
Hi Udo,

the DRBD device wasn't synced (standalone) when we tested it ... we didn't want to affect the performance, although there is a 10 Gbit crossover cable link between these machines.

greetz
patryk
 
The command is:
dd if=/dev/zero bs=1G count=10 of=/dev/sdb oflag=direct
We also did some tests with different block sizes.
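Those were essentially variations of the same command with a smaller bs and an adjusted count, roughly 1 GB per run, for example (again, /dev/sdb gets overwritten):

Code:
dd if=/dev/zero of=/dev/sdb oflag=direct bs=4K count=262144
dd if=/dev/zero of=/dev/sdb oflag=direct bs=64K count=16384
dd if=/dev/zero of=/dev/sdb oflag=direct bs=1M count=1024
dd if=/dev/zero of=/dev/sdb oflag=direct bs=10M count=100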
 
The current KVM code has too much CPU overhead - so the CPU is the limiting factor if you test with small block sizes (CPU usage will be 100%).
The following commands are executed inside a KVM VM.

A small block size gives very poor performance:

Code:
# dd if=/dev/zero of=test.img oflag=direct bs=1K count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 386.291 seconds, 2.7 MB/s

But when I run with a block size of 10M:
Code:
# dd if=/dev/zero of=test.img oflag=direct bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 15.63 seconds, 67.1 MB/s

which is native speed.
 
Hello Dietmar & Tom,

how about this: we could give you access to the server via SSH and HTTPS, and you could examine it yourself?

best regards
patryk
 
Hi Udo,

the DRBD device wasn't synced (standalone) when we tested it ... we didn't want to affect the performance, although there is a 10 Gbit crossover cable link between these machines.

greetz
patryk

Hi Patryk,
what read/write performance do you reach over the 10 Gbit connection when DRBD is running on both nodes (primary/primary)?
What kind of 10 Gbit cards do you use?

Udo
 
Before using Proxmox I was using VMware Server, and with VMware Server there was a very noticeable delay (the server was almost unusable).
With Proxmox I get a decent disk write speed.
 
