Slow write speeds in VMs! Tried 4 different hard drives - all the same.

3flight

New Member
Jan 31, 2011
My setup:
24GB ECC RAM, 2xE5506 on Supermicro X8DTL-iF motherboard.

1. I install a clean Debian 5.0.8 net-install to a 4GB USB stick plugged into this server.
2. Then I install proxmox-ve.
3. I plug in a SATA hard drive, make it a PV, make a VG, and then create a VM via the proxmox webui. The SATA drive write speed on the host (tested by dd) is fine - 80-100MB/s.
4. VM settings make no difference; say, 512MB RAM, 5GB IDE hdd on LVM storage.
5. I install Debian 5.0.8 net-install inside this VM (or Windows 2003, or Windows 2008r2).
6. Test dd write (dd if=/dev/zero of=/temp.raw bs=1M count=1000) and get around 20MB/s. (In a Windows VM I used HDTune Pro and got a similar result.) In HDTune I can see the speed bounce between 10-15MB/s and full drive speed, which averages out to around 20MB/s.

This is really bad, and I can't figure out why it is so slow inside VMs when it is fine in the host OS. Whatever I try, it is always slow, which makes my server useless as a VM host. If I set up md raid1 on two SATA drives and put LVM on top of that, I get an even worse result - around 10MB/s write speed (sometimes the same 20MB/s as a single drive). Read speed is always fine, however.

I've tried "directory" storage instead of LVM; the result is the same. Even VM OS installation is slow, so I guess this has nothing to do with drivers inside the VM. I'm going to try installing Debian on an hdd instead of the USB stick, but this shouldn't really change anything. I've also tried changing the SATA mode in the BIOS from AHCI to IDE, but that did not make any difference either.

Drives I've tested are 3x500GB Seagate (RAID edition) and 1x640GB WD (Green edition).

Please help me resolve this issue.
 
Strange. I just benchmarked a KVM guest using a .raw file as HDD in IDE mode:

Code:
hdparm -t /dev/hda

/dev/hda:
 Timing buffered disk reads:  234 MB in  3.04 seconds =  77.07 MB/sec

The host has a simple SATA HDD without RAID, so it shouldn't put out more than 90~100 MB/s. I will test that later; hdparm is not installed on the host.
 
Using an Ubuntu 10.10 guest with a virtio disk:

Code:
root@sphinx:~# hdparm -t /dev/vda

/dev/vda:
 Timing buffered disk reads: 154 MB in 3.01 seconds = 51.18 MB/sec
 
Hi,
your values are bad - especially since you are using RAM for caching!

First, try the following:
create an LV, run mkfs.ext3 on the LV, and mount it (e.g. at /mnt)
test read speed:
Code:
pveperf /mnt
test write speed (without memory cache):
Code:
dd if=/dev/zero of=/mnt/bigfile bs=1024k count=8192 conv=fdatasync
Check with iostat (e.g. "iostat -dm 5 sda sdb") whether there is any other IO going on (e.g. the usb-stick).

I would prefer to test with a pure proxmox-ve installation on a single disk and compare the speed.

BTW, you won't reach full speed without a RAID controller and some fast disks...

Udo

Edit: iostat: apt-get install sysstat
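Putting the steps together (just a sketch - the VG name "test" and LV name "lvol0" are only examples, use your own):
Code:
lvcreate -L 5G -n lvol0 test                    # 5GB LV in VG "test"
mkfs.ext3 /dev/test/lvol0
mkdir -p /mnt && mount /dev/test/lvol0 /mnt
pveperf /mnt                                    # read speed + fsyncs/second
dd if=/dev/zero of=/mnt/bigfile bs=1024k count=8192 conv=fdatasync
iostat -dm 5 sda sdb                            # watch for unrelated IO while dd runs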
 
Ok, so I went and tested write performance too, here's what I got:

1) on the host
Code:
# time sh -c "dd if=/dev/zero of=tmp1 bs=8k count=1000000 && sync"
1000000+0 records in
1000000+0 records out
8192000000 bytes (8.2 GB) copied, 124.421 s, 65.8 MB/s

real    2m16.196s
user    0m0.284s
sys     0m15.833s
8192000000/(120+16+15)= 54 MB/s

2) inside a KVM guest:
Code:
# time sh -c "dd if=/dev/zero of=tmp1 bs=8k count=1000000 && sync"
1000000+0 records in
1000000+0 records out
8192000000 bytes (8.2 GB) copied, 789.95 seconds, 10.4 MB/s

real    14m16.922s
user    0m0.390s
sys     0m39.450s
8192000000/(840+16+39)= 9 MB/s

So yes, the performance DOES get ugly for me too :mad: Thanks for pointing this out. At least now I know what to expect. In any case KVM performs better than WinXP+vmplayer on the same hardware :)
 
Ok, so I went and tested write performance too, here's what I got:

1) on the host
Code:
# time sh -c "dd if=/dev/zero of=tmp1 bs=8k count=1000000 && sync"
...
8192000000/(120+16+15)= 54 MB/s
dd if=/dev/zero of=tmp1 bs=8k count=1000000 conv=fdatasync
should give the same results - with sync you also measure the time for syncing all other buffers (but the difference will be small).
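For comparison, the two variants side by side:
Code:
# flushes *all* dirty buffers system-wide; that time shows up in the outer "time":
time sh -c "dd if=/dev/zero of=tmp1 bs=8k count=1000000 && sync"
# flushes only tmp1 before dd prints its rate, so the MB/s figure is already honest:
dd if=/dev/zero of=tmp1 bs=8k count=1000000 conv=fdatasync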
...
So yes, the performance DOES get ugly for me too ...
You can try the disk option "cache=none" in the config file of the VM.
Which driver do you use, IDE or virtio? Perhaps you'll get a better result with virtio.
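For example (just a sketch - storage and volume names are placeholders, take the disk line your VM already has and append the option; cache=none makes KVM open the disk with O_DIRECT, bypassing the host page cache):
Code:
# /etc/qemu-server/<vmid>.conf - placeholder names, keep your own disk line:
virtio0: local:vm-101-disk-1,cache=none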

Udo
 
The VM uses the IDE driver. For now the performance of this particular VM satisfies me; I just tested it to see if I'd get the same results as the topic starter.

I'll run some tests in the future and try to find a better-performing combination.
 
The machine I tested this on is a pretty old VM running Ubuntu 7.04, migrated from vmplayer. It mostly does disk reads, not writes. I just didn't bother testing other options and copied the hardware settings over from vmplayer. For new machines I will consider using virtio since I'm now aware of the performance bottleneck!
 
Hi,
your values are bad
My test results:
[usb -> deb -> pve | WD Green 640GB -> 5GB LV -> mounted at /mnt]
Code:
pveperf /mnt
CPU BOGOMIPS: 34132.92
REGEX/SECOND: 787307
HD SIZE: 4.92 GB (/dev/mapper/test-lvol0)
BUFFERED READS: 92.10 MB/sec
AVERAGE SEEK TIME: 9.41 ms
FSYNCS/SECOND: 797.90
DNS EXT: 108.04 ms
DNS INT: 0.78 ms (tmi.local)

Code:
dd if=/dev/zero of=/mnt/bigfile bs=1024k count=4000 conv=fdatasync
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 135.082 s, 31.1 MB/s

iostat during dd (iostat -xm 1):
Code:
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  8128.00    0.00   82.00     0.00    41.00  1024.00   141.57 1677.80  12.20 100.00
sda1              0.00  8128.00    0.00   82.00     0.00    41.00  1024.00   141.57 1677.80  12.20 100.00
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb5              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00 8192.00     0.00    32.00     8.00 17993.48 2149.62   0.12 100.00

But I noticed a slowdown in writing during dd - iostat showed wMB/s=1.00 for about 10 seconds. That's why the speed is 31MB/s and not the 40+ it should be. I will try the Seagate RAID edition drive now and repeat these tests.
sdb is the USB stick and is barely used; dm-0 is the LV, I guess.
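One way to check instead of guessing which dm device is which:
Code:
dmsetup ls                      # device-mapper names with their (major, minor) numbers
ls /sys/block/dm-0/slaves/      # the physical device(s) dm-0 sits on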
 
500GB Seagate drive -> 5GB LV -> mounted at /mnt2

Code:
pveperf /mnt2
CPU BOGOMIPS: 34132.92
REGEX/SECOND: 791701
HD SIZE: 4.92 GB (/dev/mapper/test2-lvol0)
BUFFERED READS: 107.92 MB/sec
AVERAGE SEEK TIME: 7.02 ms
FSYNCS/SECOND: 554.27
DNS EXT: 97.17 ms
DNS INT: 0.81 ms (tmi.local)

Code:
dd if=/dev/zero of=/mnt2/bigfile bs=1024k count=4000 conv=fdatasync
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 51.0695 s, 82.1 MB/s

iostat during dd (iostat -xm 1):

Code:
Device:         rrqm/s   wrqm/s     r/s      w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda1              0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb1              0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb2              0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb5              0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00     0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00 20302.00    0.00   171.00     0.00    84.00  1006.04   286.18 1185.33   5.85 100.00
sdc1              0.00 20301.00    0.00   171.00     0.00    84.00  1006.04   286.18 1185.33   5.85 100.00
dm-1              0.00     0.00    0.00 20459.00     0.00    79.92     8.00 35881.45 1245.06   0.05 100.00

No slowdowns during write.

Now I'm going to create VMs on each of the drives via the proxmox webui, start the VMs, install Debian, and run the dd test inside them.
 
VM residing on LVM on the WD Green:

Code:
dd if=/dev/zero of=/bigfile bs=1024k count=4000 conv=fdatasync
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 468.462 s, 9.0 MB/s

iostat on host during dd in guest:
Code:
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  2352.00   75.00   77.00     0.29     9.49   131.79     0.94    5.55   6.16  93.60
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
Speed was unstable, varying from 3MB/s to 23MB/s during dd.

_____________________________________________________________

VM residing on LVM on the Seagate:

Code:
dd if=/dev/zero of=/bigfile bs=1024k count=4000 conv=fdatasync
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 202.159 s, 20.7 MB/s

iostat on host during dd in guest:
Code:
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00  5244.00  169.00  167.00     0.66    21.01   132.10     0.94    2.80   2.80  94.00
Speed value in iostat was stable during dd.

_____________________________________________________________

Overall - bad performance compared to the host. The VM on the WD is mostly unusable; the VM on the Seagate is usable but still inadequately slow.

I will run 2 more tests:
-give a VM direct access to the hdd (via tweaking the vmid.conf file, sketched below) instead of LVM -> test VM write speed
-install proxmox to the hdd from the ISO distr -> test VM write speed
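For the direct-access tweak, a line like this should do it (the device name is just an example - double-check which disk is which before pointing a VM at it):
Code:
# /etc/qemu-server/<vmid>.conf - hand the guest the whole physical disk:
ide0: /dev/sdc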
 
Gave the VMs direct access to the hdds. Simultaneous Debian installation in both VMs; iostat on the host:
Code:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.11    0.00    3.57   27.07    0.00   67.26

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  2240.00   75.00   70.00     0.29     9.02   131.59     0.96    6.68   6.65  96.40
sdc               0.00  4447.00  148.00  140.00     0.58    17.92   131.53     0.96    3.32   3.33  96.00
Looks like there is no difference - the WD is still around 10MB/s and the Seagate around 20MB/s. Creating ext3 partitions on these drives during the Debian installation takes forever.

The last thing will be removing the USB stick and installing the proxmox ISO distr on an hdd, then the same dd write test. Then I'm going to test ESXi performance on the same host.

UPD: VMs finally installed, ran dd and yes - there's no difference compared to LVM storage - same 10MB/s for WD and 20MB/s for Seagate.
 
... Then I'm going to test ESXi performance on the same host.
Hi,
I did such a test approx. one year ago, but on a system with a fast RAID controller and WD Raptor disks in RAID-10. Short answer: proxmox wins!
Code:
Proxmox (write)
Profile:  swapping    installing    word     photoshop   copying    f-prot   Index
###################################################################################
Proxmox   91 MB/s      160 MB/s    174 MB/s   148 MB/s   222 MB/s   139 MB/s   147
Other     54 MB/s       36 MB/s    210 MB/s    73 MB/s   350 MB/s    21 MB/s    77
XP-Native 45 MB/s       31 MB/s    103 MB/s    79 MB/s   304 MB/s    14 MB/s    59
The proxmox version was 1.5 (kvm-0.12.3).
"Other" means the well-known virtualization software which costs a lot of money.
The client was WinXP; the benchmark was h2benchw.
Also nice: native IO on WinXP (installed directly on the host) is slower!! One more reason to use free software...

Udo
 
nano /etc/qemu-server/101.conf
scsi0:blahblah:vm-101-disk-1,cache=none
Tried this and got native drive performance in both VMs. I think this option should be the default in proxmox, because the speed increase is dramatic! Why didn't you make it the default??
I will test LVM and raw file storage modes with this option now.
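If the raw-file mode takes the option the same way, it would presumably look like this (the volume ID is a placeholder):
Code:
# directory storage variant - placeholder volume ID:
ide0: local:101/vm-101-disk-1.raw,cache=none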
 
nano /etc/qemu-server/101.conf
scsi0:blahblah:vm-101-disk-1,cache=none
Tried this and got native drive performance in both VMs. ...
Hi,
why do you use SCSI? I think virtio (or IDE) is better supported (more widely used).

Udo
 
why do you use SCSI? I think virtio (or IDE) is better supported (more widely used).

Udo
Makes no difference. I tried all 3 modes with a Windows VM - absolutely no difference, same bad performance. So I'm recovering this VM right now to test it with the "cache=none" option.
 
Yes, that's the plan.
 
Hi to all, does anyone know the result? I have the same problem, but on the following server: IBM x3650 M3 with 16GB of RAM, where I configured:
2 Windows Server 2003 VMs
1 VM with a distribution named SME Server 7.4
The write process is very slow in all KVM VMs, with the difference that the Windows 2003 VMs use virtio HDs while the SME one uses IDE.

Question: how can I change from IDE to virtio in an SME Server VM?

Thanks in advance,
Eviny
 
