Windows disk IO Performance

e100

Renowned Member
Nov 6, 2010
I have been using Windows with the Fedora virtio drivers for a couple of years now.
One thing that has bothered me is even with the virtio drivers disk IO is still limited inside of the VM.

I have observed that writing to multiple virtio disks from within windows can have a total IO throughput twice as fast as writing to a single disk.
So I decided to do an experiment with my 2.0 RC1 setup.

My disk storage is DRBD over InfiniBand to an Areca 1880ix-12 with 4GB cache and 6 500GB Seagate 7200 RPM SATA III disks in RAID 5.
Disks are set up with cache=none.
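For reference, a virtio disk entry with cache=none in a Proxmox VM config looks something like this (the storage and volume names here are just examples, substitute your own):

```
# /etc/pve/qemu-server/<vmid>.conf  (illustrative; names are examples)
virtio0: drbd-lvm:vm-101-disk-1,cache=none
```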

First I benchmarked a single virtio disk:
[attachment: singledisk.png]

Next I added three virtio disks to this same Windows VM.
All three disks are on the same LVM volume as the test above.
In Windows I configured the three disks as a software stripe.
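If anyone wants to reproduce the stripe, it can be built with a diskpart script along these lines (the disk numbers are examples, verify yours with "list disk" first):

```
rem diskpart script: stripe three dynamic disks into one volume
rem (disk numbers and drive letter are examples)
select disk 1
convert dynamic
select disk 2
convert dynamic
select disk 3
convert dynamic
create volume stripe disk=1,2,3
format fs=ntfs quick
assign letter=E
```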

Benchmark results of the striped volume:
[attachment: stripeddisk.png]

In both examples the data is being written to the exact same physical disks.
But when Windows thinks it has three disks rather than one, performance increases drastically.

Is KVM allowing more IO because of the three disks? Maybe one thread per emulated disk or something?
or
Is Windows performing more IO because it thinks it has three disks?

Seeing that it is possible to really push IO to a Windows VM makes me wonder if there is some performance tuning that could improve throughput on a single disk, rather than doing a software stripe in Windows.

Anyone have any ideas on where to look for possible performance enhancing tweaks?
 
Could it be that CrystalDiskMark is not very accurate on VMs?

I created a Windows VM with 2 disks set up as a mirror, and CrystalDiskMark was reporting performance twice as fast as with a single disk... that would make sense for striped volumes but not for mirrored ones.

I have noticed that virtio driver performance is about 20% better than IDE (according to CrystalDiskMark)... A few people, however, have reported stability issues with the virtio driver. I would like to run some stress tests on that VM to properly test the stability of the driver.
 
I've observed these results with other tools.
I often use sdelete -c to zero unused space on my disks so my backups are smaller.

If I run it on C:\ and log into Proxmox and run "vmstat 1", I can see the block IO moving along at one rate.
Then if I also start sdelete on D:\ while the C:\ run is still going, vmstat 1 starts reporting a rate twice as fast.
This keeps scaling up until you hit the limit of your underlying storage.
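The same scaling effect is easy to see on the Linux host itself. For example (file names and sizes here are throwaway examples), run two sequential writers in parallel while watching "vmstat 1" in another terminal:

```shell
# Sketch: two parallel writers against the same backing storage.
# Watch "vmstat 1" in another terminal; the bo column climbs while
# both dd processes run, until the underlying storage saturates.
dd if=/dev/zero of=./zero1.bin bs=1M count=64 2>/dev/null &
dd if=/dev/zero of=./zero2.bin bs=1M count=64 2>/dev/null &
wait
msg="both writers finished"
echo "$msg"
rm -f ./zero1.bin ./zero2.bin
```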

It does seem odd that mirrored disks were twice as fast.
But that does not necessarily mean the result is incorrect.
Writing to multiple disks at the same time is faster.
Maybe with multiple disks Windows performs the IO in a different manner, thus improving performance.

As far as stability goes, I have not had an issue with the virtio disk drivers on Windows for a very long time.
Older drivers were buggy, but the newer ones have been solid for a long time.
I have some VMs running for over two years with virtio drivers.
The virtio network drivers have been an issue in the past, and I have not tested them recently.
 
