Win2k extremely slow after migration from vmware

jhammer

Member
Dec 21, 2009
I recently migrated a win2k VM from vmware to proxmox using this HOWTO:

http://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#VMware_to_Proxmox_VE_.28KVM.29

It is currently running very slowly. The physical host has 2 quad-core processors and 24GB of RAM. I've only got this and one other VM running on it so far. The other VM is Linux and I don't notice any slowness with it.

Are there any recommendations on improving performance on the win2k VM?

Thanks.
 
I'm by no means an expert, nor do I claim to be (I don't even play one on TV :)...

However, for anyone to be of any use to you, you need to go into a little more detail about what you consider "slow"... Do you have actual before/after benchmarks to share? What part of the server is slow? It could be something as simple as a DNS issue, or as advanced as needing to tweak some parameters in your system to get it to work properly...
 
Well, it isn't a networking issue. Launching applications local to the VM takes a 'long time'. For example, if I click on the start menu it can take 10 to 15 seconds to pop up. Applications generally take longer than that to appear... sometimes closer to a minute. When I was running this same VM under vmware the start menu would pop up immediately upon clicking it and applications took only a few seconds.

I don't really have any other benchmarks.

I'm wondering if it is disk access. When I added the disk in Proxmox I used the IDE bus, which the HOWTO I referenced implied I would need to use. I'm wondering if I should be installing a different driver or if something in the conversion could have caused it. Perhaps I can try the virtio storage drivers (http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers). I'm not sure if those work for win2k or not. Also, is there a performance advantage to using qcow2 over the raw format for the image file?

While looking into it I noticed that the disk is quite fragmented...I'll see if defragmenting helps.
 
Hi,
my experience is:
1) use raw disks for Windows speed, not qcow2
2) use cache=none by manually editing the /etc/qemu-server/<vmid>.conf file like this (full example below):
virtio0: vms02:125/vm-125-disk-1.raw,cache=none
Afterwards performance is ok even with IDE.
3) if you want to double that performance again, then use virtio
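For example, a minimal sketch of both steps, assuming the image sits under /var/lib/vz/images/125 and the storage is called vms02 as above (adjust the paths and VMID to your setup), with the VM stopped:

# convert the qcow2 image to raw
qemu-img convert -f qcow2 -O raw /var/lib/vz/images/125/vm-125-disk-1.qcow2 /var/lib/vz/images/125/vm-125-disk-1.raw

# then point the disk line in /etc/qemu-server/125.conf at the raw file with caching disabled
ide0: vms02:125/vm-125-disk-1.raw,cache=none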
 
jhammer wrote:
"I'm wondering if it is disk access. [...] Perhaps I can try the virtio storage drivers. I'm not sure if those work for win2k or not."
I had disk performance problems with Windows 2003 (tested with h2benchw), but switching to virtio solved the problem. I have not tested it myself, but there is an XP/32-bit storage driver that is said to work with Windows 2000 as well.
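If you try virtio, the config change itself is just swapping the bus type on the disk line (reusing the names from the earlier example; the guest must already have the virtio storage driver installed, otherwise Windows won't find its disk):

before, in /etc/qemu-server/125.conf:
ide0: vms02:125/vm-125-disk-1.raw,cache=none
after installing the driver in the guest:
virtio0: vms02:125/vm-125-disk-1.raw,cache=none

A common approach is to first attach a small temporary second disk as virtio so Windows can install the driver while still booting from IDE, and only then switch the boot disk over.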
Regards,

christoph
 
"And you have done some benchmarks to show that?"
Yes, I have verified it with a Windows VM and Iometer. Raw instead of qcow2 was a big performance boost, and cache=none significantly boosted the result again.
Unfortunately I did not document the results and I don't remember the exact numbers.
 
This is really interesting. Would that caching setting apply to block-attached physical disks as well? I've got a VM guest running OpenFiler with several physically mapped SATA disks in a virtualized mdraid. I'd love to be able to tweak performance a bit. Right now it's not terrible (it's primarily just read-only), but write performance is nothing to write home about.

I checked my conf file and I am using the virtio driver for my boot disk image. Would I be able to append the caching statement to my physically mapped disks?

ide0: /dev/sdc
ide1: /dev/sdd
ide2: /dev/sde
ide3: /dev/sdb
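I.e., would appending the option the same way as in the earlier example be valid here, something like this?

ide0: /dev/sdc,cache=none
ide1: /dev/sdd,cache=none
ide2: /dev/sde,cache=none
ide3: /dev/sdb,cache=none

(I'm just guessing the syntax carries over to pass-through devices; I haven't tried it yet.)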