Proxmox HDD performance slow after sysprep in Windows guests

croeper — New Member, Aachen (www.berke.biz), joined Nov 17, 2009
Hello,

we started migrating to Proxmox VE about two months ago for our server virtualization needs and are very pleased. We began migrating existing VMware Server guest machines using mrshark's excellent guide (in the forum and in the wiki).
The Windows 2003/2008 guest machines are running fine with the IDE disk controller and e1000 network card (after a simple-to-resolve network issue: beware, if you clone the MAC address along with the machine, make sure to shut down the SystemRescueCD machine before starting the new one).
The next step was creating a new template machine (Windows 2003; 2008 not yet tested) as a base for cloning new installs. We saved a DriveSnapshot image of that machine and sysprep'd it (we recently switched from NewSID to sysprep because of this post by Mark Russinovich). So far, so good.
But (of course there is a but, that is why there is a post) at first the resulting machine was very slow, way slower than the template or the migrated machines. After some fine tuning in the sysprep.inf file the problem seemed to be solved, but it really was not; using the machine and installing software made it slow again.
I know there could be several reasons for this, but in short, here are my findings and why:

* All machines (migrated or cloned) have varying combinations of the same software installed (SQL Server DB, Oracle DB, ERP software etc.), and it is not that the clones get slow after installing one particular piece of software; sometimes they are slow right after sysprep, sometimes after installing an SQL Server Service Pack, sometimes after importing an Oracle database dump.
* All machines (migrated or cloned) run with the same virtual hardware, i.e. IDE controller and e1000 network; the Intel network driver is installed in all of them.
* The slow HDD performance was measured with the c't tool h2benchw; a migrated machine (fast) shows something like
Code:
Interface transfer rate with block size 128 sectors at 0.0% of capacity:
      Sequential read rate, medium (unthrottled): 168860 KByte/s
      Sequential read rate, read-ahead (delay: 0.42 ms): 325893 KByte/s
      Repeated sequential read ("core test"): 379077 KByte/s
for Windows 2003 running OracleDB or
Code:
Interface transfer rate with block size 128 sectors at 0.0% of capacity:
      Sequential read rate, medium (unthrottled): 196817 KByte/s
      Sequential read rate, read-ahead (delay: 0.36 ms): 389657 KByte/s
      Repeated sequential read ("core test"): 421334 KByte/s
for Windows 2008 running SQL Server. In contrast, a cloned Windows 2003 machine with nothing spectacular running (slow) shows this
Code:
Interface transfer rate with block size 128 sectors at 0.0% of capacity:
      Sequential read rate, medium (unthrottled): 16138 KByte/s
      Sequential read rate, read-ahead (delay: 4.36 ms): 16076 KByte/s
      Repeated sequential read ("core test"): 16905 KByte/s
That is a factor of ten or worse.
* The original template (before sysprep) is as fast as the migrated machines, so the template base is OK.
* I did several cross tests, with several machines running, with only one machine running, etc. The results were basically the same, with only minor differences in the values.
* There is no significant IO Delay in Proxmox, about 0-5%, mostly around 1-2%.
* I am fairly sure it has something to do with sysprep, but why only in combination with Proxmox? Sysprepping a machine worked fine with VMware Server, and searching the Internet did not turn up this kind of problem with sysprep at all; or am I missing something?
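One way to narrow this down further would be to rule out the storage layer entirely by timing a sequential read of the VM's disk image directly on the Proxmox host; if the raw image of a "slow" clone reads as fast as that of a "fast" machine, the bottleneck must be inside the guest. The image path is an assumption (with file-based storage, PVE 1.x keeps raw images under /var/lib/vz/images/<VMID>/; adjust for your "inforMachines" storage); in this self-contained sketch a scratch file stands in for the image:

```shell
#!/bin/sh
# Sketch of a host-side sequential-read check. In practice, replace $img with
# the clone's disk image, e.g. /var/lib/vz/images/121/vm-121-disk-1.raw
# (assumed path); here we create a 64 MB scratch file as a stand-in.
img=/tmp/readtest.img
dd if=/dev/zero of="$img" bs=1M count=64 2>/dev/null

# Read the whole image sequentially; dd reports bytes copied and throughput
# on stderr, so capture that and pull out the byte count.
bytes=$(dd if="$img" of=/dev/null bs=1M 2>&1 | awk '/bytes/ {print $1}')
rm -f "$img"
echo "$bytes"
```

Comparing the reported throughput for a fast and a slow machine's image should tell you whether the RAID 6 array or the guest-side IDE emulation is to blame.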

The output of pveversion is as follows
Code:
logos:~# pveversion -v
pve-manager: 1.5-5 (pve-manager/1.5/4627)
running kernel: 2.6.24-10-pve
proxmox-ve-2.6.24: 1.5-21
pve-kernel-2.6.24-10-pve: 2.6.24-21
pve-kernel-2.6.24-9-pve: 2.6.24-18
pve-kernel-2.6.24-8-pve: 2.6.24-16
qemu-server: 1.1-11
pve-firmware: 1.0-3
libpve-storage-perl: 1.0-8
vncterm: 0.9-2
vzctl: 3.0.23-1pve8
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.11.1-2
All this is running on an Intel Modular Server MFSYS25 with an MFS5520VI compute module, 24 GB RAM and 6 Seagate SAS 500 GB drives in a RAID 6; Proxmox itself is installed on an SSD. So apart from these cloned sysprep machines, everything is lightning fast.

I am starting to run out of ideas, so my question is: does anybody have the same problem, or an idea how to get the clones up and running fast again (I would like to avoid reinstalling, and I would like to have an explanation for the sysprep disaster)? If more information is needed, I can certainly post it, but for now I am not sure what might be relevant.
Thanks for any help in advance, kind regards

christoph
 
just a quick guess after a long day: do you have identical mac addresses somewhere? check /etc/qemu-server/VMID.conf files.
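The .conf check above can be scripted; this is a sketch (the here-doc holds made-up sample data — in reality you would feed it `grep -h 'vlan' /etc/qemu-server/*.conf`):

```shell
#!/bin/sh
# Hypothetical sample of MAC lines as they appear in /etc/qemu-server/*.conf;
# replace the here-doc with the real grep over your config files.
macs=$(cat <<'EOF'
vlan0: e1000=AA:BB:CC:3F:1A:65
vlan0: e1000=AA:BB:CC:3F:1A:66
vlan0: e1000=AA:BB:CC:3F:1A:65
EOF
)

# Strip everything up to '=', sort the MACs, and print any address that
# appears more than once (uniq -d shows duplicated lines once).
dupes=$(printf '%s\n' "$macs" | sed 's/.*=//' | sort | uniq -d)
echo "$dupes"
```

An empty result means every VM has a unique MAC.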
 
just a quick guess after a long day: do you have identical mac addresses somewhere? check /etc/qemu-server/VMID.conf files.

I checked all our *.conf files, but they look okay so far
Code:
vlan0: e1000=xx:xx:xx:3F:1A:65
vlan0: e1000=xx:xx:xx:3F:1A:66
vlan0: e1000=xx:xx:xx:3F:1A:67
vlan0: e1000=xx:xx:xx:3F:1A:6A
vlan0: e1000=xx:xx:xx:3F:1A:79
vlan0: e1000=xx:xx:xx:3F:1A:7A
vlan1: e1000=xx:xx:xx:3F:1A:7B
vlan0: e1000=xx:xx:xx:3F:1A:C7
vlan1: e1000=xx:xx:xx:5B:E4:1C
vlan1: e1000=xx:xx:xx:A5:76:93
vlan0: e1000=xx:xx:xx:3F:1A:0C
vlan0: e1000=xx:xx:xx:3F:1A:0E
(ubuntu-openbravo-svr)vlan0: virtio=xx:xx:xx:3F:1A:FA
A typical .conf file of a migrated, working (fast) machine looks like
Code:
name: **-***101
ide2: none,media=cdrom
bootdisk: ide0
ostype: w2k3
memory: 2048
sockets: 1
boot: cd
freeze: 0
cpuunits: 1000
acpi: 1
kvm: 1
ide0: inforMachines:vm-101-disk-1
onboot: 0
cores: 1
vlan0: e1000=xx:xx:xx:3F:1A:65
description: d.velop 6.1.1
A typical .conf file of a cloned, non working (slow) machine looks like
Code:
name: **-***_121
ide2: none,media=cdrom
ostype: w2k3
ide0: inforMachines:vm-121-disk-1
memory: 2048
sockets: 1
vlan0: e1000=xx:xx:xx:3F:1A:79
boot: c
freeze: 0
cpuunits: 1000
acpi: 1
kvm: 1
bootdisk: ide0
Just asking: can a MAC mismatch cause slow hard disk performance? Okay, in the holistic computer world, everything can have something to do with everything (DNA)... :eek:
Is it important that there are no cores and onboot entries in the second config, or does it just mean that I never pressed the save button on the first Status page in the Proxmox VE web interface?
Best regards,

christoph
 
Is it important that there are no cores and onboot entries in the second config, or does it just mean that I never pressed the save button on the first Status page in the Proxmox VE web interface?

No, that is not important. I can't see a real difference in the config.
 
Although nobody seems to have had the same problem (or an idea for a solution), inspired by the thread Paravirtualized driver for Windows I tested and installed the virtio block (SCSI) driver, and this not only solved the problem (the tests look good), it even sped up the HDD performance.
In short, I used the tip from tom in that thread (add an extra virtio HDD -> install the driver -> delete it and switch the existing HDD to virtio -> reboot) to get an existing (sysprep'd) Proxmox VM up and running, although the drivers in the ISO provided in the thread did not work for me. Instead I used the drivers from the linux-kvm site, or rather the ISO here (with additional interesting links). For new installs I recommend slipstreaming the driver into the Windows installation; see the same Proxmox forum thread mentioned above.
The h2benchw values are now way better, even on sysprep'd machines, like
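For reference, the final "switch the existing HDD to virtio" step boils down to renaming two entries in the VM's config file. This is a sketch that demonstrates the edit on a throwaway copy (the sample conf content is made up; on a real system you would back up and edit /etc/qemu-server/<VMID>.conf with the VM shut down, after the viostor driver is installed in the guest):

```shell
#!/bin/sh
# Stand-in for /etc/qemu-server/121.conf -- sample content for illustration.
conf=/tmp/121.conf
cat > "$conf" <<'EOF'
name: clone-121
bootdisk: ide0
ide0: inforMachines:vm-121-disk-1
ide2: none,media=cdrom
EOF

# Rename the disk entry and the boot disk from ide0 to virtio0, then the VM
# boots from the paravirtualized controller on the next start.
sed -i -e 's/^ide0:/virtio0:/' -e 's/^bootdisk: ide0$/bootdisk: virtio0/' "$conf"
cat "$conf"
```

The temporary extra virtio disk from the earlier step only exists so Windows installs the driver before the boot disk changes; without it, the VM blue-screens with an inaccessible boot device.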
Code:
Interface transfer rate with block size 128 sectors at 0.0% of capacity:
      Sequential read rate, medium (unthrottled): 171767 KByte/s
      Sequential read rate, read-ahead (delay: 0.41 ms): 456425 KByte/s
      Repeated sequential read ("core test"): 451686 KByte/s
We are still evaluating the new drivers, but all the tests now look very promising; it seems our performance problem is solved. Anyway, I still have no idea why we ran into this in the first place with sysprep and the IDE HDD controller in Proxmox KVM. :confused:
Best regards

christoph
 
