Previously, I had two Windows 7 guests, one Windows XP guest and one NAS4Free guest. All of these were running on a Dell R210, with one SATA drive for the VMs and the operating system, and another SATA drive for storage. With PVE 2.3 the performance was very good: VMs would start and stop quickly, and were very responsive over VNC or RDP. The NAS4Free guest wasn't being used, although it was running. As PVE 3 had arrived, I decided to update to the latest version following the instructions on this site. All appeared to go OK, but after rebooting my Windows VMs were running very slowly. One of them now takes around 5 minutes to start up, whereas before it took maybe a minute and a half. IO seems very slow inside the VMs too. I wondered if things might be better with the virtio drivers as opposed to the standard IDE, but it made no difference.
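To be clear, the virtio test was just a matter of re-pointing the boot disk at a virtio bus (with the virtio drivers already installed in the guest). For VM 100 the disk line went from roughly

ide0: local:100/vm-100-disk-5.qcow2,format=qcow2,size=40G

to

virtio0: local:100/vm-100-disk-5.qcow2,format=qcow2,size=40G

with bootdisk changed to virtio0 to match.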
I then thought that maybe something had gone wrong with the upgrade, so I copied all my conf files and disk files off the server, installed PVE 3 from a fresh ISO onto a new disk and ran apt-get update and dist-upgrade. I then copied the conf files and disk files back into the correct locations. This made no difference: everything is still as slow as before, and virtio vs IDE makes no difference either way. I have also noticed quite a lot of IO Delay (the red line) on the node's CPU Usage graph, which I don't recall seeing before.
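If it helps, I can gather some numbers from the host while one of the guests is booting. I was planning to run something along these lines (pveperf ships with Proxmox; iostat is from the sysstat package) and can post the output:

# quick host-side benchmark of the VM storage
pveperf /var/lib/vz
# extended per-device IO stats, 10 samples at 2-second intervals
iostat -x 2 10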
Here is my pveversion -v output:
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1
Here are the conf files for the Windows guests.

VM 100 (SystemManager):
boot: dc
bootdisk: ide0
cores: 4
cpuunits: 5000
ide0: local:100/vm-100-disk-5.qcow2,format=qcow2,size=40G
ide2: none,media=cdrom
ide3: local:100/vm-100-disk-3.raw,format=raw,size=1073664
memory: 3000
name: SystemManager
net0: e1000=D23:17:2C:F1:C7,bridge=vmbr0
onboot: 1
ostype: win7
sockets: 1
startup: order=1,up=20,down=60
VM 300 (EngineersPC):
boot: dc
bootdisk: ide0
cores: 4
ide2: none,media=cdrom
memory: 3000
name: EngineersPC
net0: e1000=D2:10F:17:5A:05,bridge=vmbr0
onboot: 1
ostype: win7
sockets: 1
startup: order=3,up=20,down=60
virtio0: local:300/vm-300-disk-3.qcow2,format=qcow2,size=40G
virtio1: local:300/vm-300-disk-2.raw,format=raw,size=1073664
VM 200 (SimPC):
args: -serial /dev/ttyS0
boot: dc
bootdisk: ide0
cores: 1
ide2: none,media=cdrom
memory: 1200
name: SimPC
net0: e1000=BE:B7:A2:45:EC:6C,bridge=vmbr0
onboot: 1
ostype: wxp
scsihw: virtio-scsi-pci
sockets: 1
startup: order=4,up=30,down=0
usb0: host=104b:0001
virtio0: local:200/vm-200-disk-2.qcow2,format=qcow2,size=25G
virtio1: local:200/vm-200-disk-1.raw,format=raw,size=1073664
virtio2: local:200/vm-200-disk-3.qcow2,format=qcow2,size=8G
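One thing I haven't tried yet is forcing an explicit cache mode on the disks, in case the qemu defaults changed between versions. As I understand it, that would just mean adding a cache option to the disk line, for example:

virtio0: local:300/vm-300-disk-3.qcow2,format=qcow2,cache=writeback,size=40G

Is that worth testing, or am I barking up the wrong tree?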
Something has really killed the performance of the Windows guests. I'm no expert, so all help, advice, or requests for further information are very welcome.
Andy