PVE 3 kills Windows 7 Performance

AndyL

Member
Jul 28, 2011
Previously, I had two Windows 7 guests, one Windows XP guest and one NAS4Free guest. All of these were running on a Dell R210, with one SATA drive for the VMs and the operating system, and another SATA drive for storage. With PVE 2.3, performance was very good. VMs would start and stop quickly, and were very responsive when VNC'd or RDP'd to. The NAS4Free guest wasn't being used, although it was running. As PVE 3 had arrived, I decided to update to the latest version following the instructions on this site. All appeared to go OK, but after rebooting my Windows VMs were running very slowly. One of them would take around 5 minutes to start up, whereas before it took maybe a minute and a half. IO seemed very slow on the VMs too. I wondered if things might be better with the virtio drivers instead of standard IDE, but it made no difference.
I then thought that maybe something had gone wrong with the update, so I copied all my conf files and disk files off the server, installed PVE 3 from a fresh ISO onto a new disk and ran the update/dist-upgrade. I then copied the conf files and the disk files back into the correct locations. This made no difference: everything is still as slow as it was before, and virtio or IDE makes no difference. I have also noticed quite a lot of IO delay (the red line) on the node's CPU usage graph, which I don't recall seeing before.
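In case it matters, the copy off and copy back were roughly along these lines (default local storage paths; /mnt/backup is just an example mount point for the spare drive I used):

# save the guest configs and disk images (/mnt/backup is an example mount point)
mkdir -p /mnt/backup/conf
cp /etc/pve/qemu-server/*.conf /mnt/backup/conf/
cp -a /var/lib/vz/images/ /mnt/backup/

# after the fresh PVE 3 install from the ISO
apt-get update
apt-get dist-upgrade

# put the configs and images back in the default locations
cp /mnt/backup/conf/*.conf /etc/pve/qemu-server/
cp -a /mnt/backup/images/ /var/lib/vz/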

Here is my pveversion:

pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1


Here are the conf files for the Windows guests:

boot: dc
bootdisk: ide0
cores: 4
cpuunits: 5000
ide0: local:100/vm-100-disk-5.qcow2,format=qcow2,size=40G
ide2: none,media=cdrom
ide3: local:100/vm-100-disk-3.raw,format=raw,size=1073664
memory: 3000
name: SystemManager
net0: e1000=D2:D3:17:2C:F1:C7,bridge=vmbr0
onboot: 1
ostype: win7
sockets: 1
startup: order=1,up=20,down=60


boot: dc
bootdisk: ide0
cores: 4
ide2: none,media=cdrom
memory: 3000
name: EngineersPC
net0: e1000=D2:10:DF:17:5A:05,bridge=vmbr0
onboot: 1
ostype: win7
sockets: 1
startup: order=3,up=20,down=60
virtio0: local:300/vm-300-disk-3.qcow2,format=qcow2,size=40G
virtio1: local:300/vm-300-disk-2.raw,format=raw,size=1073664

args: -serial /dev/ttyS0
boot: dc
bootdisk: ide0
cores: 1
ide2: none,media=cdrom
memory: 1200
name: SimPC
net0: e1000=BE:B7:A2:45:EC:6C,bridge=vmbr0
onboot: 1
ostype: wxp
scsihw: virtio-scsi-pci
sockets: 1
startup: order=4,up=30,down=0
usb0: host=104b:0001
virtio0: local:200/vm-200-disk-2.qcow2,format=qcow2,size=25G
virtio1: local:200/vm-200-disk-1.raw,format=raw,size=1073664
virtio2: local:200/vm-200-disk-3.qcow2,format=qcow2,size=8G

Something has really killed the performance of the Windows guests. I'm no expert, so all help, advice and requests for further information are very welcome.

Andy
 

Hi, the kvm version is the same (qemu 1.4) on Proxmox 2.3 and Proxmox 3.0.
So maybe it's a kernel driver problem with your RAID controller?
 
Thanks for the reply. There is no RAID controller. It's just a single SATA drive. The hardware is the same - I haven't changed the server at all. It's a standard Dell R210 II, so maybe it's something in the kernel that's changed with regard to the Dell hardware?
 
With all VM's running:

pveperf
CPU BOGOMIPS: 55872.00
REGEX/SECOND: 1253976
HD SIZE: 57.09 GB (/dev/mapper/pve-root)
BUFFERED READS: 73.99 MB/sec
AVERAGE SEEK TIME: 63.84 ms
FSYNCS/SECOND: 6.68
DNS EXT: 101.36 ms
DNS INT: 106.76 ms

And with all VM's shut down:

pveperf
CPU BOGOMIPS: 55872.00
REGEX/SECOND: 1667433
HD SIZE: 57.09 GB (/dev/mapper/pve-root)
BUFFERED READS: 106.70 MB/sec
AVERAGE SEEK TIME: 10.36 ms
FSYNCS/SECOND: 31.52
DNS EXT: 109.46 ms
DNS INT: 99.37 ms
 
FSYNCS/SECOND: 31.52 is incredibly slow; a simple 7,200 rpm SATA drive does more than 800!
There is something working really badly in your I/O, regardless of the kind of VM you are going to run there. Try to investigate and fix that first. Is it a stock installation with the ext3 file system? Have you played with any fstab parameters? Software RAID or any strange BIOS setup regarding the hard disk? Have you changed HD parameters with some tool? And so on...
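For example, a few quick checks from the host shell (assuming the system disk is /dev/sda; smartctl is in the smartmontools package):

smartctl -a /dev/sda                        # SMART health and error counters
hdparm -I /dev/sda | grep -i 'write cach'   # is the drive write cache enabled?
hdparm -tT /dev/sda                         # raw cached/buffered read speed
grep ' / ' /proc/mounts                     # mount options on the root file system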
 
I agree. It's very slow.
In answer to your questions, there have been no modifications whatsoever to the machine BIOS, filesystem or anything else. It's all stock.
This morning I installed 2.3 on the same hardware and disk as I have been using for testing all the time. Here are the results:

CPU BOGOMIPS: 55876.56
REGEX/SECOND: 1521237
HD SIZE: 57.09 GB (/dev/mapper/pve-root)
BUFFERED READS: 100.92 MB/sec
AVERAGE SEEK TIME: 10.37 ms
FSYNCS/SECOND: 34.48
DNS EXT: 112.70 ms
DNS INT: 113.25 ms (t)

So it appears that 3.0 and 2.3 give me the same results! From my first post, you can see that I have been using a different disk for testing than the one my original 2.3-updated-to-3.0 system was on. My test disk is a Seagate Barracuda 250GB 7200rpm SATA disk. I have now gone back to my original Dell (Western Digital) 500GB 7200rpm SATA disk. Here is the result for that disk, with no VMs running:

CPU BOGOMIPS: 55870.40
REGEX/SECOND: 1700192
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 122.53 MB/sec
AVERAGE SEEK TIME: 9.94 ms
FSYNCS/SECOND: 1774.13
DNS EXT: 115.03 ms
DNS INT: 209.50 ms (t)

So you can see there is a massive difference in the fsync numbers with this disk. I can only conclude that there is some compatibility or hardware problem with the disk I have been using for testing, so that test disk has been a bit of a red herring. I suspect the slowness I have been seeing in the VMs is down to some other factor. I still see a lot of iowait during startup. The intention ultimately is to run from an SSD, so that should help. I may have been blaming the update to v3 prematurely ... apologies to the developers and thanks to the community for the help.
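(To keep an eye on the iowait while the VMs boot, something like this does the job; iostat is in the sysstat package:)

apt-get install sysstat
iostat -x 2    # %iowait plus per-device await/%util, refreshed every 2 seconds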

And by the way, what does the parameter
boot: dc
mean in the conf files?
 
Just to finish this thread, I replaced the disk with an SSD. Here is pveperf now:

CPU BOGOMIPS: 55869.44
REGEX/SECOND: 1592584
HD SIZE: 54.88 GB (/dev/mapper/pve-root)
BUFFERED READS: 216.36 MB/sec
AVERAGE SEEK TIME: 0.14 ms
FSYNCS/SECOND: 3762.72
DNS EXT: 81.54 ms
DNS INT: 95.06 ms (tdh)

Somewhat better! VM startup and performance is _much_ better!
 
