pvetest 1.5.8

Hi,
I have made some tests with the new KVM version on pvetest (0.12.3-1).
After some initial trouble because of a version mismatch, online migration now also works on AMD CPUs with a Windows guest!
That's very good. The online migration of a Windows VM with two CPUs also works very well.
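
A quick way to rule out such a version mismatch between the two hosts (a sketch only, assuming the standard pveversion tool and pve-qemu-kvm package of a PVE install) is to compare the output on master and node before migrating:
Code:
# run on both master and node and compare the output
pveversion -v | grep -i kvm
uname -r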

My tests with IO on an SMP Windows VM look like before... (for IO reasons it's better to use only one CPU). This time I made the tests with h2benchw and kernels 2.6.32-7 and 2.6.24-22.

Udo
 
Any idea if this new kvm-qemu will solve the SMP migration problem with Ubuntu 9.10?
 
My tests with IO on an SMP Windows VM look like before...

There is also a new option for KVM drives (see "kvm --help"):

aio=threads|native

It would be interesting to know whether that has any performance impact, but I have had no time to test it so far.
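
For example (a sketch only, based on the -drive syntax that appears later in this thread; the file name is a placeholder), the flag is appended to a -drive definition on the kvm command line:
Code:
# placeholder path; aio= is set per drive on the kvm command line
-drive file=/var/lib/vz/images/105/vm-105-disk-1.raw,if=virtio,index=0,aio=threads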
 
Any idea if this new kvm-qemu will solve the SMP migration problem with Ubuntu 9.10?
Hi,
I have made some tests... it looks a little bit better, but not much.
One guest: ubuntu-9.10-amd64 with 2 CPUs. Online migration between master and node:
master to node -> works
back to master -> hangs (CPU 0%)
back to node -> works again (curious)
back to master -> hangs (CPU 100%)
back to node -> hangs (CPU 100%) - DEAD!

Something similar happens with only one CPU; sometimes the VM is dead after the first migration back to the master.
With 10.04-desktop-beta2-amd64 it is the same effect.
The test machines have two different CPUs (a Phenom 620 and an Athlon X2 5200+).
When I tried the same on two identical CPUs (with stable PVE 1.5), the online migration hung immediately!
Perhaps with identical CPUs it works better on kvm 0.12.3?!
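
For anyone who wants to reproduce the ping-pong sequence from the CLI, one way (a sketch only; VMID, node name and the exact option spelling may differ between PVE versions) is qm migrate with the online flag:
Code:
# placeholder VMID/node; repeat in both directions and check the guest after each step
qm migrate 101 node2 -online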

Udo
 
There is also a new option for KVM drives (see "kvm --help"):

aio=threads|native

It would be interesting to know whether that has any performance impact, but I had no time to test it so far.
Hi,
All tests were made with h2benchw (profile "install"), with three runs for each config, on kernel 2.6.24.
With one CPU there is no (or only a small) difference in IO between aio=threads, aio=native and no aio flag,
e.g. 86 MB/s (varying from 59-140 MB/s).

With 2 CPUs:
aio=threads: 33 MB/s (minor differences between the runs)
aio=native: 23-31 MB/s
without aio: 21-41 MB/s

It seems to be useful to use the "aio=threads" flag; the results are more consistent.
Nevertheless, the performance with only one CPU is 2-3 times better...

I will try a test with the 2.6.32 kernel and aio=threads tomorrow.

Udo
 
Hi,
now I have made some tests with kernel 2.6.32 and the 2-CPU Windows VM, and it doesn't look so good...
Six tests with aio=threads (3 tests, reboot of the VM, again 3 tests) and 3 tests with aio=native:
aio=threads: 10.6 MB/s 5 MB/s 9.5 MB/s
aio=threads: 5.7 MB/s 5.3 MB/s 4.2 MB/s
aio=native: 8.6 MB/s 7.1 MB/s 6.7 MB/s

Reboot of the server with kernel 2.6.24:
aio=threads: 46.5 MB/s 20.8 MB/s 19.4 MB/s

Just to be sure, a reboot with 2.6.32 and only one CPU for the VM:
aio=threads: 110.8 MB/s
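
To keep the comparison reproducible when switching kernels between runs (a sketch; the version strings are only examples), the booted kernel can be confirmed on the host before each series:
Code:
uname -r    # confirm which PVE kernel is actually booted, e.g. 2.6.32-7 or 2.6.24-22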

As before, the tests were done with h2benchw, taking the values of the "install" profile.
No other VMs are running on the host, but the VM shares the same RAID with the OS.
The RAID itself is not too slow, though:
Code:
pveperf /var/lib/vz
CPU BOGOMIPS:      20871.50
REGEX/SECOND:      554328
HD SIZE:           444.21 GB (/dev/mapper/pve-data)
BUFFERED READS:    299.07 MB/sec
AVERAGE SEEK TIME: 7.33 ms
FSYNCS/SECOND:     4086.59
DNS EXT:           77.01 ms
DNS INT:           0.92 ms

Udo
 
Udo,

Can you post the qemu-server configs for your test VM? I would like to compare on my end.

Thanks.
 
Hi,
here comes the config. It's the original VM; the test VM is a clone (with local disks and two sockets).
Code:
name: knecht2
sockets: 1
vlan0: e1000=F6:E1:E2:E4:93:4E
bootdisk: ide0
ostype: wxp
memory: 1024
boot: c
freeze: 0
cpuunits: 1500
acpi: 1
kvm: 1
ide1: none,media=cdrom
onboot: 0
ide0: vg_ams200_fc_1:vm-105-disk-1
ide2: vg_ams200_fc_1:vm-105-disk-2

I will try the same test with a virtio disk...
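
A sketch of what the extra test disk could look like (the volume name is a placeholder, mirroring the ide0/ide2 entries above; the exact qm syntax may differ between PVE versions):
Code:
# hypothetical: attach an existing volume as a virtio disk to VM 105
qm set 105 -virtio0 vg_ams200_fc_1:vm-105-disk-3
# resulting config line:
# virtio0: vg_ams200_fc_1:vm-105-disk-3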

Udo
 
Hi,
after a lot more tests (on another node, with a faster RAID that is used only for the VM)...
It seems that the IO performance under Windows with SMP depends on luck (or fortune, or whatever else). If only the IO process is running, the values are sometimes not so bad, but if there is another process, like the Task Manager showing the CPU usage, the IO performance gets worse, sometimes much worse. I have the impression that the performance drops when the IO process changes CPU (often).
This happens with the 2.6.32 and also with the 2.6.24 kernel (I haven't tested 2.6.18 yet).
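
One way to test that suspicion (a sketch only, not part of these results; the core list and VMID are examples, and the pidfile path is the one qemu-server writes) would be to pin the kvm process of the VM to fixed host cores and re-run the benchmark:
Code:
# pin the kvm process of VM 126 to host cores 0 and 1, then repeat the h2benchw run;
# the individual vCPU threads under /proc/<pid>/task/ may need the same treatment
taskset -cp 0,1 $(cat /var/run/qemu-server/126.pid)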

Here are the test results (h2benchw -p -w 2cp_th_2.6.32_v1 2; profile "install") on a virtio disk (virtio driver 4.3.0.17241):
All values in MB/s; different runs are separated with "|".
Code:
1 CPU             2.6.32: 488
1 CPU aio=threads 2.6.32: 517 | 573 | 569
2 CPU aio=threads 2.6.32: 333 |  78 |  28
2 CPU aio=native  2.6.32: 101 | 128 |  53
2 CPU aio=threads 2.6.32: 215 |  66 | 103 | 179 |  26 |  58 | 109
2 CPU aio=native  2.6.32:  70 |  39 |  3.7|  14
2 CPU aio=threads 2.6.24: 298 |  47 |  27 | 121 | 120 |  82 | 104
2 CPU cache=none  2.6.24:  55 |  92 | 102 | 114 | open task-manager: 67

Perhaps there are other kvm switches that could solve this problem?
My test kvm command line:
Code:
/usr/bin/kvm -monitor unix:/var/run/qemu-server/126.mon,server,nowait -vnc unix:/var/run/qemu-server/126.vnc,password -pidfile /var/run/qemu-server/126.pid -daemonize -usbdevice tablet -name knecht2 -smp sockets=2,cores=1 -nodefaults -boot menu=on,order=c -vga cirrus -tdf -localtime -rtc-td-hack -k de -drive file=/var/lib/vz/template/iso/vm-tools.iso,if=ide,index=1,media=cdrom -drive file=/var/lib/vz/images/126/vm-126-disk-1.raw,if=ide,index=0,boot=on -drive file=/var/lib/vz/images/126/vm-126-disk-2.raw,if=ide,index=2 -drive file=/var/lib/vz/images/126/vm-126-disk-3.raw,if=virtio,index=0,aio=threads -m 1024 -net tap,vlan=0,ifname=vmtab126i0,script=/var/lib/qemu-server/bridge-vlan -net nic,vlan=0,model=e1000,macaddr=F6:E1:E2:E4:93:4E

Udo