Hello,
I have a test system on Proxmox 2.0 Beta.
After successfully moving a Windows 2008 R2 VM from Proxmox 1.9 to 2.0,
I noticed that Windows prompted at startup for a driver for the new virtio storage controller; the old virtio driver was no longer accepted, so I downloaded the latest virtio-win-0.1-15.iso from the Fedora website.
Everything worked smoothly with the new driver until I ran an I/O benchmark / load test.
The program is CrystalDiskMark 3.0.1 x64 (crystalmark.info).

This screenshot was captured seconds before the crash. I ran the same test 10 times, and all 10 runs crashed at the 4k random write test.
(Hardware: LSI RAID 0, stripe size 64k)
The first two tests, sequential and 512k random read/write, completed successfully.
On the third test, the 4k random read passed, but during the write phase a blue screen of death appeared.


Below is a screenshot of the CPU, I/O, and memory utilization graphs.

I then turned to my other test system (same RAID disk setup), which is running Proxmox 1.9; all the tests completed without any problems.
Note that on 1.9, Windows is using the older virtio driver.
It seems the blue screen only appears under heavy 4k random write load (since my stripe size is 64k, a 4k write only touches one disk).
Could this be an issue with the latest virtio driver?
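To illustrate the stripe reasoning above, here is a minimal sketch in Python. The disk count is an assumption for the example (the post only specifies LSI RAID 0 with a 64k stripe):

```python
# Sketch of RAID 0 stripe mapping. NUM_DISKS = 2 is hypothetical;
# only the 64k stripe size comes from the actual setup described above.
STRIPE_SIZE = 64 * 1024   # 64k stripe
NUM_DISKS = 2             # assumed for illustration

def disks_touched(offset: int, length: int) -> set:
    """Return the set of disk indices written by an I/O of `length`
    bytes starting at byte `offset` into the striped array."""
    first_stripe = offset // STRIPE_SIZE
    last_stripe = (offset + length - 1) // STRIPE_SIZE
    return {stripe % NUM_DISKS for stripe in range(first_stripe, last_stripe + 1)}

# An aligned 4k write fits inside a single 64k stripe, so it hits one disk;
# a 512k write spans 8 stripes and is spread across both disks.
print(disks_touched(0, 4 * 1024))     # {0}
print(disks_touched(0, 512 * 1024))   # {0, 1}
```

So under the 4k random write test, each request lands on a single disk rather than being spread across the array, which matches where the crash occurs.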
root@vps:~# pveversion -v
pve-manager: 2.0-4 (pve-manager/2.0/52943683)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 2.0-46
pve-kernel-2.6.32-6-pve: 2.6.32-46
lvm2: 2.02.86-1pve1
clvm: 2.02.86-1pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-1
libqb: 0.5.1-1
redhat-cluster-pve: 3.1.7-1
pve-cluster: 1.0-7
qemu-server: 2.0-1
pve-firmware: 1.0-13
libpve-common-perl: 1.0-5
libpve-access-control: 1.0-1
libpve-storage-perl: 2.0-4
vncterm: 1.0-2
vzctl: 3.0.29-3pve1
vzdump: 1.2.6-1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.1-1