Performance of Win 2008 R2 in Hyper-V and KVM

docent

Renowned Member
Jul 23, 2009
Hi everyone,

I have two identical computers: HP DL180G5, E5405, 12GB RAM, several SATA disks.
I have installed Hyper-V 2008 R2 on the first server and Proxmox 2.0 on the second. On both servers I installed Windows 2008 R2 with 1 virtual CPU.
I ran CrystalDiskMark on both VMs. You can see the results here: w2k8r2onhyperv.png (Hyper-V) and w2k8r2onkvm.png (KVM).
However, when I restore Firebird databases on both VMs I see a different result: restoring the database on Hyper-V is more than three times faster than on KVM.
 
Post details about your storage setup, physical configuration, virtual disk config, and VM config.
 
Also, do not link to pages outside this forum; all logs and screenshots need to be stored here.
 
Code:
#cat 100.conf
name: Aurum
cpu: core2duo
sockets: 1
cores: 1
memory: 3072
ide0: vg1:vm-100-disk-1,cache=writeback
ide1: ram:100/vm-100-disk-1.qcow2
ide2: local:iso/virtio-win-0.1-15.iso,media=cdrom
ide3: vg2:vm-100-disk-1,cache=writeback
bootdisk: ide0
ostype: win7
net0: e1000=86:CF:2B:33:A0:A4,bridge=vmbr0
args: -no-hpet -no-kvm-pit-reinjection
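A side note on the disk config above: both data disks are attached as emulated IDE with cache=writeback, while the virtio-win ISO is already mounted on ide2. Once the virtio storage driver from that ISO is installed inside the guest, the same LVM volume can be attached as a virtio disk instead, which generally outperforms emulated IDE. A minimal sketch of the alternative lines (hypothetical, assuming the guest driver is installed and the volume name stays the same):
Code:
# hypothetical replacement for the ide0/bootdisk lines,
# assuming the virtio block driver is already installed in the guest
virtio0: vg1:vm-100-disk-1,cache=writeback
bootdisk: virtio0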
Code:
# pveversion -v
pve-manager: 2.0-18 (pve-manager/2.0/16283a5a)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 2.0-55
pve-kernel-2.6.32-6-pve: 2.6.32-55
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-1
libqb: 0.6.0-1
redhat-cluster-pve: 3.1.8-3
pve-cluster: 1.0-17
qemu-server: 2.0-13
pve-firmware: 1.0-14
libpve-common-perl: 1.0-11
libpve-access-control: 1.0-5
libpve-storage-perl: 2.0-9
vncterm: 1.0-2
vzctl: 3.0.29-3pve8
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-1
ksm-control-daemon: 1.1-1
Code:
# cat /var/log/dmesg
...
scsi0 : ahci
scsi1 : ahci
scsi2 : ahci
scsi3 : ahci
ata1: SATA max UDMA/133 abar m2048@0xfccff000 port 0xfccff100 irq 33
ata2: SATA max UDMA/133 irq_stat 0x00400040, connection status changed irq 33
ata3: SATA max UDMA/133 irq_stat 0x00400040, connection status changed irq 33
ata4: SATA max UDMA/133 irq_stat 0x00400040, connection status changed irq 33
ata_piix 0000:00:1f.5: version 2.13
ata_piix 0000:00:1f.5: PCI INT B -> GSI 19 (level, low) -> IRQ 19
ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ]
ata_piix 0000:00:1f.5: setting latency timer to 64
scsi4 : ata_piix
scsi5 : ata_piix
ata5: SATA max UDMA/133 cmd 0xe000 ctl 0xdc00 bmdma 0xd480 irq 19
ata6: SATA max UDMA/133 cmd 0xd880 ctl 0xd800 bmdma 0xd488 irq 19
ata5: SATA link down (SStatus 0 SControl 310)
ata6: SATA link down (SStatus 0 SControl 310)
ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata1.00: ATA-8: WDC WD5002ABYS-01B1B0, 02.03B02, max UDMA/133
ata1.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
ata1.00: configured for UDMA/133
scsi 0:0:0:0: Direct-Access     ATA      WDC WD5002ABYS-0 02.0 PQ: 0 ANSI: 5
sd 0:0:0:0: [sda] 976773168 512-byte logical blocks: (500 GB/465 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sda: sda1 sda2
sd 0:0:0:0: [sda] Attached SCSI disk
ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata2.00: ATA-8: WDC WD5002ABYS-01B1B0, 02.03B02, max UDMA/133
ata2.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
ata2.00: configured for UDMA/133
scsi 1:0:0:0: Direct-Access     ATA      WDC WD5002ABYS-0 02.0 PQ: 0 ANSI: 5
sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB)
sd 1:0:0:0: [sdb] Write Protect is off
sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sdb: sdb1
sd 1:0:0:0: [sdb] Attached SCSI disk
...
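Worth noting from the dmesg output above: the on-disk write cache is disabled on both sda and sdb ("Write cache: disabled"), which can hurt write-heavy workloads such as a database restore. A minimal sketch for checking and, if acceptable, enabling it on the host with hdparm (assuming hdparm is installed; enabling the cache trades some power-loss safety for speed):
Code:
# show the current write-cache flag for both drives
hdparm -W /dev/sda /dev/sdb
# enable the drive write cache (only if the power-loss risk is acceptable)
hdparm -W1 /dev/sda /dev/sdb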

Code:
# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  vg1  lvm2 a--  464.83g 536.00m
  /dev/sdb1  vg2  lvm2 a--  465.76g 191.76g

# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  vg1    1   5   0 wz--n- 464.83g 536.00m
  vg2    1   4   0 wz--n- 465.76g 191.76g

# lvs
  LV            VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  containers    vg1  -wi-ao 302.00g
  root          vg1  -wi-ao  93.13g
  swap_1        vg1  -wi-ao  11.18g
  vm-100-disk-1 vg1  -wi-ao  50.00g
  vm-101-disk-1 vg1  -wi-ao   8.00g
  vm-100-disk-1 vg2  -wi-ao 200.00g
Code:
# cat storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,backup,rootdir

lvm: vg1
        vgname vg1
        content images

dir: ct1
        path /mnt/containers
        content rootdir

dir: ram
        path /mnt/ramdisk
        content images

lvm: vg2
        vgname vg2
        content images
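For completeness, the "ram" directory storage above points at /mnt/ramdisk, which backs the qcow2 on ide1. The thread does not show how that path is mounted; a minimal sketch, assuming a plain tmpfs mount, would be an /etc/fstab entry like this (the size value is hypothetical):
Code:
# hypothetical tmpfs mount for the RAM-backed directory storage
tmpfs  /mnt/ramdisk  tmpfs  size=4g,mode=0755  0  0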
 
