KVM performance on different operating systems

docent

Hi all,
I'm testing some different hypervisors for my future cluster. The server is based on the Intel R2308GZ4GC platform and contains two Xeon E5-2630 6-core CPUs, 64GB RAM, an LSI MegaRAID SAS 9265-8i with BBU, and 8x 450GB 15K SAS disks. For the testing I used my business software, which makes heavy use of a DBMS. In this software I tested three operations: copying, optimizing and repairing a database.
I won't go into the details of how they work, but on the different OSes I got the following results:
MS Hyper-V 2008 R2 - 1:01; 4:01; 3:35
CentOS 6.3 - 1:07; 4:13; 3:45
Ubuntu 12.04 - 1:14; 4:42; 4:06
Proxmox 2.1 - 2:31; 9:33; 9:01
What should I do to increase the performance of Proxmox?
 
Some details about your guest, like what OS and what DBMS you are using, might be helpful to know.

I started a wiki page with some performance tweaks: http://pve.proxmox.com/wiki/Performance_Tweaks
It is not complete but covers most of the basics.

How are you storing your VM disks?
As a file? On LVM? (LVM performs best for me)
Did you use the same storage for all OS tests?

Ideally, if you want to compare apples to apples, you need to ensure that the VM disk data occupies the same sectors on the same disks in each OS.
The reason is that different parts of a mechanical disk have different performance levels.
 
Sorry :)
My guest OS is Windows 2008 R2 with virtio-win-0.1-30, 2 vCPUs and 8GB RAM. The database size is 1.7GB.
My servers have SSDs, and I installed the different OSes on them. The guest VM image (raw or qcow2 - it doesn't matter) is stored on the RAID6 array or on the SSD (that doesn't matter either), and it also doesn't matter whether the cache is 'writeback' or 'none'. I ran the tests several times. I know that LVM is better than an image file, but I tested all the OSes the same way - except Hyper-V, of course.
The performance of the two storage backends:
For RAID6:
Code:
CPU BOGOMIPS:      110127.12
REGEX/SECOND:      1046853
HD SIZE:           2300.76 GB (/dev/mapper/pve-data)
SEQUENTIAL READ:   1029.85 MB/sec
RANDOM READ 512K:  23.85 MB/sec (47 IOPS)
RANDOM READ 4K:    0.68 MB/sec (174 IOPS)
SEQUENTIAL WRITE:  462.70 MB/sec
RANDOM WRITE 512K: 156.63 MB/sec (313 IOPS)
RANDOM WRITE 4K:   12.75 MB/sec (3263 IOPS)
AVERAGE SEEK TIME: 10.88 ms
FSYNCS/SECOND:     2704.42
DNS EXT:           147.56 ms
For SSD:
Code:
CPU BOGOMIPS:      110127.12
REGEX/SECOND:      1056314
HD SIZE:           21.44 GB (/dev/mapper/vg_hgsrv1-lv_root)
SEQUENTIAL READ:   320.25 MB/sec
RANDOM READ 512K:  283.15 MB/sec (566 IOPS)
RANDOM READ 4K:    29.51 MB/sec (7553 IOPS)
SEQUENTIAL WRITE:  85.73 MB/sec
RANDOM WRITE 512K: 81.28 MB/sec (162 IOPS)
RANDOM WRITE 4K:   8.16 MB/sec (2089 IOPS)
AVERAGE SEEK TIME: 0.21 ms
FSYNCS/SECOND:     1082.00
DNS EXT:           179.04 ms
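Both sets of numbers above are from pveperf, run against the respective mount point, e.g.:
Code:
pveperf /var/lib/vz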
 
Hi,
you can create an LVM volume group on the SSD and copy (dd) the raw file onto an LV. You can then use this LVM storage on all the Linux systems, so you can start the VM with the same command (caching and so on) and get comparable values (except Hyper-V, of course).
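Roughly like this - only a sketch; /dev/sdb, the 40G size and the path to the raw file are just placeholders, and the SSD must not contain anything you still need:
Code:
pvcreate /dev/sdb                    # turn the SSD into an LVM physical volume (wipes it!)
vgcreate ssdvg /dev/sdb              # create a volume group on the SSD
lvcreate -L 40G -n vmtest ssdvg      # make the LV at least as large as the raw image
dd if=/path/to/vm-disk.raw of=/dev/ssdvg/vmtest bs=1M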
The huge difference looks I/O-related to me. Can you also test with a CPU-bound scenario?
Are the filesystems where the raw files are stored mounted with the same parameters?

Udo
 
MS Hyper-V 2008 R2 - 1:01; 4:01; 3:35
CentOS 6.3 - 1:07; 4:13; 3:45
Ubuntu 12.04 - 1:14; 4:42; 4:06
Proxmox 2.1 - 2:31; 9:33; 9:01

You're comparing 12 cores with 2 vCPUs?
What do those times mean: 1:07, 4:13, 3:45...?
 
As I wrote before, it doesn't matter what type of storage it is or where it's placed - the performance is the same.
Let me put it another way: how can I get the same performance as in CentOS?
I run the same virtual machine from the same image on different operating systems and get different performance, even though these operating systems use the same kernel.
 
Hi,
yes, but you didn't answer the questions:
Do you use the same mount options for the underlying filesystem (where the raw files are)?
Do you use the same command (with all switches) to start the VM?
There must be a difference between PVE / Ubuntu / CentOS, and the big question is where!
Especially since CentOS also uses the RHEL kernel, like PVE...

Udo
 
Hi,
Do you use the same mount options for the underlying filesystem (where the raw files are)?
Yes, I use no special options, just mount /dev/mapper/pve-data:
In Proxmox: /dev/mapper/pve-data on /var/lib/vz type ext3 (rw)
In CentOS: /dev/mapper/pve-data on /mnt/data type ext3 (rw)

Do you use the same command (with all switches) to start the VM?
In Proxmox:
Code:
/usr/bin/kvm \
-boot menu=on \
-chardev socket,id=monitor,path=/var/run/qemu-server/101.mon,server,nowait \
-daemonize \
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 \
-device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 \
-device virtio-net-pci,mac=26:9B:5A:71:34:81,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 \
-drive file=/var/lib/vz/images/101/vm-101-disk-1.raw,if=none,id=drive-virtio0,aio=native,cache=none \
-drive if=none,id=drive-ide2,media=cdrom,aio=native \
-id 101 \
-k en-us \
-localtime \
-m 8192 \
-mon chardev=monitor,mode=readline \
-name hq-sr-a3 \
-netdev type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,vhost=on \
-nodefaults \
-no-hpet \
-no-kvm-pit-reinjection \
-pidfile /var/run/qemu-server/101.pid \
-rtc-td-hack \
-smp sockets=1,cores=2 \
-usbdevice tablet \
-vga std \
-vnc unix:/var/run/qemu-server/101.vnc,x509,password
In CentOS:
Code:
/usr/libexec/qemu-kvm \
-chardev pty,id=charserial0 \
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm10.monitor,server,nowait \
-device isa-serial,chardev=charserial0,id=serial0 \
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
-device usb-tablet,id=input0 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:7c:89:54,bus=pci.0,addr=0x3 \
-drive file=/mnt/data/images/101/vm-101-disk-1.raw,if=none,id=drive-virtio-disk0,format=raw,cache=none \
-enable-kvm \
-m 8192 \
-mon chardev=charmonitor,id=monitor,mode=control \
-M rhel6.3.0 \
-name vm10 \
-netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 \
-nodefaults \
-nodefconfig \
-no-shutdown \
-rtc base=localtime,driftfix=slew \
-S \
-smp 2,sockets=2,cores=1,threads=1 \
-uuid 5c9a6368-d95b-ac75-c12e-e6e069b1cad8 \
-vga std \
-vnc 0.0.0.0:0
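To spot the differences more easily, the two command lines can be dumped to files and the sorted options diffed - just a quick sketch (pve-cmd.txt and centos-cmd.txt are simply the two listings above, saved as pasted):
Code:
sed 's/ \\$//' pve-cmd.txt    | sort > pve.opts     # strip the trailing backslashes
sed 's/ \\$//' centos-cmd.txt | sort > centos.opts
diff -u pve.opts centos.opts                        # show which switches differ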
There must be a difference between PVE / Ubuntu / CentOS, and the big question is where!
Especially since CentOS also uses the RHEL kernel, like PVE...
I know, and that's exactly what I'm pointing out...
 
Hi,
one idea - do you have the same symptom on a 1-CPU (non-NUMA) system?
Unfortunately, yes, I do.
In CentOS the test takes 1:10, but in Proxmox it takes 2:35.
 
Hi,
can you post "cat /proc/mounts | grep mnt" and "cat /proc/mounts | grep /var/lib/vz"?
One thing: on one system you use 1 socket and 2 cores, and on the other 2 sockets and 1 core. On a Linux guest it's the same, but with Windows software I'm not sure!

Udo
 
Hi,
can you post "cat /proc/mounts | grep mnt" and "cat /proc/mounts | grep /var/lib/vz"?
In Proxmox /dev/mapper/pve-data /var/lib/vz ext3 rw,relatime,errors=continue,barrier=0,data=ordered 0 0
In CentOS /dev/mapper/pve-data /mnt/data ext3 rw,seclabel,relatime,errors=continue,barrier=1,data=ordered 0 0
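If the barrier difference could matter, I can remount the Proxmox volume with barriers enabled to match CentOS - just a quick sketch:
Code:
# re-enable write barriers on the Proxmox host to match the CentOS mount
mount -o remount,barrier=1 /var/lib/vz
# verify the effective options
cat /proc/mounts | grep /var/lib/vz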
One thing: on one system you use 1 socket and 2 cores, and on the other 2 sockets and 1 core. On a Linux guest it's the same, but with Windows software I'm not sure!
I've tested with one vCPU; the results are the same.
 
I use my own business software, which makes heavy use of the Firebird DBMS. The database size is 1.7GB.

If you have enough memory to hold the database in the memory buffer, slow disk reads shouldn't impact you.
But you really need to check the write speed - you need fast write acknowledgements from your RAID controller.
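For example, with Firebird you can raise the page cache of a database with gfix - just a sketch, the buffer count, credentials and path below are only placeholders (memory used is roughly buffers x page size):

gfix -user SYSDBA -password masterkey -buffers 100000 /path/to/database.fdb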

Are you sure that writeback is enabled (forced) on your RAID controller?
Also verify that the disks' write cache is enabled on the Proxmox host:

sdparm --get WCE /dev/sda

Also, never use RAID6 (or RAID5) for a database - really. Go for RAID10.
 
Also, use a real benchmark tool like fio:


random read (IOPS):
fio --filename=/dev/[device] --direct=1 --rw=randread --bs=4k --size=1G --iodepth=100 --runtime=120 --group_reporting --name=file1 --ioengine=libaio

random write (IOPS):
fio --filename=/dev/[device] --direct=1 --rw=randwrite --bs=4k --size=1G --iodepth=100 --runtime=120 --group_reporting --name=file1 --ioengine=libaio

sequential read (bandwidth):
fio --filename=/dev/[device] --direct=1 --rw=read --bs=4M --size=1G --iodepth=100 --runtime=120 --group_reporting --name=file1 --ioengine=libaio

sequential write (bandwidth):
fio --filename=/dev/[device] --direct=1 --rw=write --bs=4M --size=1G --iodepth=100 --runtime=120 --group_reporting --name=file1 --ioengine=libaio
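Note that the write runs above overwrite the target device, so only point them at a disk or LV you can destroy. To test through the filesystem instead, fio can also work on a plain file (the path below is just an example):

fio --filename=/var/lib/vz/fio-testfile --direct=1 --rw=randwrite --bs=4k --size=1G --iodepth=100 --runtime=120 --group_reporting --name=file1 --ioengine=libaio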
 
