Tuning KVM

BiagioParuolo
Apr 29, 2009
From the KVM site:

CPU Performance

Modern processors come with a wide variety of performance-enhancing features, such as the streaming instruction sets (SSE) and other specialized instructions. These features vary from processor to processor.
QEMU and KVM default to a compatible subset of CPU features, so that if you change your host processor or perform a live migration, the guest will see its CPU features unchanged. This is great for compatibility but comes at a performance cost.
To pass all available host processor features to the guest, use the command-line switch:
qemu -cpu host
If you wish to retain compatibility, you can expose selected features to your guest. If all your hosts have these features, compatibility is retained:
qemu -cpu qemu64,+ssse3,+sse4.1,+sse4.2,+x2apic
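If you are not sure whether all your hosts have a given flag, you can check /proc/cpuinfo on each of them. A quick illustrative check (note that the kernel spells some flags differently than QEMU, e.g. sse4_1 instead of sse4.1):
grep -o -w -E 'ssse3|sse4_1|sse4_2|x2apic' /proc/cpuinfo | sort -u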
Networking

QEMU defaults to user-mode networking (slirp), which is available without prior setup and without administrative privileges on the host. It is also, unfortunately, very slow. To get high-performance networking, switch to a bridged setup via the -net tap command-line switches:
qemu -net nic,model=virtio,mac=... -net tap,ifname=...
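Note that the tap device must already exist and be attached to a bridge on the host before the guest starts. A minimal sketch using the classic bridge-utils and uml-utilities tools (interface names are examples; on Proxmox VE a bridge such as vmbr0 is already set up for you):
brctl addbr br0
brctl addif br0 eth0
tunctl -t tap0
brctl addif br0 tap0
ifconfig tap0 up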
QEMU also defaults to the RTL8139 network interface card (NIC) model. Again this card is compatible with most guests, but does not offer the best performance. If your guest supports it, switch to the virtio model:
qemu -net nic,model=virtio,mac=... -net tap,ifname=...
Storage

QEMU supports a wide variety of storage formats and back-ends. The easiest to use are the raw and qcow2 formats, but for the best performance use a raw partition. You can create either a logical volume or a partition and assign it to the guest:
qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio
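For example, the logical volume referenced above could be created with LVM (volume group name and size are only illustrative):
lvcreate -L 20G -n Guest1 ImagesVolumeGroup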
QEMU also supports a wide variety of caching modes. Writeback is useful for testing but does not offer storage guarantees. Writethrough (the default) is safer and relies on the host cache. If you're using raw volumes or partitions, it is best to avoid the cache completely, which reduces data copies and bus traffic:
qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio
As with networking, QEMU supports several storage interfaces. The default, IDE, is widely supported by guests but may be slow, especially with disk arrays. If your guest supports it, use the virtio interface:
qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio
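If an existing image is in qcow2 format, it can be copied onto such a raw volume with qemu-img (paths are examples; the target volume must be at least as large as the virtual disk):
qemu-img convert -O raw Guest1.qcow2 /dev/mapper/ImagesVolumeGroup-Guest1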

**************

Where do I set up qemu -cpu host?

Thanks
 
See 'man qm'. You need to pass such options via the 'args' option in the config file (/etc/qemu-server/VMID.conf).

Do not forget to stop and start the guest again (a reboot from inside the guest does not help).
 
Great mate, thanks.

I was wondering: would software RAID with cache=writeback behave like a VM with cache=none on hardware RAID with writeback and no BBU?

--edit

I've just tested this: cache=none is best for software RAID. If you use hardware RAID with writeback enabled, you're good with the default (writethrough). FYI, I tested this on an Adaptec 2405.
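A rough way to reproduce such a comparison from inside the guest (only a sketch; for serious numbers use fio or bonnie++ and average several runs):
Code:
dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct
Run it once with the drive set to cache=none and once with the default, and compare the throughput dd reports.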
 
See 'man qm'. You need to pass such options via the 'args' option in the config file (/etc/qemu-server/VMID.conf).

Do not forget to stop and start the guest again (a reboot from inside the guest does not help).
Is this still the case? I do not see a VMID.conf file in Proxmox VE 1.8, and the manpage for qm doesn't indicate anything about host CPU passthrough.
 
Ah! I see now... you meant literally a [vmid].conf naming convention. I understand now, sorry about that.

Is there a syntax for the "args" option documented somewhere? Would it be just a line with ARGS="-cpu host" on it, or something different?
 
Ah! I see now... you meant literally a [vmid].conf naming convention. I understand now, sorry about that.

Is there a syntax for the "args" option documented somewhere? Would it be just a line with ARGS="-cpu host" on it, or something different?
Hi,
the args option passes the arguments directly to kvm (see "ps aux | grep kvm" to look at the kvm process).
With
Code:
kvm -h
kvm -cpu ?
you will see the possibilities.

Udo
 
Hi Udo,

Thanks for your help with my question.

My confusion is over the syntax, not the options that need to be passed. For example, I have a [vmid].conf file in a format like this:
Code:
name: VirtualServerA
ide2: [storagenameA]:iso/ubuntu-11.04-server-amd64.iso,media=cdrom
bootdisk: virtio0
virtio0: [storagenameB]:101/vm-101-disk-1.raw
ostype: l26
memory: 2048
sockets: 1
onboot: 0
cores: 4
vlan1: virtio=16:C8:9E:47:35:A0

A new line needs to be created to pass arguments... what syntax should that line have? Something like:
Code:
cpu: host

(or)
args: -cpu host

(or)
options: ARGS="-cpu host"

I hope that makes my question clear.

Also, if the host CPU is being exposed by KVM, are the sockets: and cores: declarations still necessary?
 
A new line needs to be created to pass arguments... what syntax should that line have? Something like:
Code:
cpu: host

(or)
args: -cpu host

(or)
options: ARGS="-cpu host"
Code:
args: -cpu host
will work fine.
Also, if the host CPU is being exposed by KVM, are the sockets: and cores: declarations still necessary?
This is one big advantage over VMware: you can set not only the number of CPUs but also the number of sockets! For Linux guests it makes no difference, but for software that is licensed per socket it can matter.
You can give a Windows XP guest 2 sockets with 4 cores each, i.e. 8 CPUs (or better said, cores). But of course this is only possible if your hardware has enough cores.
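Illustrative [vmid].conf lines for that case (assuming the host has at least 8 physical cores):
Code:
sockets: 2
cores: 4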

Udo
 
Thanks for the information! I made the changes, and the correct CPU identification and flags are now being passed to the guests. Vielen Dank!
 
In my testing the guest was about 2 times faster with the -cpu host feature.
I tested it on an E5540.
 
I've just tested on a Xeon E3-1240; there is a significant increase in performance.

The ability to set -cpu host would be a great addition to the VM Configuration page, as would the ability to set cache=none on storage targets.
 
