Poor performance in KVM VMs

aballa (New Member), Jan 19, 2013
I have a ProLiant server with 9 Windows XP VMs accessed remotely (only 3 or 4 active at any one time) and 1 XP VM running a simple database server.
The server has two Broadcom gigabit adapters bonded in balance-rr mode.
Performance is unsatisfactory.

Using iperf to check network performance, I found the following:

- iperf server on host <--> client running on a VM (#301) with virtio drivers:
>iperf -c 192.168.23.201
------------------------------------------------------------
Client connecting to 192.168.23.201, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.23.11 port 1068 connected with 192.168.23.201 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 145 MBytes 121 Mbits/sec

- iperf server on the db server (#1001) <--> client running on a VM (#301) both with virtio Gb drivers:
>iperf -c 192.168.23.252
------------------------------------------------------------
Client connecting to 192.168.23.252, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.23.11 port 1069 connected with 192.168.23.252 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.2 sec 28.2 MBytes 23.2 Mbits/sec

I added an OpenVZ container (#101) running Ubuntu 12.04 and tested it too:
1) with the iperf server on the host:
root@test:~# iperf -c 192.168.23.201
------------------------------------------------------------
Client connecting to 192.168.23.201, TCP port 5001
TCP window size: 23.8 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.23.253 port 52040 connected with 192.168.23.201 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.44 GBytes 1.24 Gbits/sec

2) with iperf server on the db server (#1001):
root@test:~# iperf -c 192.168.23.252
------------------------------------------------------------
Client connecting to 192.168.23.252, TCP port 5001
TCP window size: 23.8 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.23.253 port 42116 connected with 192.168.23.252 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 116 MBytes 97.3 Mbits/sec
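For anyone repeating these tests, the bandwidth figure can be pulled straight out of an iperf report (a minimal sketch; it assumes the classic iperf report format shown above, where the figure is the last two fields of the summary line):

```shell
# On the target (host or VM), start the server once:
#   iperf -s
# On the client, run the test and save the report:
#   iperf -c 192.168.23.201 > report.txt
# iperf_bw: print just the bandwidth figure from a saved report.
iperf_bw() {
    awk '/bits\/sec/ {print $(NF-1), $NF}' "$1"
}
```

This makes it easy to loop a test several times and compare runs without eyeballing the full output.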

I have googled around and read a lot of messages on this forum, but I cannot find anything useful.

Here is some information about my configuration:

root@server1:~# pveperf
CPU BOGOMIPS: 39897.72
REGEX/SECOND: 866286
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 128.48 MB/sec
AVERAGE SEEK TIME: 10.59 ms
FSYNCS/SECOND: 594.60
DNS EXT: 115.47 ms
DNS INT: 86.43 ms (mydomain.local)

root@server1:~# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.3-96
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1

root@server1:~# lspci|grep Eth
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
04:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
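Since all the traffic above goes through these two NICs, it's worth confirming each slave actually negotiated gigabit full duplex. A small helper to pull that out of ethtool output (a sketch; it assumes ethtool's usual `Speed:`/`Duplex:` report lines):

```shell
# link_summary: print the negotiated speed and duplex lines from
# ethtool output fed on stdin.
link_summary() {
    grep -E 'Speed:|Duplex:'
}
# Typical use on the host (expect "Speed: 1000Mb/s" / "Duplex: Full"):
#   ethtool eth0 | link_summary
#   ethtool eth1 | link_summary
```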

root@server1:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
address 192.168.23.201
netmask 255.255.255.0
gateway 192.168.23.1
bridge_ports bond0
bridge_stp off
bridge_fd 0
bridge_maxage 0
bridge_ageing 0
bridge_maxwait 0
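One note on this config: balance-rr stripes packets across both NICs, which can reorder TCP segments and hurt single-stream throughput, so it's worth verifying the bond's actual state. A quick helper (a sketch; the status file format assumed here is the standard Linux bonding driver's):

```shell
# bond_summary: show the bonding mode and per-slave link state from a
# bonding status file (normally /proc/net/bonding/bond0).
bond_summary() {
    grep -E 'Bonding Mode|Slave Interface|MII Status' "$1"
}
# Typical use on the host:
#   bond_summary /proc/net/bonding/bond0
```

If the mode shown doesn't match the config, or a slave's MII status is down, that is worth fixing before tuning anything in the guests.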

root@server1:~# cat /etc/pve/qemu-server/1001.conf
balloon: 1024
boot: c
bootdisk: virtio0
cores: 1
cpuunits: 100000
ide2: local:iso/virtio-win-0.1-81.iso,media=cdrom,size=72406K
memory: 4096
name: VMXPSVR
net0: virtio=6A:E5:37:83:10:E2,bridge=vmbr0
onboot: 1
ostype: wxp
sockets: 1
vga: cirrus
virtio0: local:1001/vm-1001-disk-1.raw,cache=writeback,size=32G
virtio2: local:1001/vm-1001-disk-2.raw,size=6G

root@server1:~# cat /etc/pve/qemu-server/301.conf
balloon: 1024
boot: c
bootdisk: ide0
cores: 1
cpuunits: 25000
ide2: local:iso/virtio-win-0.1-81.iso,media=cdrom,size=72406K
memory: 4096
name: VM-U01
net0: virtio=CA:04:FC:7A:81:1C,bridge=vmbr0
onboot: 1
ostype: wxp
sockets: 1
vga: cirrus
virtio0: local:301/vm-301-disk-1.raw,cache=writeback,size=32G
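To rule the NIC model in or out, net0 in 301.conf can be switched for an A/B test (a sketch; the e1000 line is my assumption as the comparison model, keeping the same MAC so the guest sees the same adapter address):

```shell
# In /etc/pve/qemu-server/301.conf, swap the net0 line (VM powered off):
# net0: e1000=CA:04:FC:7A:81:1C,bridge=vmbr0    # Intel e1000 for comparison
# net0: virtio=CA:04:FC:7A:81:1C,bridge=vmbr0   # back to virtio afterwards
```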

Thank you in advance.
 
Update: Debian 7 amd64 in a KVM VM scored 2 Gbps; with a Windows 8 guest I got 270 Mbps with the E1000 driver and 950 Mbps with the virtio driver.
 
Did you use Virtio and the Windows registry hack as documented here?
https://pve.proxmox.com/wiki/Paravirtualized_Network_Drivers_for_Windows

Hello Brad, thank you for the feedback :)
Yes, I've installed the Virtio drivers (0.1-81) from the Fedora website and applied the registry hack.
I've installed a Win 7 Pro amd64 VM just to check whether I can get near wire speed in it: I got 1.11 Gbps, so from what I can see the virtio network driver for 32-bit WinXP doesn't perform as expected.
Do you have any XP VM running the network at Gb speed?
 
Sorry, no, I run a very homogeneous environment, Linux only. I just remembered running across that wiki in the past and wasn't sure whether the registry hack helped or not; it sounds like it doesn't.

The only other thing I might suggest is trying the q35 machine type, though I have no idea whether that would actually help.
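For reference, trying q35 is a per-VM config change (a sketch; it assumes a PVE build whose qemu-server supports the q35 machine type, which the 1.4-era pve-qemu-kvm shown above may not):

```shell
# In /etc/pve/qemu-server/301.conf, add (VM powered off), then boot:
# machine: q35
```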
 
Hmm, I think it would be more practical to upgrade all machines to Win7 :)
 
