Network Performance on 1.6

pashdown

Oct 6, 2010
I thought you might be interested in this email from the KVM mailing list. I have verified the results on an Ubuntu guest with ethtool: offloading is not being used, and as a result my Ethernet performance is about 35% slower than native.

Not sure what userspace you are using, but you are probably not getting
any of the useful offload features set. Checking "ethtool -k $ETH"
in the guest will verify that.

Try changing this:

-net nic,macaddr=52:54:00:35:11:f1,vlan=0,model=virtio,name=virtio.0 \
-net tap,fd=51,vlan=0,name=tap.0

to use newer syntax:

-netdev type=tap,id=netdev0
-device virtio-net-pci,mac=52:54:00:35:11:f1,netdev=netdev0

With just a 1Gb link, you should see line rate from guest via virtio.

thanks,
-chris
 
Did you test the new -netdev syntax on your Proxmox-hosted machines? I'm wondering if this only applies to upstream QEMU.
 
I believe this would require modifying QemuServer.pm to call kvm with the new syntax...
Also, would you need to change netdev0 to be a unique device ID per guest?
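For illustration, here is roughly what per-guest arguments might look like; the tap ifname at least has to be unique on the host, and the id/ifname naming and the second MAC address here are invented for the example:

Code:
-netdev type=tap,id=vmtab101i0,ifname=vmtab101i0 \
-device virtio-net-pci,mac=52:54:00:35:11:f1,netdev=vmtab101i0

# a second guest (or a second NIC) gets its own id and tap interface
-netdev type=tap,id=vmtab102i0,ifname=vmtab102i0 \
-device virtio-net-pci,mac=52:54:00:35:11:f2,netdev=vmtab102i0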

There is also the new vhost-net support in kernel 2.6.34 and the latest KVM, which promises to improve latency and throughput for virtio considerably; I'm not sure if or when Proxmox will support this.
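From what I've read, enabling it should just be a matter of loading the vhost_net module on the host and adding vhost=on to the tap netdev. A rough, untested sketch (module and option names as documented upstream):

Code:
# on the host (needs a >= 2.6.34 kernel with vhost-net)
modprobe vhost_net

# then on the kvm command line
-netdev type=tap,id=netdev0,vhost=on \
-device virtio-net-pci,mac=52:54:00:35:11:f1,netdev=netdev0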
 
I had a chance to attend the Linux Foundation summit this week here in NYC. Chris Wright of the Red Hat KVM dev team, author of the previously quoted email, was there and talked quite a bit about the benefits of the new vhost-net architecture. The performance graphs displaying the gains were impressive. I'm looking forward to implementing it and the -netdev syntax on our Proxmox cluster.
 
I have produced a very quick and crude hack of QemuServer.pm which uses the -netdev syntax for virtio devices...

Download from http://www.firenzee.com/QemuServer.pm

Place it at /usr/share/perl5/PVE/QemuServer.pm.
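If you try it, back up the stock file first; roughly like this (the pvedaemon restart is a guess on my part, a reboot also works):

Code:
cp /usr/share/perl5/PVE/QemuServer.pm /usr/share/perl5/PVE/QemuServer.pm.orig
wget -O /usr/share/perl5/PVE/QemuServer.pm http://www.firenzee.com/QemuServer.pm
# restart the PVE daemon so the tools pick up the new code;
# the change only affects guests (re)started afterwards
/etc/init.d/pvedaemon restart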

It enables all the offload options:

Code:
quarte ~ # ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: on
udp fragmentation offload: on
generic segmentation offload: on

Caveats:

As this is a quick and dirty hack, I don't think it will work with more than one NIC...
I will run some benchmarks on it later and see how it performs.
 
Thanks for posting that mod, bert64. I'm seeing some very nice improvements to my KVM guests that utilize a lot of bandwidth. The ones using NFS mounts are showing the most improvement.

I'm testing on two of our Proxmox 1.5 hosts.

Linux guest before

Code:
$ sudo ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off

After

Code:
$ sudo ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: off
large-receive-offload: off

The Windows 2008 and Windows 7 guests also show big file-transfer improvements.
 
You mean Linux host, not guest?

 
I'm testing the QemuServer.pm mod on two of our Proxmox 1.6 hosts running the 1.5 KSM kernel (we over-commit RAM by about 25%).

UPDATE: Last night, our nightly backups (100 Gb / 55 min of collective throughput from 15 guests) ran without the hosts bogging down. Previously, the guests would time out to pings and become unresponsive to our Zabbix monitoring system.
 
A few quick benchmarks:

Guest images are running Gentoo Linux with kernel 2.6.34-hardened-r6 and use virtio drivers for disk and net.
The host is an HP ProLiant DL140 with a single Xeon E5310 CPU and 20 GB RAM. Each VM is configured with 1 CPU and 512 MB RAM.
The target system is another identical DL140, running iperf natively.
Both servers use the built-in gigabit NICs connected to a Cisco Catalyst 4006 switch.
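The target side just runs the stock iperf server with default settings (nothing special there):

Code:
# on the target DL140
iperf -s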

Without QemuServer.pm patch:

Code:
# iperf -c x.x.x.x
------------------------------------------------------------
Client connecting to x.x.x.x, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local x.x.x.x port 51198 connected with x.x.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 474 MBytes 397 Mbits/sec

With QemuServer.pm patch:

Code:
# iperf -c x.x.x.x
------------------------------------------------------------
Client connecting to x.x.x.x, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local x.x.x.x port 52898 connected with x.x.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.09 GBytes 933 Mbits/sec
 
I have hacked up further patches to implement vhost-net...

A modified QemuServer.pm is available at http://www.firenzee.com/vhostnet/QemuServer.pm

A patched package for qemu-kvm is available at http://www.firenzee.com/vhostnet/pve-qemu-kvm_0.12.5-2_amd64.deb
This version integrates the patches from Fedora Core 13, which backport vhost-net to kvm 0.12.5 (and a few other misc changes; it was easier to apply the whole patchset than to split it out).

And the sourcecode for the above is available at http://www.firenzee.com/vhostnet/pve-qemu-kvm_2010-10-08-vhostnet.tar.bz2
Note: you need to install pve-headers-2.6.35-1-pve, as the vhost-net support requires kernel headers newer than 2.6.34.

You also need to be running a 2.6.34 or newer kernel with vhost-net support either compiled in or available as a module; the 2.6.35 kernel supplied by Proxmox includes this support.
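To sum up the prerequisites as commands (the apt-get/modprobe steps are just the obvious way to do it, adjust to taste):

Code:
# kernel headers required by the vhost-net backport
apt-get install pve-headers-2.6.35-1-pve

# the patched qemu-kvm package and the modified QemuServer.pm linked above
wget http://www.firenzee.com/vhostnet/pve-qemu-kvm_0.12.5-2_amd64.deb
dpkg -i pve-qemu-kvm_0.12.5-2_amd64.deb
wget -O /usr/share/perl5/PVE/QemuServer.pm http://www.firenzee.com/vhostnet/QemuServer.pm

# the vhost-net module must be loaded before guests are started
modprobe vhost_net
lsmod | grep vhost_net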

I have a Windows 2003 and several Linux (Gentoo, kernel 2.6.34-hardened-r4) machines booted up with it... this setup seems able to compete with native performance over the NICs in my HP servers.

Any feedback is welcome
 
Hi bert64,

after applying the patch my virtual machine failed to start. Please help...

error:

Oct 21 21:03:43 st-kvm-02 qm[4473]: VM 101 start failed: command '/usr/bin/kvm -monitor unix:/var/run/qemu-server/101.mon,server,nowait -vnc unix:/var/run/qemu-server/101.vnc,password -pidfile /var/run/qemu-server/101.pid -daemonize -usbdevice tablet -name ST-BKU-01 -smp sockets=1,cores=2 -nodefaults -boot menu=on -vga cirrus -tdf -k de -drive if=ide,index=2,media=cdrom -drive file=/dev/stor03-arch01/vm-101-disk-1,if=virtio,index=0,cache=none,boot=on -drive file=/dev/stor03-arch01/vm-101-disk-2,if=virtio,index=1 -m 2048 -netdev type=tap,id=vmtab101i0,ifname=vmtab101i0,script=/var/lib/qemu-server/bridge-vlan,vhost=on -device virtio-net-pci,mac=1E:D5:5B:E0:9C:96,netdev=vmtab101i0' failed with exit code 1

What have I done wrong?
 
Did you install both the QemuServer.pm and the updated qemu-kvm deb package?
What kernel version are you using, and is the vhost-net module loaded?
 
Ah,

vhost-net wasn't loaded. Now it's working, but the iperf performance is now about 906 Mbits/sec. With the patched QemuServer.pm from the first post I had about 924 Mbits/sec. Do you know if I could raise the performance with my teamed host NICs (4 NICs, 802.3ad)?
 
Strange, performance shouldn't go down with vhost-net enabled...
I guess bonded NICs should improve performance, though I haven't tried... I was able to get about 3.8 Gbit/sec testing iperf between host and guest, and performance over the NIC was within a margin of error with -netdev both with and without vhost-net... though I only have a single gigabit NIC...
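For what it's worth, an 802.3ad bond on the host side is normally defined in /etc/network/interfaces and bridged into vmbr0, roughly like this (the interface names, address and bridge name are just an example, untested here):

Code:
auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_mode 802.3ad
        bond_miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

Bear in mind that 802.3ad hashes each connection onto a single physical link, so one iperf stream will still top out around 1 Gbit/sec; the gain shows up in aggregate across several guests or streams.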

vhost-net is also supposed to reduce latency, have you noticed any improvements there?
 
OK, I just uploaded a new version to the pvetest repository:

pve-qemu-kvm_0.13.0-2_amd64.deb
qemu-server_1.1-23_amd64.deb

That new qemu-server package loads the vhost-net module on startup, uses the new -device syntax, and uses vhost=on if available. Can you please test it?
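One quick way to confirm a VM picked up the new options after upgrading is to look at its kvm command line once it has been (re)started, for example:

Code:
# the -netdev/-device options and vhost=on should show up here
ps ax | grep /usr/bin/kvm | grep -- -netdev
# and the module should be loaded on the host
lsmod | grep vhost_net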
 
