Low network performance in KVM guests

Hi.

First, let me apologize for making another thread about this issue.
I've read through everything I could find on this forum and it seems I am not alone.
But none of the suggested solutions helped me, and I am about to put my head through the wall in frustration.

As the title suggests, this is about bad network performance in my KVM guests.

Let me draw you a picture:

host <-> kvm guest running debian latest stable 6.0 x64
=======================================================
host# iperf -c 192.168.1.2 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.2, TCP port 5001
TCP window size: 19.6 KByte (default)
------------------------------------------------------------
[ 5] local 192.168.1.1 port 47530 connected with 192.168.1.2 port 5001
[ 4] local 192.168.1.1 port 5001 connected with 192.168.1.2 port 52194
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.2 sec 2.45 GBytes 2.06 Gbits/sec
[ 4] 0.0-10.3 sec 287 MBytes 234 Mbits/sec


host <-> openvz guest running debian 6.0 (standard template)
============================================================
# iperf -c 192.168.1.20 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.20, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.1.1 port 40428 connected with 192.168.1.20 port 5001
[ 5] local 192.168.1.1 port 5001 connected with 192.168.1.20 port 47610
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.25 GBytes 1.07 Gbits/sec
[ 5] 0.0-10.0 sec 844 MBytes 707 Mbits/sec

host <-> kvm guest running windows 2008 R2 x64
===============================================
# iperf -c 192.168.1.22 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.22, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.1.1 port 57958 connected with 192.168.1.22 port 5001
[ 5] local 192.168.1.1 port 5001 connected with 192.168.1.22 port 49521
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 459 MBytes 385 Mbits/sec
[ 5] 0.0-14.0 sec 310 MBytes 186 Mbits/sec

host local test
===============
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 49.7 KByte (default)
------------------------------------------------------------
[ 6] local 127.0.0.1 port 57935 connected with 127.0.0.1 port 5001
[ 5] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 57935
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 4.94 GBytes 4.24 Gbits/sec
[ 6] 0.0-10.0 sec 6.54 GBytes 5.62 Gbits/sec


As you can clearly see, the performance of the KVM guests is very bad, and as a result I can't utilize my gbit network properly.
The worst is obviously the Windows guest.
The KVM guests are using virtio for both HD and NET.
Tried e1000, which made it worse.
Tried EVERY single virtio driver, including the latest 1.4.0 directly from Red Hat.
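
For reference, switching the model is just a matter of changing the network line in the VM config (full configs further down), along these lines:

Code:
# virtio NIC (what I normally use), from /etc/qemu-server/104.conf:
vlan0: virtio=9A:51:4A:B6:07:03
# the same line for the e1000 test:
vlan0: e1000=9A:51:4A:B6:07:03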

Now some facts and config files:

# pveversion -v
pve-manager: 1.9-26 (pve-manager/1.9/6567)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.9-50
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.32-6-pve: 2.6.32-50
qemu-server: 1.1-32
pve-firmware: 1.0-14
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-3pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.0-6

# pveperf
CPU BOGOMIPS: 12003.20
REGEX/SECOND: 715447
HD SIZE: 36.67 GB (/dev/mapper/pve-root)
BUFFERED READS: 102.04 MB/sec
AVERAGE SEEK TIME: 11.39 ms
FSYNCS/SECOND: 766.28
DNS EXT: 42.11 ms
DNS INT: 32.32 ms

# lspci
00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03)
00:01.0 PCI bridge: Intel Corporation 4 Series Chipset PCI Express Root Port (rev 03)
00:1a.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #4
00:1a.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #5
00:1a.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
00:1a.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2
00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller
00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1
00:1c.3 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 4
00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5
00:1c.5 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 6
00:1d.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
00:1d.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
00:1d.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
00:1d.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
00:1f.0 ISA bridge: Intel Corporation 82801JIB (ICH10) LPC Interface Controller
00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller
00:1f.3 SMBus: Intel Corporation 82801JI (ICH10 Family) SMBus Controller
01:00.0 VGA compatible controller: ATI Technologies Inc RV530 [Radeon X1600]
01:00.1 Display controller: ATI Technologies Inc RV530 [Radeon X1600] (Secondary)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
04:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller
05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
06:00.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)
06:01.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)

# cat /etc/qemu-server/104.conf
name: windows2008kvm
bootdisk: virtio0
ostype: w2k8
virtio0: local:104/vm-104-disk-1.raw
memory: 2048
onboot: 1
sockets: 1
cores: 2
vlan0: virtio=9A:51:4A:B6:07:03

# cat /etc/qemu-server/107.conf
name: debian6kvm
bootdisk: virtio0
ostype: l26
virtio0: local:107/vm-107-disk-1.raw
memory: 2048
sockets: 1
onboot: 1
cores: 2
virtio1: /dev/sdb
virtio2: /dev/sdc
vlan0: virtio=7E:59:1D:CF:A1:22


If anyone can help me solve this I'll give you a medal! :)
 
Hi,
I guess CPU power is the limiting factor. Perhaps also the real network card - which one do you use, the RTL or the Intel? Perhaps you can try with a dummy bridge (host-internal traffic only).
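
For example, such a host-only bridge can be defined in /etc/network/interfaces roughly like this (name and address are only examples) and the test VMs attached to it instead of the normal bridge:

Code:
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0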

Short test on one server:
host to squeeze-kvm
Code:
# iperf -c 172.20.1.110 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 172.20.1.110, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  5] local 172.20.1.12 port 33605 connected with 172.20.1.110 port 5001
[  4] local 172.20.1.12 port 5001 connected with 172.20.1.110 port 41373
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  1.33 GBytes  1.14 Gbits/sec
[  4]  0.0-10.0 sec    634 MBytes    531 Mbits/sec
host to xp-vm (e1000 nic):
Code:
# iperf -c 172.20.1.152 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 172.20.1.152, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  4] local 172.20.1.12 port 43186 connected with 172.20.1.152 port 5001
[  5] local 172.20.1.12 port 5001 connected with 172.20.1.152 port 3009
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.28 GBytes  1.10 Gbits/sec
[  5]  0.0-10.2 sec  47.7 MBytes  39.1 Mbits/sec
performance (4core 3.6GHz AMD 965):
Code:
# pveperf /var/lib/vz
CPU BOGOMIPS:      29475.49
REGEX/SECOND:      1204996
HD SIZE:           543.34 GB (/dev/mapper/pve-data)
BUFFERED READS:    410.43 MB/sec
AVERAGE SEEK TIME: 5.74 ms
FSYNCS/SECOND:     4285.98
DNS EXT:           64.60 ms
DNS INT:           0.47 ms
Udo
 
Thank you very much for your reply Udo.
I tried a dummy bridge (again) and the results are about the same.
My focus right now is on the windows guest since that's the worst, and I am beginning to think there is something seriously wrong with the OS itself.

Can you tell me which intel/virtio driver you are using in your guests?
Driver version and date.

Installing WinXP into a new guest now to test for comparison.

EDIT: The Intel cards are used for LAN; the Realtek cards are only used for some xDSL WAN connections that are not relevant right now :p
 
I'm running a Server 2008 R2 x64 system without any issues... I've run iperf:

# iperf -c ***.***.***.*** -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to ***.***.***.***, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.100.80 port 45540 connected with ***.***.***.*** port 5001
[ 5] local 192.168.100.80 port 5001 connected with ***.***.***.*** port 50640
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 85.2 MBytes 71.4 Mbits/sec
[ 5] 0.0-10.1 sec 38.0 MBytes 31.6 Mbits/sec

Could it be that there is a network config on your host that is causing slowdowns?

I have 2 NICs but only have one connected currently...

According to your lspci readout, you have 4 NICs?
 

No offence, but your performance is below 100 Mbit, which is even worse than mine.
I am guessing you have a 100 Mbit NIC in your guest?
Either way I fail to see how your case is relevant to mine :)

And yes, I have 4 NICs, 2 Intel and 2 Realtek. 3 of them are connected to WAN routers and the last one (Intel) is connected to the LAN gbit switch.
 
Ah, yes I do have a 100 Mbit NIC in that guest - thanks for pointing it out, I'd missed that! Just reconfiguring atm... will redo iperf and post back...

We're both running Server 2008 R2 x64 on a Proxmox host - you were saying that you thought it might have been an OS issue... I was just saying that mine was working OK :)
 
OK - so I changed the card to an e1000...

host <-> Windows Server 2008 R2 x64

------------------------------------------------------------
Client connecting to ***.***.***.***, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 5] local 192.168.100.81 port 44828 connected with ***.***.***.*** port 5001
[ 4] local 192.168.100.81 port 5001 connected with ***.***.***.*** port 49198
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 632 MBytes 530 Mbits/sec
[ 4] 0.0-10.0 sec 178 MBytes 149 Mbits/sec
 
I might have been unclear - my English ain't the best. What I meant to say was along the lines of "I don't find a test with 100 Mbit relevant to my problem with gbit", sorry for that :)
But yeah, thanks for doing another one with a 1 Gbit NIC - can you tell me which e1000 driver you use inside the Win2008 guest?
 
Hi,
Inside the XP VM I use the e1000 driver 14.0.40.0 from Intel, but with the network tuning from the KVM web page: http://www.linux-kvm.org/page/WindowsGuestDrivers/kvmnet/registry
With the virtio network driver on Windows I had some issues (last tried a few weeks ago) - e1000 is perhaps not as fast, but more reliable.
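
For example, one typical tweak of that kind looks roughly like this (check the page above for the exact values it actually recommends; run from an admin command prompt and reboot afterwards):

Code:
REM disable TCP task offloading for the virtual NIC
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableTaskOffload /t REG_DWORD /d 1 /f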

Udo
 
Just the default e1000 driver that installs with Server 2008... I've not installed any extra drivers or run any updates on it...

Provider: Microsoft
Date: 28/05/2008
Version: 8.4.1.0
 
Just did some tests using the latest kernel from today for 1.9 - got great performance:

Win2008R2 with virtio (Fedora drivers) to host (4-year-old CPU):

Code:
C:\iperf-2.0.5-cygwin>iperf.exe -c proxmox-host
------------------------------------------------------------
Client connecting to mits2, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.122 port 52215 connected with 192.168.2.103 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.1 sec 2.79 GBytes 2.38 Gbits/sec

Same guest, Win2008R2 with virtio, to another server on the LAN:

Code:
C:\iperf-2.0.5-cygwin>iperf.exe -c 192.168.7.61
------------------------------------------------------------
Client connecting to 192.168.7.61, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.122 port 52213 connected with 192.168.7.61 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.2 sec   756 MBytes   619 Mbits/sec


Now moving to 2.0 beta, running on an IMS (Intel Modular Server, much newer CPU):

Code:
C:\iperf-2.0.5-cygwin>iperf.exe -c 192.168.7.61
------------------------------------------------------------
Client connecting to 192.168.7.61, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.215 port 49175 connected with 192.168.7.61 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  4.68 GBytes  4.00 Gbits/sec

great!
 
Hi again, sorry for bumping my old topic but I still haven't found a solution for this.

First of all, let me say that nothing has changed in terms of configuration and hardware.

Since the last time, I've tried XenServer on the same hardware with the same guest OS and the same settings, without any problems.
But due to the need for a mixed platform with both KVM and VZ, and the great features Proxmox has, I've returned to Proxmox.
So Proxmox itself is freshly installed from the latest stable.

So we can rule out that it's a hw issue.

I've (again) tried both e1000 and virtio and every single driver version to this date without success.

So let me sum it up:
Windows guests (XP/2003/2008 R2) are the only ones I am having severe performance issues with.
Linux guests (both KVM and VZ), through the exact same bridge, both external and internal, have no problems.
XenServer + Windows guests on the same hardware work great.

I am not trying to be rude or complain; I am just in desperate need of someone to look at this with fresh eyes, and I hope someone has ideas!
I am open to anything.


pve-manager: 1.9-26 (pve-manager/1.9/6567)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.9-47
pve-kernel-2.6.32-6-pve: 2.6.32-55+ovzfix-1
qemu-server: 1.1-32
pve-firmware: 1.0-15
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-3pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-2
ksm-control-daemon: 1.0-6z


-phat
 
I am not of much help, but I'm seeing the same results. My server and client are both on a gig switch, but I only get about 15 MB/s of throughput on SMB shares - any ideas? Here is my situation:

Proxmox 2.0
2008 x64 guest (only VM provisioned) using virtio for LAN/HDD, with Write Back cache on the RAW disk

Dell T410 2x 2.40 Xeon (8 cores to VM)
16gb RAM (12 to VM)
Perc6i w/ 512mb cache + battery
2x 500 GB SATA drives (I know this is the weakest link)

Code:
root@pve1:~# pveperf
CPU BOGOMIPS:      36174.78
REGEX/SECOND:      628301
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    123.33 MB/sec
AVERAGE SEEK TIME: 8.95 ms
FSYNCS/SECOND:     2136.91
 
Hi,
Only a wild guess, but have you tried disabling HT in the BIOS and using only 4 cores in the VM (because one CPU has only 4 real cores)?

To see whether the network or the disk is the problem, you can also use iperf to measure the throughput (jperf has an iperf.exe inside the package) in both directions.
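
For example (replace the address with the VM's IP):

Code:
# on the Windows guest (cmd):  iperf.exe -s
# on the Proxmox host (tests host->guest, then guest->host):
iperf -c <vm-ip> -r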

Udo
 
Sorry for unearthing this topic, but I'm wondering if you ever found a solution to your problem? (It looks like I'm currently in exactly the same situation.)

In fact I don't use Proxmox, but Fedora 19 (x86-64), so I have nearly the latest kernel/kvm/qemu/libvirt package versions.
I set up a Windows XP Pro SP3 guest on top of this. It's configured with a macvtap device and the network works, but it's horribly slow! (iperf says ~6 Mbit/s on a 100 Mbit/s LAN link.) I confirmed these poor results with an FTP transfer. I tested with a Fedora 19 guest VM on the same host (same hardware, same VM XML config, same LAN link) => there the performance is normal (~90 Mbit/s).

I'm pretty sure everything is correctly configured: virtio (with up-to-date Windows drivers), vhost=on - see the sketch after this list.
I activated checksum offload in the virtio Windows driver configuration, but it's the same.
I tested with the e1000 driver instead, but it's the same.
I tested with only one core in the VM, but it's the same.
I also tuned the Windows registry with the tips from the KVM website, but it's the same.
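
Roughly what the relevant part of my domain XML looks like (a minimal sketch; eth0 stands for the host NIC):

Code:
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
  <driver name='vhost'/>
</interface>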

That said, with iperf the default TCP window size on Windows is 64k. When I set it to 256k or more, I get good performance (~90 Mbit/s)! But I get slow performance when I go down to 128k or less.
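
For example, forcing the window on both ends, roughly like this, is what gives me the good numbers:

Code:
# on the XP guest (cmd):
iperf.exe -s -w 256k
# on the other end of the link:
iperf -c <guest-ip> -w 256k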

My NIC is an Intel Gigabit Desktop.
I tested with Oracle Linux 6.4 too (kernel 3.0); it's the same poor result with the Windows guest.
 
