Bad Network Performance with KVM Guests

Hi,

My KVM guests have poor network performance. I have tried several things, like changing the adapter type...
Here are some iperf tests:

Host1 to Host2: 942Mbit/s
Host1 to OpenVZ Guest on Host2: 941Mbit/s
KVM Guest (virtio) on Host1 to Host2: 300 - 442Mbit/s
KVM Guest (e1000) on Host1 to Host2: 211Mbit/s
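
For reference, results like these typically come from plain iperf runs along these lines (the IP is just an example from this thread; the exact options may have differed):

Code:
# receiving side (e.g. on the target host):
iperf -s

# sending side - a 30 s test with a report every 10 s:
iperf -c 10.0.0.8 -t 30 -i 10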

While running these iperf tests, the KVM guests generate a high CPU load on the host. Perhaps this is normal; I'm not sure.
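
One way to watch the host CPU load while such a test runs (just a sketch):

Code:
# on the host, during the iperf run:
vmstat 1    # watch the us/sy/id columns
top         # or look at the kvm process directly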

pveversion Host1:
Code:
pve-manager/1.5/4561
mabumba:~# pveversion --all
Unknown option: all
USAGE: pveversion [--verbose]
mabumba:~# pveversion --verbose
pve-manager: 1.5-1 (pve-manager/1.5/4561)
running kernel: 2.6.32-1-pve
proxmox-ve-2.6.32: 1.5-2
pve-kernel-2.6.32-1-pve: 2.6.32-2
pve-kernel-2.6.24-9-pve: 2.6.24-18
pve-kernel-2.6.24-8-pve: 2.6.24-16
qemu-server: 1.1-10
pve-firmware: 1.0-3
libpve-storage-perl: 1.0-6
vncterm: 0.9-2
vzctl: not correctly installed
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.11.1-1
ksm-control-daemon: 1.0-2
pveversion Host2:
Code:
pve-manager: 1.4-10 (pve-manager/1.4/4403)
qemu-server: 1.1-8
pve-kernel: 2.6.24-16
pve-qemu-kvm: 0.11.0-2
pve-firmware: 1
vncterm: 0.9-2
vzctl: 3.0.23-1pve3
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
Does anyone have an idea what I could do to get better network performance in the KVM guests?

Thanks!
Tobias
 
Hi,
I tried a short test - KVM (virtio) on host1 to KVM (e1000) on host2 - and got results between 494 and 554 Mbit/s. The same in the other direction.
CPU load goes up (50-100%).
What kind of CPUs (performance-wise) does your host have?
What NIC?

Udo
 
Here are some hardware details:

Code:
host1:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 107
model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 5000+
stepping        : 2
cpu MHz         : 2600.000
cache size      : 512 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch
bogomips        : 5200.24
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc 100mhzsteps

processor       : 1
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 107
model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 5000+
stepping        : 2
cpu MHz         : 2600.000
cache size      : 512 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch
bogomips        : 5200.06
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc 100mhzsteps
Ethernet controller: nVidia Corporation MCP77 Ethernet (rev a2)

Code:
host2:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU           E5320  @ 1.86GHz
stepping        : 11
cpu MHz         : 1861.912
cache size      : 4096 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips        : 3727.08
clflush size    : 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU           E5320  @ 1.86GHz
stepping        : 11
cpu MHz         : 1861.912
cache size      : 4096 KB
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips        : 3723.83
clflush size    : 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:

processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU           E5320  @ 1.86GHz
stepping        : 11
cpu MHz         : 1861.912
cache size      : 4096 KB
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips        : 3723.85
clflush size    : 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:

processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU           E5320  @ 1.86GHz
stepping        : 11
cpu MHz         : 1861.912
cache size      : 4096 KB
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr dca lahf_lm
bogomips        : 3723.85
clflush size    : 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:
Ethernet controller: Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express (rev 21)
 
Hi,
your AMD host is comparable to one of my test hosts.
But I ran a second test against the host. Strangely, the bandwidth from a KVM guest to the host is not as good as to another KVM guest - host to host is fine (930 Mbit/s). The results are comparable to yours.
I should add that the servers are not idle right now - not very busy, but there is some network traffic.

Udo
 
I have the same issue. I am running kernel 2.6.32 and using virtio drivers for both network and disk. The network performance between host and guest is around 450 Mbit/s. I have tried running iperf tests on all my KVM guests and get around the same speeds. I thought the virtio network driver was supposed to reach near-native speeds? Is anyone else having the same performance issues, or does anyone have a solution?

- Garrett
 
Wouldn't this be expected behavior? The network card and other "virtual hardware" is emulated in software, so it uses CPU cycles - like the software modems in the old days...
 
Not really. I don't expect native network performance, but around 10% less would be OK.

I did some further testing and ran iperf on my two different hosts (see the post above). This is what I get:

virtio
Code:
KVM on Host2 -> Host2
[  3] local 10.0.0.26 port 56369 connected with 10.0.0.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    843 MBytes    707 Mbits/sec
[  3] 10.0-20.0 sec    876 MBytes    735 Mbits/sec
[  3] 20.0-30.0 sec    881 MBytes    739 Mbits/sec
[  3]  0.0-30.0 sec  2.54 GBytes    727 Mbits/sec

KVM on Host2 -> Host1
[  3] local 10.0.0.26 port 47450 connected with 10.0.0.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    721 MBytes    605 Mbits/sec
[  3] 10.0-20.0 sec    710 MBytes    596 Mbits/sec
[  3] 20.0-30.0 sec    720 MBytes    604 Mbits/sec
[  3]  0.0-30.0 sec  2.10 GBytes    601 Mbits/sec

KVM on Host1 -> Host1
[  3] local 10.0.0.26 port 47451 connected with 10.0.0.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    223 MBytes    187 Mbits/sec
[  3] 10.0-20.0 sec    299 MBytes    251 Mbits/sec
[  3] 20.0-30.0 sec    326 MBytes    273 Mbits/sec
[  3]  0.0-30.0 sec    848 MBytes    237 Mbits/sec

KVM on Host1 -> Host2
[  3] local 10.0.0.26 port 35310 connected with 10.0.0.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    278 MBytes    233 Mbits/sec
[  3] 10.0-20.0 sec    396 MBytes    332 Mbits/sec
[  3] 20.0-30.0 sec    402 MBytes    337 Mbits/sec
[  3]  0.0-30.0 sec  1.05 GBytes    301 Mbits/sec
e1000
Code:
KVM on Host2 -> Host2
[  3] local 10.0.0.26 port 58438 connected with 10.0.0.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    265 MBytes    222 Mbits/sec
[  3] 10.0-20.0 sec    259 MBytes    218 Mbits/sec
[  3] 20.0-30.0 sec    274 MBytes    230 Mbits/sec
[  3]  0.0-30.0 sec    799 MBytes    223 Mbits/sec

KVM on Host2 -> Host1
[  3] local 10.0.0.26 port 33318 connected with 10.0.0.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    243 MBytes    204 Mbits/sec
[  3] 10.0-20.0 sec    231 MBytes    194 Mbits/sec
[  3] 20.0-30.0 sec    217 MBytes    182 Mbits/sec
[  3]  0.0-30.0 sec    692 MBytes    194 Mbits/sec

KVM on Host1 -> Host1
[  3] local 10.0.0.26 port 53634 connected with 10.0.0.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    225 MBytes    189 Mbits/sec
[  3] 10.0-20.0 sec    226 MBytes    189 Mbits/sec
[  3] 20.0-30.0 sec    228 MBytes    191 Mbits/sec
[  3]  0.0-30.0 sec    679 MBytes    190 Mbits/sec

KVM on Host1 -> Host2
[  3] local 10.0.0.26 port 39063 connected with 10.0.0.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    219 MBytes    184 Mbits/sec
[  3] 10.0-20.0 sec    220 MBytes    184 Mbits/sec
[  3] 20.0-30.0 sec    226 MBytes    190 Mbits/sec
[  3]  0.0-30.0 sec    665 MBytes    186 Mbits/sec
The network performance of the KVM guest on host2 is much better than the same guest on host1, and virtio performs much better than e1000. I think host1 is the problem: perhaps the hardware is too slow (I don't think so), or the network card driver is bad (that's my guess - the nVidia Ethernet controller is probably not the best-supported NIC on Linux).
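
If you want to check which kernel driver is actually bound to the NIC, something like this should do (assuming the interface is eth0):

Code:
lspci | grep -i ethernet   # identify the card
ethtool -i eth0            # driver name/version bound to the interface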

Tobias
 
Not all network cards are created equal; you may want to try replacing the nVidia network card. Intel network cards are usually the best.
 
Can anyone post their performance using Intel cards? I want to see if that would justify the cost of buying them. These are the network cards in my system:

02:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8053 PCI-E Gigabit Ethernet Controller (rev 20)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)

My performance from KVM guest to host is around 430-500 Mbit/s. Tests from a host to the actual Proxmox host run around 940-980 Mbit/s. A 50% drop in performance is not what I expect from a near-native claim; as tobru stated, a 10% performance hit is what I would expect. My hardware is an AMD 9150e 1.8 GHz quad-core CPU with 8 GB of DDR2 800 RAM. I would think that this system should get good performance using the virtio_net driver. I would buy a few Intel cards, as they are not that expensive, but I am limited on slots in the machine.

- Garrett
 
Hi,

I added the following card to my AMD system:

Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)

Here are some iperf tests:

Code:
KVM on Host1 -> Host1
[  3] local 10.0.0.26 port 43720 connected with 10.0.0.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    367 MBytes    308 Mbits/sec
[  3] 10.0-20.0 sec    468 MBytes    393 Mbits/sec
[  3] 20.0-30.0 sec    432 MBytes    362 Mbits/sec
[  3]  0.0-30.0 sec  1.24 GBytes    354 Mbits/sec

KVM on Host1 -> Host2
[  3] local 10.0.0.26 port 55119 connected with 10.0.0.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    357 MBytes    300 Mbits/sec
[  3] 10.0-20.0 sec    549 MBytes    461 Mbits/sec
[  3] 20.0-30.0 sec    549 MBytes    460 Mbits/sec
[  3]  0.0-30.0 sec  1.42 GBytes    407 Mbits/sec

Host1 -> Host2
[  3] local 10.0.0.10 port 54784 connected with 10.0.0.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes    941 Mbits/sec
[  3] 10.0-20.0 sec  1.10 GBytes    943 Mbits/sec
[  3] 20.0-30.0 sec  1.10 GBytes    941 Mbits/sec
[  3]  0.0-30.0 sec  3.29 GBytes    942 Mbits/sec
As you can see, it's a little better than before, but still not what it should be.
I'll try to get an Intel card and run these tests again. But the Broadcom card is a high-end card, so I'm not sure the Intel card will do better.
 
Would anyone with an Intel card like to post their iperf results? KVM-to-host and KVM-to-KVM tests would be nice, ideally using the virtio driver as well.

- Garrett
 
Hi,
I ran a test with two new servers with Intel cards:
Code:
host1 -> host2
[  3]  0.0-10.0 sec  1.10 GBytes    941 Mbits/sec

kvm(host2) -> host2
[  4]  0.0-10.0 sec    600 MBytes    503 Mbits/sec

kvm(host2 -> host1
[  4]  0.0-10.0 sec  1018 MBytes    851 Mbits/sec

kvm(host2) -> kvm(host1)
[  4]  0.0-10.0 sec  1.07 GBytes    919 Mbits/sec
This doesn't look too bad... but these servers also have more power than the ones from the first test.

Udo
 
02:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8053 PCI-E Gigabit Ethernet Controller (rev 20)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)

- Garrett


Good Morning

As proposed, go with an Intel NIC - use a dual-port one.
There is a known issue with the current built-in Realtek driver for the RTL8111/8168B: if the Realtek NIC is under heavy traffic, stability gets flaky and the eth0 link goes up and down (timeouts).

You need to compile the latest Realtek driver yourself.
I found this accidentally here: http://wiki.hetzner.de/index.php/Installation_des_r8168-Treibers (in German), but anyhow it shows how to compile the driver.
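
The steps from that page boil down to roughly this (the version number is only an example; grab the current tarball from the Realtek site):

Code:
# build environment and kernel headers:
apt-get install build-essential linux-headers-$(uname -r)

# unpack the r8168 source from Realtek, then:
tar xjf r8168-8.016.00.tar.bz2   # version number is an example
cd r8168-8.016.00
make clean modules && make install
depmod -a

# keep the in-kernel r8169 driver from grabbing the card:
echo "blacklist r8169" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u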
 
Hi,

Finally I added a dual-port Intel NIC to the AMD system.

03:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
03:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)

Here are the iperf results:

Code:
KVM on Host1 -> Host1
[  3] local 10.0.0.26 port 36601 connected with 10.0.0.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    472 MBytes    396 Mbits/sec
[  3] 10.0-20.0 sec    610 MBytes    512 Mbits/sec
[  3] 20.0-30.0 sec    600 MBytes    503 Mbits/sec
[  3]  0.0-30.0 sec  1.64 GBytes    470 Mbits/sec

KVM on Host1 -> Host2
[  3] local 10.0.0.26 port 35537 connected with 10.0.0.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec    626 MBytes    525 Mbits/sec
[  3] 10.0-20.0 sec    657 MBytes    551 Mbits/sec
[  3] 20.0-30.0 sec    661 MBytes    555 Mbits/sec
[  3]  0.0-30.0 sec  1.90 GBytes    544 Mbits/sec

Host1 -> Host2
[  3] local 10.0.0.10 port 43309 connected with 10.0.0.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes    943 Mbits/sec
[  3] 10.0-20.0 sec  1.09 GBytes    938 Mbits/sec
[  3] 20.0-30.0 sec  1.10 GBytes    942 Mbits/sec
[  3]  0.0-30.0 sec  3.29 GBytes    940 Mbits/sec
A little better than with the Broadcom card and much better than with the nVidia card. But it looks like this is all I can get out of that system...
 
But isn't virtio supposed to be very close to hardware - compared to the ordinary vNIC settings, I mean?

That said, using the Realtek vNIC for Windows guests, Proxmox networking felt like a rocket compared to Hyper-V in my tests on my old server.
 
virtio is not reliable for Windows guests, but Linux with the latest kernels might be okay...
 
I've been running stock Debian 5 without many issues, but switched to e1000 just to be on the safe side; it's your call.
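
Side note for anyone looking for where the adapter type lives in Proxmox 1.x: as far as I remember it is the vlanX line in the VM config file - treat the exact syntax as an assumption and check your own config:

Code:
# /etc/qemu-server/101.conf  (VMID 101 and the MAC are examples)
vlan0: e1000=DE:AD:BE:EF:00:01
# or the paravirtualized model:
vlan0: virtio=DE:AD:BE:EF:00:01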
 
There is a known issue with the current build in Realtek driver RTL8111/8168B
If the Realtek NIC is under heavy traffic the the stability is flaky and the eth0 link goes up and down. (timeouts)
Is this still true with the Proxmox 2.6.32 kernel? Do I have to compile a new driver for this hardware?
I am not sure, but I think I have seen the network on my host become flaky under load.
 
Hello,
I have the same problems with my environment. I'm not using Proxmox, but I see identical problems with my KVM guests!

My KVM host machine is connected to a 10Gbit network. All interfaces are configured with an MTU of 4132. On the host itself I have no problems and can use the full bandwidth (iperf results below).
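
(For reference, roughly how such an MTU is set with iproute2 - the interface name is an assumption:)

Code:
ip link set dev eth0 mtu 4132
ip link show eth0    # verify the new MTU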

CPU_Info:
2x Intel Xeon X5570
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida tpr_shadow vnmi flexpriority ept vpid

KVM Version:
QEMU PC emulator version 0.12.3 (qemu-kvm-0.12.3), Copyright (c) 2003-2008 Fabrice Bellard
0.12.3+noroms-0ubuntu9

KVM Host Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Host OS:
Ubuntu 10.04 LTS
Codename: lucid

KVM Guest Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Guest OS:
Ubuntu 10.04 LTS
Codename: lucid

KVM-Host
# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P4
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-60.0 sec 18.8 GBytes 2.69 Gbits/sec
[ 5] 0.0-60.0 sec 15.0 GBytes 2.14 Gbits/sec
[ 6] 0.0-60.0 sec 19.3 GBytes 2.76 Gbits/sec
[ 3] 0.0-60.0 sec 15.1 GBytes 2.16 Gbits/sec
[SUM] 0.0-60.0 sec 68.1 GBytes 9.75 Gbits/sec


Inside a virtual machine I don't reach this result:
# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P 4
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 5.65 GBytes 808 Mbits/sec
[ 4] 0.0-60.0 sec 5.52 GBytes 790 Mbits/sec
[ 5] 0.0-60.0 sec 5.66 GBytes 811 Mbits/sec
[ 6] 0.0-60.0 sec 5.70 GBytes 816 Mbits/sec
[SUM] 0.0-60.0 sec 22.5 GBytes 3.23 Gbits/sec

I can only use 3.23 Gbit/s of the 10 Gbit/s. I use the virtio driver for all of my VMs, but I have also tried the e1000 NIC device instead.

When I start the iperf test on multiple VMs simultaneously, together they can use the full bandwidth of the KVM host's interface; a single VM, however, cannot. Is this a known limitation, or can I improve this performance?

Does anyone have an idea how I can improve my network performance? It's very important, because I want to use the network interface to boot all VMs via AOE (ATA over Ethernet).
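
For context, attaching an AOE device looks roughly like this (assuming the aoetools package; the shelf/slot device e0.0 is an example):

Code:
modprobe aoe                  # load the AoE initiator
aoe-discover                  # scan the wire for AoE targets
aoe-stat                      # list the targets that were found
mount /dev/etherd/e0.0 /mnt   # device name is an example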

If I mount a hard disk via AOE inside a VM, I only get these results:
Write  | CPU | Rewrite | CPU | Read   | CPU
102440 | 10  | 51343   | 5   | 104249 | 3

On the KVM host I get these results on a mounted AOE device:
Write  | CPU | Rewrite | CPU | Read   | CPU
205597 | 19  | 139118  | 11  | 391316 | 11

If I mount the AOE device directly on the KVM host and put a virtual hard disk file on it, I get the following results inside a VM using this disk file:
Write  | CPU | Rewrite | CPU | Read   | CPU
175140 | 12  | 136113  | 24  | 599989 | 29

I have already upgraded to a newer kernel version (2.6.35-6) with vhost_net support and compiled a new qemu-kvm from git://git.kernel.org/pub/scm/linux/kernel/git/mst/qemu-kvm.git (0.12.50), but without success.
Perhaps the vhost_net "upgrade" will work for you! Do you still have those problems, or did you find a solution?

Here are two links about vhost_net support:
http://www.linux-kvm.org/page/VhostNet
http://developer.cybozu.co.jp/tech/2010/06/ubuntu-1004-kvm.html <-- Google will help you to translate it ;)
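
For completeness, enabling vhost_net is roughly this (assuming a qemu-kvm build with vhost support; the option syntax may vary between versions):

Code:
# load the kernel-side virtio-net backend:
modprobe vhost_net

# start the guest with a tap NIC handled by vhost (other options omitted):
kvm ... -netdev tap,id=net0,vhost=on -device virtio-net-pci,netdev=net0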

best regards
 
