Proxmox 2 RC updated today, slow virtio performance from win2k8r2 guest to host

glanc

Renowned Member
Mar 19, 2010
Proxmox 2 RC updated; slow performance from a win2k8r2 guest to the host! Host (Proxmox) to host (another physical host on the same gigabit switch) is near 1 Gbit. The latest virtio-net driver is installed in the guest (virtio-win-0.1-22.iso), and I tried the registry hack (actually using netsh on win2k8), but without any change. Can anyone who has fixed this issue please post the exact netsh command-line parameters to pass to Windows to get decent speed? Thanks a lot.
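
For reference, the kind of netsh settings in question are the usual Server 2008 R2 global TCP options, roughly along these lines (only a sketch of the commonly suggested tweaks, not a confirmed fix):

Code:
rem show the current global TCP settings
netsh int tcp show global
rem commonly suggested tweaks
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled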
 

Hi,
have you tried the e1000 driver?
Normally the e1000 driver's speed is good (not as good as virtio-net, but much more stable).
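
For reference, switching the NIC model is a one-line change in the VM config (a sketch assuming VM ID 100 and bridge vmbr1; the MAC is a placeholder), or it can be done in the web GUI:

Code:
# /etc/pve/qemu-server/100.conf - change the net0 model from virtio to e1000
net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr1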

Do you test with iperf?

Udo
 
Hi, how do you benchmark between your Windows guest and the Proxmox host? Is the guest running on the host you are trying to benchmark?


My win2003 tuning:

Code:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
"DefaultSendWindow"=dword:00100000
"DefaultReceiveWindow"=dword:00100000
"FastSendDatagramThreshold"=dword:00004000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"Tcp1323Opts"=dword:00000001
"TcpWindowSize"=dword:00100000
"TcpTimedWaitDelay"=dword:0000001e
"MaxUserPort"=dword:0000fffe
"MaxFreeTWTcbs"=dword:000007d0
"MaxFreeTcbs"=dword:00003e80
"NumTcbTablePartitions "=dword:00000020
"MaxHashTableSize"=dword:00010000
"EnableTCPChimney"=dword:00000000
"EnableRSS"=dword:00000000
"EnableTCPA"=dword:00000000


I also disable "offload tx lso" in the network card properties.
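
To apply the snippet above without clicking through regedit, it can be saved to a .reg file and imported from an elevated prompt (the filename here is just a placeholder); most of these keys only take effect after a reboot:

Code:
rem save the snippet above as tcp-tuning.reg, then import it
reg import tcp-tuning.reg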
 
Hello dear friend,

I'm testing with iperf on Windows and Linux. I used the virtio net and block drivers on a previous Proxmox 1.9 system without stability problems, and now I'm trying to do the same on Proxmox 2.0 RC. So far there are no stability issues; I'm just testing performance. Here is what I've tested so far:

FROM KVM Guest (0.210) (WinSBS2011 standard_virtio) TO Physical HOST (0.251)(Openmediavault Linux Debian)
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.0.251 port 5001 connected with 192.168.0.210 port 10805
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-61.7 sec 1.10 GBytes 153 Mbits/sec

FROM Physical HOST (0.251)(Openmediavault Linux Debian) TO KVM Guest (0.210) (WinSBS2011 standard_virtio)
C:\Windows\system32>iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[432] local 192.168.0.210 port 5001 connected with 192.168.0.251 port 51062
[ ID] Interval Transfer Bandwidth
[432] 0.0-58.5 sec 3.48 GBytes 511 Mbits/sec

FROM Physical HOST (0.251)(Openmediavault Linux Debian) TO Physical HOST (0.253)(Proxmox 2.0 rc)
------------------------------------------------------------
Client connecting to 192.168.0.253, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.251 port 56386 connected with 192.168.0.253 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 6.21 GBytes 889 Mbits/sec

FROM Physical HOST (0.253)(Proxmox 2.0 rc) TO Physical HOST (0.251)(Openmediavault Linux Debian)
------------------------------------------------------------
Client connecting to 192.168.0.251, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.253 port 40390 connected with 192.168.0.251 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 6.43 GBytes 921 Mbits/sec

FROM KVM Guest (0.118) (Win7PROx64_virtio) TO KVM Guest (0.210) (WinSBS2011 standard_virtio)
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[432] local 192.168.0.210 port 5001 connected with 192.168.0.118 port 51707
[ ID] Interval Transfer Bandwidth
[432] 0.0-59.9 sec 1.72 GBytes 247 Mbits/sec

FROM KVM Guest (0.210) (WinSBS2011 standard_virtio) TO KVM Guest (0.118) (Win7PROx64_virtio)
------------------------------------------------------------
Client connecting to 192.168.0.118, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[172] local 192.168.0.210 port 32993 connected with 192.168.0.118 port 5001
[ ID] Interval Transfer Bandwidth
[172] 0.0-60.0 sec 1.73 GBytes 248 Mbits/sec

Also attached: disk performance of the KVM guest (SBS 2011) on a MegaRAID RAID5 LVM volume (Screen Shot 2012-03-03 at 11.14.02 AM.png).

I would like to know if these results are decent/expected, or whether I can tune them further. Thanks a lot.


pveversion -v
pve-manager: 2.0-37 (pve-manager/2.0/d6e2622b)
running kernel: 2.6.32-7-pve
proxmox-ve-2.6.32: 2.0-60
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-1
pve-cluster: 1.0-23
qemu-server: 2.0-23
pve-firmware: 1.0-15
libpve-common-perl: 1.0-15
libpve-access-control: 1.0-16
libpve-storage-perl: 2.0-12
vncterm: 1.0-2
vzctl: 3.0.30-2pve1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-5
ksm-control-daemon: 1.1-1
 
Hi, 250 Mbit/s seems very low.

Could you try iperf with the

-w 200k

parameter (on both client and server)? This will increase the window size.
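
For example (a sketch using the guest and NAS IPs from your results; -t 60 just lengthens the run):

Code:
rem on the receiving side (here the Windows guest)
iperf -s -w 200k

# on the sending side (here the Debian host)
iperf -c 192.168.0.210 -w 200k -t 60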


Also, could you provide the following from your host:
# What is your network card model?

# lsmod|grep vhost_net

# ethtool -k eth0 (apt-get install ethtool; it would be interesting to compare the results with Proxmox 1.9).
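
And if it's not too much trouble, a rough check of whether the running guest actually has vhost enabled (this just greps the kvm command line for vhost=on; it assumes the guest process is named kvm):

Code:
# split the kvm command line on commas and look for vhost=on
ps -ww -C kvm -o args= | tr ',' '\n' | grep vhost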





I'll run some tests on my side with the latest virtio driver.
 
Hello, here are the tests. My Proxmox is on a Fujitsu server with four NICs: one is used for iRMC (remote management, like HP iLO), one is on the firewall DMZ for remote connections from outside and is configured as vmbr0, and the other two are bonded and configured as vmbr1 for the guests.
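
For reference, that bond/bridge layout (eth1 + eth2 -> bond0 -> vmbr1, eth0 -> vmbr0) corresponds to an /etc/network/interfaces roughly like the sketch below; the bond mode and miimon values are assumptions and the gateway line is omitted:

Code:
auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 172.16.33.253
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.0.253
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0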

KVM Guest to KVM Guest with window size 200k:

FROM KVM Guest (0.118) (Win7PROx64_virtio) TO KVM Guest (0.210) (WinSBS2011 standard_virtio)

C:\Windows\system32>iperf -c 192.168.0.210 -w 200k
------------------------------------------------------------
Client connecting to 192.168.0.210, TCP port 5001
TCP window size: 200 KByte
------------------------------------------------------------
[148] local 192.168.0.118 port 49573 connected with 192.168.0.210 port 5001
[ ID] Interval Transfer Bandwidth
[148] 0.0-10.0 sec 807 MBytes 676 Mbits/sec

FROM KVM Guest (0.210) (WinSBS2011 standard_virtio) TO KVM Guest (0.118) (Win7PROx64_virtio)

C:\Windows\system32>iperf -c 192.168.0.118 -w 200k
------------------------------------------------------------
Client connecting to 192.168.0.118, TCP port 5001
TCP window size: 200 KByte
------------------------------------------------------------
[172] local 192.168.0.210 port 50770 connected with 192.168.0.118 port 5001
[ ID] Interval Transfer Bandwidth
[172] 0.0-10.0 sec 629 MBytes 527 Mbits/sec


root@proxmox:~# ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
ntuple-filters: off
receive-hashing: on

root@proxmox:~# ethtool -k eth1
Offload parameters for eth1:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
ntuple-filters: off
receive-hashing: on

root@proxmox:~# ethtool -k eth2
Offload parameters for eth2:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
ntuple-filters: off
receive-hashing: on

root@proxmox:~# lsmod|grep vhost_net
vhost_net 31432 0
macvtap 8956 1 vhost_net
tun 19013 5 vhost_net

root@proxmox:~# mii-tool
SIOCGMIIREG on eth0 failed: Input/output error
SIOCGMIIREG on eth0 failed: Input/output error
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 1000baseT-FD flow-control, link ok
eth2: negotiated 1000baseT-FD flow-control, link ok


root@proxmox:~# ifconfig
bond0 Link encap:Ethernet HWaddr 00:19:99:b4:e9:9d
inet6 addr: fe80::219:99ff:feb4:e99d/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:34960089 errors:0 dropped:0 overruns:0 frame:0
TX packets:25342848 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:29735107739 (27.6 GiB) TX bytes:33784598801 (31.4 GiB)

eth0 Link encap:Ethernet HWaddr 00:19:99:b7:60:e0
inet6 addr: fe80::219:99ff:feb7:60e0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2688 errors:0 dropped:0 overruns:0 frame:0
TX packets:868656 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1749014 (1.6 MiB) TX bytes:424800558 (405.1 MiB)
Interrupt:16 Memory:ce320000-ce340000

eth1 Link encap:Ethernet HWaddr 00:19:99:b4:e9:9d
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:21280286 errors:0 dropped:0 overruns:0 frame:0
TX packets:12668512 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:20226621831 (18.8 GiB) TX bytes:16889087384 (15.7 GiB)

eth2 Link encap:Ethernet HWaddr 00:19:99:b4:e9:9d
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:13679803 errors:0 dropped:0 overruns:0 frame:0
TX packets:12674336 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9508485908 (8.8 GiB) TX bytes:16895511417 (15.7 GiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:892052 errors:0 dropped:0 overruns:0 frame:0
TX packets:892052 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:153719707 (146.5 MiB) TX bytes:153719707 (146.5 MiB)

tap100i0 Link encap:Ethernet HWaddr 8a:9b:6d:1e:22:31
inet6 addr: fe80::889b:6dff:fe1e:2231/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:3115918 errors:0 dropped:0 overruns:0 frame:0
TX packets:13988198 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:12772897529 (11.8 GiB) TX bytes:15969994612 (14.8 GiB)

tap101i0 Link encap:Ethernet HWaddr 6a:43:cf:f5:6b:f7
inet6 addr: fe80::6843:cfff:fef5:6bf7/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:353215 errors:0 dropped:0 overruns:0 frame:0
TX packets:943067 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:866307410 (826.1 MiB) TX bytes:1369406667 (1.2 GiB)

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::1/128 Scope:Link
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

vmbr0 Link encap:Ethernet HWaddr 00:19:99:b7:60:e0
inet addr:172.16.33.253 Bcast:172.16.33.255 Mask:255.255.255.0
inet6 addr: fe80::219:99ff:feb7:60e0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2686 errors:0 dropped:0 overruns:0 frame:0
TX packets:731908 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1711278 (1.6 MiB) TX bytes:415741097 (396.4 MiB)

vmbr1 Link encap:Ethernet HWaddr 00:19:99:b4:e9:9d
inet addr:192.168.0.253 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::219:99ff:feb4:e99d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8764656 errors:0 dropped:0 overruns:0 frame:0
TX packets:6441047 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11291722603 (10.5 GiB) TX bytes:14411410933 (13.4 GiB)

root@proxmox:~# brctl show
bridge name bridge id STP enabled interfaces
vmbr0 8000.001999b760e0 no eth0
vmbr1 8000.001999b4e99d no bond0
tap100i0
tap101i0
 
I think we have almost the same results.
Is it really faster on Proxmox 1.9? (With the same virtio driver version?)

I know that last year, with the old virtio driver, I had around 900 Mbit/s, but also a lot of random network hangs.
If I remember correctly, Red Hat added some security fixes, but they slowed down the driver.

I see some new commits in git about a TCP/UDP checksum offload bug in Win7:

https://github.com/YanVugenfirer/kv...mmit/2a84ea561924083ee7382677b082857aa2327020

Maybe try to disable offload for the moment (in your guest's network card properties).
 
OK, thanks a lot. I meant that I was using virtio-net and it was stable on 1.9, not that it was faster. I appreciate your help.
 
