Win7/64bit gig NIC performance - VirtIO and e1000

fortechitsolutions

Hi all, quick 'fun' question. I've got a stock-install ProxVE server (version 1.8, kernel version "2.6.32-4-pve") with a Win7 VM installed (initially with a VirtIO Ethernet NIC and a paravirt HDD controller).

Only issue: I gave it some serious traffic today and the gig-ether NIC performance is absolutely terrible. The best I can get is ~1.5Mb/sec throughput if I'm lucky.

This is via the local gig-ether network, with nice bnx NICs under the hardware hood of the physical hosts (Dell servers), and I can easily get ~50-75 Mb/sec between my physical servers (ie, ProxVE to ProxVE direct / Linux scp).
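For anyone wanting to reproduce that host-to-host baseline, it's nothing fancier than pushing a throwaway file over scp (file size and hostname here are just examples):

Code:
dd if=/dev/zero of=/tmp/testfile bs=1M count=500   # create a ~500MB test file
scp /tmp/testfile root@other-proxve-host:/tmp/     # scp's progress meter shows the rate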

I tried changing over to the e1000 driver in the hope this might be better, but no joy. I've already made the one small tune to the 'net' command-line parameters recommended per the Proxmox / KVM NIC tuning docs, but to no avail.
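For reference, the changeover is just the NIC model on the interface line in the VM config file; on my host it looks something like the following (the VMID and MAC here are examples):

Code:
# /etc/qemu-server/101.conf  (101 = example VMID)
# was: vlan0: virtio=DE:AD:BE:EF:12:34
vlan0: e1000=DE:AD:BE:EF:12:34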

Is there any known workaround to get decent performance (gig performance...) even remotely close to the capacity of the underlying hardware for Win7 / 2008 VMs on KVM these days?!

(I note there is a thread sort-of on this topic currently, but the fix appeared to involve using a non-free-to-distribute, Red Hat-supported ISO of the latest KVM VirtIO device driver - ie, not really a fix...)

Thanks,

Tim
 
A small footnote on this: in case it might have helped, I just updated this ProxVE host to the 2.6.35 kernel and repeated my tests of the e1000 and VirtIO NICs. There is no apparent change, ie, performance is still horrid. Sigh.
 
Now that's interesting - I'll also give them a try. (I have a similar problem.)
 
Try the virtio NIC drivers here:
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-mm34.iso

They are more recent than the official Red Hat 1.2.0.

They correct a freeze bug with the NIC on Win2008/Seven.
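To get the ISO onto the ProxVE host, something like this (the path is the standard ProxVE ISO directory); then attach it to the VM as a CD-ROM and update the driver from Device Manager:

Code:
wget -P /var/lib/vz/template/iso/ \
  http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-mm34.iso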

How did you bench your network?

I benchmarked with iperf between 2 Win2003 VMs: 800-900 Mbit/s with the virtio NIC.

I haven't benched Win2008 yet. I'll try today if you want, and post some results.

Hi, thanks for the reply; sorry for the ambiguity in my post. I hadn't actually used the Red Hat 1.2 virtio driver - I was using exactly the ISO you linked to, since that is the link provided in the ProxVE wiki on the process. Note that I don't have any problems with the 'freeze bug' - things run very smoothly, just slowly. My 'baseline' quick bench is just to run WinSCP and push a file to, and then pull it back from, a neighbouring ProxVE host.

Last night I was tweaking various parameters in the VirtIO NIC device config (buffer sizes, hardware offload settings, etc.); performance ranged from ~700Kb/s to ~2300Kb/s. It wasn't simply a matter of 'turn it all on / dial it all up' to get better throughput. I ended up deleting and reinstalling the NICs to roll back this tweaking, since it only had a modest impact.
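For anyone following along, some of those offload knobs can also be flipped globally from an elevated command prompt, rather than per-device in Device Manager - these are stock Windows netsh settings, not VirtIO-specific:

Code:
:: disable TCP chimney offload (global)
netsh int tcp set global chimney=disabled
:: disable IP task offload (global)
netsh int ip set global taskoffload=disabled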

For reference, I did an scp on the command line between the same underlying physical hosts (ie, from the ProxVE server hosting the Win7 VM to the same second ProxVE box) - and I easily get 25 Mb/s. FTP is faster, but I'm just using scp for basic throughput light-bulb testing. As you say, there are more robust tests, but this is just basic throughput testing (and it is consistently, fairly terrible).

Just for fun / as an 'external baseline ref', I removed the VirtIO NIC and tested a virtual Realtek NIC in the Win7 VM, with similar performance. (That is still bad performance even for a 100Mb NIC.) The fact that every NIC tested is currently giving this level of performance is disturbing.

I've used Win2003 / WinXP VMs on ProxVE with Realtek, e1000, or VirtIO NICs - with much better baseline performance (ie, more in keeping with your experience). Hence my 'upset' with the current behaviour.

If you have any comments about 'sane tuning' I should be looking at, I'm certainly interested.

Many thanks,

Tim
 

Hi Tom, thanks for the post. I had looked at the 'secret tuning' link, as per the URL: http://www.linux-kvm.org/page/WindowsGuestDrivers/kvmnet/registry

Most of the changes are for Win2003/WinXP; and then it says "...the above tweaks are not relevant for Vista, Win7, Win2008...".

However - you are right - there are a couple of things in the bottom half of the page which are still suitable for Win7, so I'll do those and see what happens. Will post a followup shortly.


Tim
 
A footnote on this: I just made three changes, as per the URL http://www.linux-kvm.org/page/WindowsGuestDrivers/kvmnet/registry - which I believe is all there is for a Win7 system:

Code:
netsh int tcp set heuristics disabled
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global congestionprovider=ctcp
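To confirm the changes stuck, a quick check with the stock netsh query:

Code:
netsh int tcp show global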

I'm not seeing any significant change. The transfer rate started at ~900Kb/s, ramped up to ~2400Kb/s briefly, and then dropped back down again. The average was ~1200Kb/s, so it took ~45 seconds to move a ~75Mb file via WinSCP across the VirtIO interface to a nearby physical host.

Sigh.



 
WinSCP is not a benchmark tool; I suggest you use iperf.
 
Hi, indeed - sounds good.

(I thought I just posted a reply on this thread, but it is gone - so I will redo it.)

Interesting footnote - consistent with your advice not to use WinSCP as a 'benchmark'.

I gave up on testing this for a bit and proceeded with the install/config of services. I then observed a transfer from another local system (an OpenVZ-based VM on another ProxVE host) via FTP to this Win7 VM with throughput pushing ~35Mb/s - so, very nice performance. I re-did the WinSCP-based test between the same Win7 VM and the same OpenVZ VM and got ~1Mb/sec - quite terrible. Fortunately the WinSCP case is irrelevant, and since I have good performance where it counts, I'm good.

I will do tests with iperf and see what they reveal; post those in a bit.

Tim
 
So, clearly I should have used iperf right from the start.

The numbers are fine. It's not clear why WinSCP performance is horrid here, but that is unrelated and of far less concern.

Basic test below:

iperf as server on the Win7 host
iperf as client on an OpenVZ VM on a nearby physical host (vanilla gig-ether bnx physical interfaces, non-jumbo stock setup)

Code:
[root@tcmaster src]# ./iperf -c 192.168.15.105
------------------------------------------------------------
Client connecting to 192.168.15.105, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.15.102 port 56645 connected with 192.168.15.105 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   564 MBytes   473 Mbits/sec
[root@tcmaster src]#
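If anyone wants to dig further, the usual iperf knobs are a bigger TCP window and parallel streams - standard iperf options, same server IP as above:

Code:
./iperf -c 192.168.15.105 -w 256K -t 30   # larger TCP window, 30-second run
./iperf -c 192.168.15.105 -P 4            # four parallel streams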
 
The iperf results look as expected.