Network Performance Issues with a 10Gbps Bridge

Krustaka

New Member
Aug 27, 2013
Hi all,

I have a strange network speed problem on one of my servers and will try to explain it here.

1. Servers Involved :
ProxMox2 (PM2) - ProxMox hypervisor with 2 physical network adapters - 1Gbps (xx.xx.2.240) and 10Gbps (xx.xx.75.240)
Physical Server (PS1) - Windows 2008R2 with 2 physical network adapters - 1Gbps (xx.xx.2.249) and 10Gbps (xx.xx.75.4)
Virtual Machine (VM1) hosted by ProxMox2 - virtual Windows 2008R2 with 2 virtual (virtio) adapters bridged to the adapters of ProxMox2 - 1Gbps (xx.xx.2.221) and 10Gbps (xx.xx.75.221)

2. Network Setup -
All servers are connected via the 1Gbps network
All servers are also connected via the 10Gbps Network

No direct physical connection between the 10Gbps and the 1Gbps network - they are completely isolated

3. Problem - Network throughput between VM1 and PS1 over the 1Gbps network is ~1Gbps, but over the 10Gbps network it is only around 100-250Mbps.

Now to give you more details on the setup:

Code:
ProxMox Machine revisions
root@MSDA-ProxMox2:~# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

Code:
root@MSDA-ProxMox2:~# ifconfig

eth0      Link encap:Ethernet  HWaddr ac:16:2d:8d:46:d4
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:31339668 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12623287 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6723462539 (6.2 GiB)  TX bytes:2865776233 (2.6 GiB)
          Interrupt:32

eth4      Link encap:Ethernet  HWaddr 38:ea:a7:d3:4c:88
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2999671 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39935274 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:219448245 (209.2 MiB)  TX bytes:57889742656 (53.9 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:15145 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15145 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:15354906 (14.6 MiB)  TX bytes:15354906 (14.6 MiB)

tap302i0  Link encap:Ethernet  HWaddr 4e:dc:f3:49:de:bd
          inet6 addr: fe80::4cdc:f3ff:fe49:debd/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:14139 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18399564 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:3715078 (3.5 MiB)  TX bytes:4345100642 (4.0 GiB)

tap302i1  Link encap:Ethernet  HWaddr f2:f6:a6:12:54:a7
          inet6 addr: fe80::f0f6:a6ff:fe12:54a7/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:87200 errors:0 dropped:0 overruns:0 frame:0
          TX packets:164703 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:816563820 (778.7 MiB)  TX bytes:32181455 (30.6 MiB)

tap812i0  Link encap:Ethernet  HWaddr 1e:de:14:7e:87:29
          inet6 addr: fe80::1cde:14ff:fe7e:8729/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:4156127 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21880619 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:810152250 (772.6 MiB)  TX bytes:4859476918 (4.5 GiB)

tap814i0  Link encap:Ethernet  HWaddr 22:04:83:4a:34:05
          inet6 addr: fe80::2004:83ff:fe4a:3405/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:3066384 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21302049 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:700230678 (667.7 MiB)  TX bytes:4792241158 (4.4 GiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vmbr0     Link encap:Ethernet  HWaddr ac:16:2d:8d:46:d4
          inet addr:10.31.2.240  Bcast:10.31.3.255  Mask:255.255.252.0
          inet6 addr: fe80::ae16:2dff:fe8d:46d4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9742248 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6414574 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1419740613 (1.3 GiB)  TX bytes:1389278207 (1.2 GiB)

vmbr10    Link encap:Ethernet  HWaddr 38:ea:a7:d3:4c:88
          inet addr:10.31.75.240  Bcast:10.31.75.255  Mask:255.255.255.0
          inet6 addr: fe80::3aea:a7ff:fed3:4c88/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2843539 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2618277 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:148555903 (141.6 MiB)  TX bytes:57054269848 (53.1 GiB)



Code:
ProxMox2 to the Physical Server using the 1Gbps - OK :

root@MSDA-ProxMox2:~# iperf -c 10.31.2.249 -P 8
------------------------------------------------------------
Client connecting to 10.31.2.249, TCP port 5001
TCP window size: 23.8 KByte (default)
------------------------------------------------------------
[  8] local 10.31.2.240 port 54518 connected with 10.31.2.249 port 5001
[  9] local 10.31.2.240 port 54517 connected with 10.31.2.249 port 5001
[  4] local 10.31.2.240 port 54516 connected with 10.31.2.249 port 5001
[  3] local 10.31.2.240 port 54512 connected with 10.31.2.249 port 5001
[  5] local 10.31.2.240 port 54513 connected with 10.31.2.249 port 5001
[  6] local 10.31.2.240 port 54514 connected with 10.31.2.249 port 5001
[  7] local 10.31.2.240 port 54515 connected with 10.31.2.249 port 5001
[ 10] local 10.31.2.240 port 54519 connected with 10.31.2.249 port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0- 7.0 sec  98.0 MBytes   117 Mbits/sec
[  4]  0.0- 8.0 sec   110 MBytes   116 Mbits/sec
[  5]  0.0- 8.0 sec   112 MBytes   118 Mbits/sec
[  6]  0.0- 8.0 sec   132 MBytes   139 Mbits/sec
[  3]  0.0- 8.0 sec   114 MBytes   119 Mbits/sec
[  7]  0.0- 8.0 sec   103 MBytes   108 Mbits/sec
[ 10]  0.0-10.0 sec   236 MBytes   198 Mbits/sec
[  8]  0.0-10.0 sec   216 MBytes   182 Mbits/sec
[SUM]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

Code:
ProxMox2 to the Physical server using 10Gbps - OK
root@MSDA-ProxMox2:~# iperf -c 10.31.75.4 -P 8
------------------------------------------------------------
Client connecting to 10.31.75.4, TCP port 5001
TCP window size: 23.8 KByte (default)
------------------------------------------------------------
[  9] local 10.31.75.240 port 54551 connected with 10.31.75.4 port 5001
[  5] local 10.31.75.240 port 54545 connected with 10.31.75.4 port 5001
[  3] local 10.31.75.240 port 54544 connected with 10.31.75.4 port 5001
[  8] local 10.31.75.240 port 54549 connected with 10.31.75.4 port 5001
[  6] local 10.31.75.240 port 54547 connected with 10.31.75.4 port 5001
[  7] local 10.31.75.240 port 54548 connected with 10.31.75.4 port 5001
[ 10] local 10.31.75.240 port 54550 connected with 10.31.75.4 port 5001
[  4] local 10.31.75.240 port 54546 connected with 10.31.75.4 port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0-10.0 sec  1.32 GBytes  1.13 Gbits/sec
[  5]  0.0-10.0 sec  1.32 GBytes  1.14 Gbits/sec
[  3]  0.0-10.0 sec  1.39 GBytes  1.20 Gbits/sec
[  8]  0.0-10.0 sec  1.36 GBytes  1.17 Gbits/sec
[  6]  0.0-10.0 sec  1.39 GBytes  1.19 Gbits/sec
[  7]  0.0-10.0 sec  1.39 GBytes  1.20 Gbits/sec
[ 10]  0.0-10.0 sec  1.39 GBytes  1.19 Gbits/sec
[  4]  0.0-10.0 sec  1.36 GBytes  1.17 Gbits/sec
[SUM]  0.0-10.0 sec  10.9 GBytes  9.38 Gbits/sec

Code:
VM using the 1Gbps Interface - OK 
D:\iperf-2.0.5-2-win32>iperf.exe -c 10.31.2.249 -P 8
------------------------------------------------------------
Client connecting to 10.31.2.249, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 10.31.2.221 port 49999 connected with 10.31.2.249 port 5001
[  6] local 10.31.2.221 port 50002 connected with 10.31.2.249 port 5001
[ 10] local 10.31.2.221 port 50006 connected with 10.31.2.249 port 5001
[  9] local 10.31.2.221 port 50005 connected with 10.31.2.249 port 5001
[  8] local 10.31.2.221 port 50004 connected with 10.31.2.249 port 5001
[  7] local 10.31.2.221 port 50003 connected with 10.31.2.249 port 5001
[  5] local 10.31.2.221 port 50001 connected with 10.31.2.249 port 5001
[  4] local 10.31.2.221 port 50000 connected with 10.31.2.249 port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0- 7.0 sec   111 MBytes   132 Mbits/sec
[  5]  0.0- 7.0 sec   103 MBytes   123 Mbits/sec
[  4]  0.0- 7.0 sec  96.5 MBytes   115 Mbits/sec
[  6]  0.0- 7.1 sec  94.6 MBytes   112 Mbits/sec
[ 10]  0.0- 7.0 sec  98.9 MBytes   118 Mbits/sec
[  7]  0.0- 7.1 sec   101 MBytes   120 Mbits/sec
[  8]  0.0- 7.2 sec   100 MBytes   117 Mbits/sec
[  3]  0.0-10.0 sec   314 MBytes   263 Mbits/sec
[SUM]  0.0-10.0 sec  1018 MBytes   853 Mbits/sec

Code:
VM using the 10Gbps Interface - BAD
D:\iperf-2.0.5-2-win32>iperf.exe -c 10.31.75.4 -P 8
------------------------------------------------------------
Client connecting to 10.31.75.4, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 10] local 10.31.75.221 port 50014 connected with 10.31.75.4 port 5001
[  4] local 10.31.75.221 port 50008 connected with 10.31.75.4 port 5001
[  9] local 10.31.75.221 port 50013 connected with 10.31.75.4 port 5001
[  7] local 10.31.75.221 port 50011 connected with 10.31.75.4 port 5001
[  3] local 10.31.75.221 port 50007 connected with 10.31.75.4 port 5001
[  8] local 10.31.75.221 port 50012 connected with 10.31.75.4 port 5001
[  6] local 10.31.75.221 port 50010 connected with 10.31.75.4 port 5001
[  5] local 10.31.75.221 port 50009 connected with 10.31.75.4 port 5001
[ ID] Interval       Transfer     Bandwidth
[  8]  0.0-10.1 sec  21.2 MBytes  17.7 Mbits/sec
[  3]  0.0-10.1 sec  40.6 MBytes  33.8 Mbits/sec
[  5]  0.0-10.1 sec  26.1 MBytes  21.7 Mbits/sec
[ 10]  0.0-10.1 sec  24.1 MBytes  20.0 Mbits/sec
[  9]  0.0-10.1 sec  30.0 MBytes  24.9 Mbits/sec
[  4]  0.0-10.2 sec  25.0 MBytes  20.6 Mbits/sec
[  7]  0.0-10.2 sec  17.8 MBytes  14.7 Mbits/sec
[  6]  0.0-10.2 sec  31.0 MBytes  25.6 Mbits/sec
[SUM]  0.0-10.2 sec   216 MBytes   178 Mbits/sec

tracert 10.31.75.4

 Tracing route to TEST-SERVER [10.31.75.4]
over a maximum of 30 hops:

   1    <1 ms    <1 ms    <1 ms  TEST-SERVER [10.31.75.4]

Well, I guess there is something wrong with my bridge configuration, but it really looks normal to me (see the attached pm2Bridges.png and vm1network.png).
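Since the screenshots may not show every detail, the bridge configuration can also be dumped in text form on the host. The stanza below is only an illustrative sketch of what a typical Proxmox bridge definition looks like (the addresses match the ifconfig output above, but the actual options on this host may differ):

Code:
root@MSDA-ProxMox2:~# cat /etc/network/interfaces   # vmbr0/vmbr10 definitions
root@MSDA-ProxMox2:~# brctl show                    # which ports are enslaved to each bridge

# A typical stanza for the 10Gbps bridge would look roughly like this (illustrative only):
auto vmbr10
iface vmbr10 inet static
        address 10.31.75.240
        netmask 255.255.255.0
        bridge_ports eth4
        bridge_stp off
        bridge_fd 0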

Help anyone?

PS. Sorry for the really long post... :/
 

Hi,
thanks for the help so far!

I just checked that: LRO is off on the interfaces and on the bridges as well.

Well-working bridge (1Gbps):
Code:
root@MSDA-ProxMox2:~# ethtool -k eth0
Features for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: off
root@MSDA-ProxMox2:~# ethtool -k vmbr0
Features for vmbr0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: off
tx-vlan-offload: off
ntuple-filters: off
receive-hashing: off

Badly working bridge (10Gbps):
Code:
root@MSDA-ProxMox2:~# ethtool -k eth4
Features for eth4:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
root@MSDA-ProxMox2:~# ethtool -k vmbr10
Features for vmbr10:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: off
tx-vlan-offload: off
ntuple-filters: off
receive-hashing: off

The only difference I see between eth0 and eth4 is that receive-hashing is on for eth4. However, when I try to change it, it says:
Code:
root@MSDA-ProxMox2:~# ethtool -K eth4 rxhash off
Cannot set device flag settings: Operation not permitted

I should probably clarify that the 1Gbps bridge was created automatically during the OS setup, while the 10Gbps bridge I created myself via the ProxMox web interface.
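For comparison, the other offload settings on the 10Gbps path can be toggled one at a time as a diagnostic (a sketch only; whether each toggle is actually permitted depends on the Emulex driver, as the rxhash attempt above shows):

Code:
# Compare the NIC and bridge feature sets side by side:
ethtool -k eth4
ethtool -k vmbr10

# Temporarily toggle individual offloads on the 10G NIC and re-run iperf after each change:
ethtool -K eth4 gro off
ethtool -K eth4 tso off
ethtool -K eth4 gso off
# (revert with "on" afterwards)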
 
Can you test with kernel 3.10 to see if it's a kernel bug?
Hi Spirit,

which one do you call 3.10? I currently have this:
ProxMox Machine revisions:
Code:
root@MSDA-ProxMox2:~# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109

Isn't this 3.1 already?
 

No, that is pve-kernel-2.6.32-23-pve (kernel 2.6.32).


#apt-get install pve-kernel-3.10.0-2-pve

(openvz is not yet available in 3.10)
 
Sorry, I am getting a bit confused here. You are proposing that I use pve-kernel-3.10.0-2-pve. However, looking at https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.2, the latest PVE version lists pve-kernel-2.6.32-27-pve: 2.6.32-121.

How safe is it to go to that kernel?


It's for testing only at the moment (it's the kernel from RHEL7 RC1), but it's already stable in my tests.

You can install both kernels and switch between them from /boot/grub/grub.conf if you want.
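After installing the 3.10 kernel and rebooting into it, the switch can be verified like this (a quick sketch):

Code:
# List the installed pve kernels and check which one is currently running:
dpkg -l 'pve-kernel*'
uname -r        # should report 3.10.0-2-pve after booting the new kernel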
 
Hi all,

Thanks for the help so far. I have definitely been struggling with this problem for more than a week now, and since I can't connect my VMs to the 10Gbps storage network I am completely stuck. So my next step would be to reinstall the whole hypervisor and start playing with different kernels. So far I have tried all possible non-destructive tweaks, but the 10Gbps connection is limited to no more than 250Mbps. There is a good chance the driver or the 10G card is not well suited for virtualization, but before going there I wanted to ask you something that I just took for granted:


Do you know of anyone who has successfully configured and used a 10Gbps network in a ProxMox KVM guest with normal 10G performance?
 
Hi,
nearly half ;-)
Code:
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-10.0 sec   766 MBytes   643 Mbits/sec
[ 10]  0.0-10.0 sec   534 MBytes   448 Mbits/sec
[  9]  0.0-10.0 sec   923 MBytes   774 Mbits/sec
[  4]  0.0-10.0 sec   542 MBytes   455 Mbits/sec
[  5]  0.0-10.0 sec   807 MBytes   677 Mbits/sec
[  7]  0.0-10.0 sec  1.19 GBytes  1.02 Gbits/sec
[  8]  0.0-10.0 sec  1.03 GBytes   888 Mbits/sec
[  3]  0.0-10.0 sec   904 MBytes   758 Mbits/sec
[SUM]  0.0-10.0 sec  6.60 GBytes  5.66 Gbits/sec
between two Linux VMs on two different PVE hosts (10Gb NIC with VLAN tagging).

Test with a Linux VM (virtio) instead of Windows first.

What kind of NIC and driver do you use? In my case it's an Intel NIC on both servers:
Code:
lspci
...
03:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
Udo
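For reference, the exact driver and firmware version of the 10G card can be checked on the host with ethtool (a sketch, assuming the interface name eth4 from the output above):

Code:
lspci | grep -i ethernet    # card model
ethtool -i eth4             # driver name, driver version, firmware version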
 
Hi guys, regarding Windows: the virtio driver is really slower than the Linux virtio driver.

I can reach around 1.5 Gbit/s in/out on Win2008R2, but 1.5 Gbit/s in / 5 Gbit/s out on Win2012R2 (with TSO enabled inside the VM, and host kernel 3.10), and around 9 Gbit/s on a Linux guest.

On my Linux guest test the bottleneck is the CPU (1 core at 100%), because by default with vhost-net we can't use more than one core per NIC.
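To see whether the same single-core vhost-net bottleneck applies here, the vhost kernel threads' CPU usage can be watched on the host while iperf runs inside the guest (a diagnostic sketch):

Code:
# On the Proxmox host, while the guest is sending/receiving traffic:
top -b -n 1 -H | grep vhost    # a single vhost-<qemu-pid> thread pegged near 100% indicates the CPU limit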
 
Hi all,

thanks for all the help so far!

I did several more tests to confirm the situation:
1. Speed between VM1 and PM2 - 10Gbps; speed between PM2 and PS1 - 10Gbps; speed between VM1 and PS1 - 100-250Mbps. Obviously there is something wrong with the bridge, or maybe with the combination of the bridge and the Windows virtio driver.
2. I also tried the Intel E1000 driver on VM1 (the NIC model switch is sketched below, after the card info) - it works a bit faster, but is still very slow - about 250Mbps.
3. Updated the virtio driver of the VM to the latest version - 1.74.
4. Tried exactly the same setup with a Linux VM (VM2) - no problems at all: all connections run at 10Gbps, and more importantly with lower latency and faster single-connection speed.

My problem is that I need it for an MS SQL Server, so Linux does not really solve the problem... Any ideas for driver alternatives?

The card is
Code:
 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (rev 02)
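For reference, the NIC model test from point 2 above can be done from the host CLI with qm set, switching the device between virtio and e1000 (a sketch only; the VMID 302 and net1/vmbr10 are assumptions based on the tap interfaces shown earlier - adjust to the real VM ID and network device):

Code:
# Switch the 10Gbps NIC of the VM to e1000, then back to virtio:
qm set 302 -net1 e1000,bridge=vmbr10
qm set 302 -net1 virtio,bridge=vmbr10
# The VM needs a full shutdown and start for the new model to take effect.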
 
Hi Krustaka,

Maybe it would help to change the MTU on the PVE host and in the VM (virtio-net) to 9000 (9k), or to the maximum value that your NICs and the switch can support.
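A sketch of how that could be tested on the host side (assuming eth4/vmbr10 and that the 10G switch ports already allow jumbo frames; the virtio NIC inside Windows would need its MTU raised as well):

Code:
# Temporarily raise the MTU on the physical NIC and the bridge:
ip link set dev eth4 mtu 9000
ip link set dev vmbr10 mtu 9000

# Verify that 9000-byte frames actually pass end to end (8972 = 9000 minus 28 bytes of IP/ICMP headers):
ping -M do -s 8972 10.31.75.4

# To make it persistent, add "mtu 9000" to the eth4/vmbr10 stanzas in /etc/network/interfaces.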

But if you find the solution, please let me know.
 
Tried exactly the same setup with a Linux VM (VM2) - no problems at all... My problem is that I need it for an MS SQL Server, so Linux does not really solve the problem... Any ideas for driver alternatives?

Yes, the Windows driver is really slower than the Linux one, mainly for inbound traffic.
I don't know which Windows version you use, but like I said before, I see a big improvement for inbound traffic on Windows 2012 + host kernel 3.10 (around 5 Gbit/s out), with TSO enabled in the Windows guest's virtio network card properties.

Also, iperf under Windows seems to be buggy too;
I used the netperf tool to reach these values.


I think you should try to run benchmarks

guest -> host
host -> guest

to see the performance in/out of the guest.
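With iperf, that could look like this over the 10Gbps addresses (a sketch using the IPs from this thread):

Code:
# Guest -> host: start the server on the Proxmox host ...
root@MSDA-ProxMox2:~# iperf -s
# ... and run the client inside the Windows VM:
D:\iperf-2.0.5-2-win32>iperf.exe -c 10.31.75.240 -P 8

# Host -> guest: start the server inside the Windows VM ...
D:\iperf-2.0.5-2-win32>iperf.exe -s
# ... and run the client on the Proxmox host:
root@MSDA-ProxMox2:~# iperf -c 10.31.75.221 -P 8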
 
I used the netperf tool to reach these values.

Hi Spirit (the master of masters)

Maybe NTttcp (from M$) would be better:
http://gallery.technet.microsoft.com/NTttcp-Version-528-Now-f8b12769

I think you should try to run benchmarks guest -> host and host -> guest to see the performance in/out of the guest.

Wouldn't it be best to test it like this:

Windows guest ---> bridge on a PVE host ---> 10Gb/s switch ---> bridge on another PVE host ---> Windows guest
(changing the MTU to the maximum value that all the hardware can support, including the switch, the NICs of the PVE hosts, and the virtio-net drivers of the Windows guests)?
 

Yes, sure, but just try some internal tests first (guest -> host, host -> guest) to be sure that the problem is not related to the hardware switches, but is something with the Windows virtio driver.
 
Spirit,

this test is already done above - check the 13th post.

I am now convinced the problem is in the Windows driver, but I will still try the kernel options and reinstalling the whole server. I am just waiting to get physical access to the server, as I don't want to be messing around with the OS without physical access to the machine.
 

Clearly, the Windows driver is slower than the Linux one, but you should be able to reach 1 Gbit/s without problems.
(And for outgoing traffic, Windows 2012R2 has a big improvement with LSO support; I reach around 5 Gbit/s.)

But also, iperf is really slower on Windows (because of Cygwin, I think).
Can you try to test with netperf, from host -> Windows VM and Windows VM -> host?

A netperf build for Windows is available in this ISO:
http://lmr.fedorapeople.org/winutils/winutils.iso
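With netperf, both directions can be measured from the Windows side (a sketch; TCP_STREAM sends from the client to the server, TCP_MAERTS receives):

Code:
# On the Proxmox host (netperf is in Debian's non-free repository):
apt-get install netperf
netserver

# Inside the Windows VM, using netperf.exe from the winutils.iso above:
netperf.exe -H 10.31.75.240 -t TCP_STREAM -l 30    # VM -> host throughput
netperf.exe -H 10.31.75.240 -t TCP_MAERTS -l 30    # host -> VM throughput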
 
