How to properly tune a 10GbE card for best rates to an iSCSI SAN

OldSunGuy

New Member
Mar 23, 2012
Hi All,
Sorry if this is more of a straight Linux networking issue, but I am hoping to resolve this under Proxmox 2 so we can use it as our VM server.
The setup is a Proxmox 2 server whose storage will be iSCSI SAN based (Enhance 3160TG, connected over 10GbE fiber). Based on dd write/read tests plus iperf and iozone results, we have concluded that there is an asymmetric speed difference between Proxmox and the SAN. (Unfortunately, the Enhance SAN provides only a closed shell, so no tuning is possible on that side; even the MTU is limited to a maximum of 3500, so we leave jumbo frames off.) All Ethernet connections are on private networks with direct fiber runs; no switches are used.

So, to better understand what is happening, we set up a second 10GbE link, this time over copper: Intel X520-T2 dual-port cards with standard Cat 6 cables purchased from a vendor (the cards support auto-crossover, so no crossover cable is needed). Here we run Proxmox against a Windows 2008 server, because Enhance publishes 10GbE benchmark numbers only for iSCSI to Windows.

When we run iperf, we get asymmetric results, as follows:
Run  Server   Client   Rate    Window Size  Write Length
1    Linux    Windows  1 Gb/s  default      default
2    Windows  Linux    4 Gb/s  default      default
3    Linux    Windows  4 Gb/s  256K         256K
4    Windows  Linux    7 Gb/s  256K         256K
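For reference, the runs above were invoked roughly as follows (hostnames are placeholders; -w sets the TCP window and -l the read/write buffer length):

```shell
# Run 3: Linux as iperf server, Windows as client, 256K window/length.
# On the Linux box:
iperf -s -w 256K
# On the Windows box (<linux-host> is a placeholder for the Linux IP):
iperf -c <linux-host> -w 256K -l 256K

# Run 4 is the same with the roles reversed (server on Windows, client on Linux).
# Runs 1 and 2 omit -w and -l entirely, i.e. iperf defaults.
```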


My questions are as follows:
What is the (TCP) window size doing here in iperf to improve performance?
Is this behavior related to what I might be seeing in the Proxmox-to-SAN testing?
Is there a way to compensate via Proxmox/Linux NIC tuning to maximize throughput over iSCSI to the SAN for my Proxmox VMs' disks?
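On the first question, my current understanding is that the window caps the amount of unacknowledged data in flight, so at 10 Gb/s even a tiny round-trip time needs a large window to keep the pipe full. A quick bandwidth-delay-product sanity check (the RTT here is just my guess for a direct cable, not a measured value):

```shell
# Bandwidth-delay product: the TCP window must cover rate * RTT,
# otherwise the sender stalls waiting for ACKs.
# Assumed numbers: 10 Gb/s link, 0.2 ms RTT (a guess for a direct run).
bits_per_sec=10000000000
rtt_sec=0.0002
# BDP in KB = rate * RTT / 8 bits-per-byte / 1024
awk -v r="$bits_per_sec" -v t="$rtt_sec" \
    'BEGIN { printf "BDP: %.0f KB\n", r*t/8/1024 }'
# Prints: BDP: 244 KB -- which would explain why a 256K window helps
# so much over a small default window.
```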

Thanks for any insight to help me understand this performance asymmetry.
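P.S. For context, the kind of host-side tuning I have in mind is along these lines. These are generic 10GbE starting points copied from tuning guides, not values I have validated on this hardware:

```shell
# Candidate sysctl settings to enlarge TCP buffers on the Proxmox host
# (example values only; run as root, and persist in /etc/sysctl.conf if kept).
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```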