Unstable CWND from FreeNAS to Proxmox - Performance issue

Mujizac

Member
May 23, 2017
11
0
6
42
Hi. I have edited my topic and added more information in an additional post; hopefully that gives more to go on. Leaving my original post here:
Greetings folks. I've got a strange issue and I've been pulling my hair out:
My 10gbe performance is mostly good. If I test one of my Proxmox machines against another Proxmox machine (or, say, an Ubuntu live system) with iperf3, I consistently get 9 Gbit or better in either direction (sending or receiving). I have 3 FreeNAS (FreeBSD) boxes, and those test great between themselves as well. And lastly, if I'm sending data from Proxmox to FreeNAS, I also get great performance: 9 Gbit or better.
The problem comes when I'm sending data from FreeNAS to Proxmox: I get really jittery performance. I average anywhere between 4 and 6 Gbit. Sometimes, for one second, it will spike to 8 Gbit, but it's seriously all over the place. Most 30-second tests end up averaging 4 Gbit.

I am running the latest Proxmox (unlicensed/no-subscription) and the latest FreeNAS. My two Proxmox testers are different hardware configurations, as are my two FreeNAS boxes. I've tried a different switch; in fact, I have tried directly connecting two machines together and testing without a switch at all.

All related network cards are Intel based. On both Proxmox machines they are built onto the motherboard; on one FreeNAS box it's a PCI card.

I've tried a variety of tuning options with sysctl on both freenas and proxmox.

I'm afraid I may be at the end of my rope for finding a solution. This has been a tough one. In fact, I'm unsure what other information may be of use at the moment!

Edit: I will add that the network cards in question are not part of any Linux bridge or anything like that; they are just directly configured. There is no LACP or any bonds at the moment (I got rid of those for testing). The storage network is 100% isolated and not part of my workstation LAN.
 
* Any errors on the interfaces (both on FreeNAS and PVE)?
* Any other information you can get with ethtool -S (on PVE)?
* Try running tcpdump on both sides and look at the dump files in Wireshark - maybe you see something different between pve -> freenas and freenas -> pve (see the example commands below).
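For example, the checks could look something like this (eno1 and the capture file name are just placeholders; adjust to your interface names):
Code:
# On PVE: interface-level error and drop counters
ip -s link show eno1

# Driver-level counters - errors, drops, missed packets, pause frames
ethtool -S eno1 | grep -Ei 'err|drop|miss|pause'

# Capture a short iperf3 run on each side for comparison in Wireshark
tcpdump -i eno1 -s 96 -w pve_side.pcap port 5201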

I hope this helps!
 
This is a paste from another forum where I posted better information about my issue. I hope this helps me figure out what is going on.

Hi folks! I'm neck deep in a network performance tuning issue, and I'm hopeful that perhaps someone will have some insight. The overall goal is to optimize my 10gbe interfaces for maximum throughput, though the condition I've noticed is, I believe, what is causing such odd results.

My setups:
I have 6 different FreeNAS machines interfacing with 6 different Proxmox machines, split between two sites. The first site is in production (unfortunately) and does exhibit the problem I'm seeing (low network performance). My second site is still in development; we have not brought it online yet, so I can easily make changes on multiple systems at a whim (woohoo!). Within this second site, I have two specific machines that I am focusing on. Both started with fresh installations of the latest FreeNAS and Proxmox respectively. This is basically my test bed within the development site. On FreeNAS there is no defined storage, no NFS shares, no iSCSI. On Proxmox, there are no virtual machines, and the NICs in question do not operate as bridges; they are directly defined in the interfaces file. There is no traffic beyond the traffic that I create while testing.
The hardware:
All systems are using Intel network cards of various flavors. All of the servers themselves are on Xeon CPUs with a minimum of 32 GB of memory.
I'm using Netgear M4300-8X8F switches. **Currently not running LACP** - I have disabled this during testing. I'm only using a single switch at a time, and I have replicated the issue on two different switches. The Netgear switches are on the most current firmware as of two weeks ago. I have flow control enabled symmetrically on the switch ports.
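As a side note, the pause negotiation can be double-checked from the PVE side too (eno1 here is just a placeholder for the interface name):
Code:
# Negotiated flow-control (pause) parameters on the NIC
ethtool -a eno1

# Pause-frame counters, to see whether flow control is actually firing
ethtool -S eno1 | grep -i pause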

Some Tests:
All tests have been performed with iperf3.
First, a good test: FreeNAS running the iperf3 server (iperf3 -s), Proxmox running the client (iperf3 -c 10.200.108.65):

Connecting to host 10.200.108.65, port 5201
[ 5] local 10.200.108.45 port 44200 connected to 10.200.108.65 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.09 GBytes 9.40 Gbits/sec 0 1.28 MBytes
[ 5] 1.00-2.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.28 MBytes
[ 5] 2.00-3.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.28 MBytes
[ 5] 3.00-4.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.28 MBytes
[ 5] 4.00-5.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.28 MBytes
[ 5] 5.00-6.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.28 MBytes
[ 5] 6.00-7.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.28 MBytes
[ 5] 7.00-8.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.28 MBytes
[ 5] 8.00-9.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.28 MBytes
[ 5] 9.00-10.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.28 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.9 GBytes 9.38 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 10.9 GBytes 9.38 Gbits/sec receiver

A decent test for sure. The connection speed is stable, there were no retries and the CWND stays stable and consistent.

Now, let's try FreeNAS to FreeNAS. In this case I will use my test bed (10.200.108.65) as the client (it was the server in the first test) and a second FreeNAS machine as the server:

Connecting to host 10.200.108.15, port 5201
[ 5] local 10.200.108.65 port 44268 connected to 10.200.108.15 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 989 MBytes 8.28 Gbits/sec 0 1.62 MBytes
[ 5] 1.00-2.00 sec 1.09 GBytes 9.37 Gbits/sec 0 1.62 MBytes
[ 5] 2.00-3.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.62 MBytes
[ 5] 3.00-4.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.62 MBytes
[ 5] 4.00-5.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.62 MBytes
[ 5] 5.00-6.00 sec 1.09 GBytes 9.39 Gbits/sec 0 1.62 MBytes
[ 5] 6.00-7.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.62 MBytes
[ 5] 7.00-8.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.62 MBytes
[ 5] 8.00-9.00 sec 1.09 GBytes 9.38 Gbits/sec 0 1.62 MBytes
[ 5] 9.00-10.00 sec 1.09 GBytes 9.39 Gbits/sec 0 1.62 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.8 GBytes 9.27 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 10.8 GBytes 9.27 Gbits/sec receiver

Pretty decent as well on this test. I'm great with anything 9+ Gbits when it is this stable. No retries, consistent CWND.

And now let me break things. This will use the same FreeNAS client (10.200.108.65), and the Proxmox machine from the first test as the server (10.200.108.45):

Connecting to host 10.200.108.45, port 5201
[ 5] local 10.200.108.65 port 44270 connected to 10.200.108.45 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 557 MBytes 4.67 Gbits/sec 1 204 KBytes
[ 5] 1.00-2.00 sec 873 MBytes 7.32 Gbits/sec 0 285 KBytes
[ 5] 2.00-3.00 sec 1.01 GBytes 8.65 Gbits/sec 1 82.7 KBytes
[ 5] 3.00-4.00 sec 809 MBytes 6.78 Gbits/sec 0 262 KBytes
[ 5] 4.00-5.00 sec 997 MBytes 8.37 Gbits/sec 0 331 KBytes
[ 5] 5.00-6.00 sec 1.00 GBytes 8.60 Gbits/sec 1 204 KBytes
[ 5] 6.00-7.00 sec 872 MBytes 7.31 Gbits/sec 0 285 KBytes
[ 5] 7.00-8.00 sec 1.02 GBytes 8.72 Gbits/sec 0 348 KBytes
[ 5] 8.00-9.00 sec 835 MBytes 7.01 Gbits/sec 1 257 KBytes
[ 5] 9.00-10.00 sec 718 MBytes 6.02 Gbits/sec 1 230 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 8.55 GBytes 7.35 Gbits/sec 5 sender
[ 5] 0.00-10.15 sec 8.55 GBytes 7.24 Gbits/sec receiver

Phew. This is really bad for me, since stable and reliable transfer is going to be super important once I run some storage on this network. My issues with this test are the low speed, the high amount of fluctuation in speed, and the number of retries (yes, I know it's only 5), along with the unstable CWND.
In my research, I've tried a lot of different sysctl tunables on both Proxmox and FreeNAS without getting any better or more stable performance. The tests above, though, are nearly clean and without any tunables; certainly out of the box my results are as bad, if not worse.

It's my belief that for whatever reason, when FreeNAS is sending data over to Proxmox (Debian, really), the two are unable to agree on stable buffer settings, and this is causing the congestion window (CWND) to fluctuate greatly. I believe the retries come from the CWND going up and down, and perhaps a "full" window not allowing a packet in.
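For anyone wanting to watch this while a test runs: on the Debian/PVE side the per-socket TCP internals can be inspected with ss, and on the FreeBSD side netstat keeps protocol-wide retransmit counters (the address below is just my FreeNAS test bed from the tests above):
Code:
# Debian/PVE side: cwnd, rtt and retransmits for sockets talking to the FreeNAS box
# (cwnd is only reported for whichever side is sending)
watch -n1 'ss -ti dst 10.200.108.65'

# FreeNAS/FreeBSD side: protocol-wide retransmit statistics
netstat -s -p tcp | grep -i retrans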
 
Hello,
I am having a very similar, or possibly even identical, issue, which I described in detail here: https://www.ixsystems.com/community/threads/10gbe-performance-issue-in-one-direction.85552/. I managed to rule out all hardware issues and came to the same conclusion: it has something to do with FreeNAS' networking, or with the combination of FreeNAS and Proxmox/Debian. I don't know.

I never thought it could be an operating-system-related issue, because I thought the negotiations around the CWND are defined by TCP and not by the OS.

If anyone finds any insights on this please post.
Thank you so much.
Greetings
mimesot
 
Hi,

Maybe it is only poor driver performance. I would try the same tests using a Linux VM and see if the performance/test results are the same. I would also try, on the same host, the same tests between the PVE host and a FreeNAS VM, where any switch is out of the question!
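If you want to check the driver angle first, something like this would at least show which driver and firmware the NICs are using (eno1 is a placeholder for the interface name):
Code:
# Driver name, version and NIC firmware on PVE
ethtool -i eno1

# PCI view of the adapters and the kernel driver bound to them
lspci -nnk | grep -iA3 ethernet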
 
Mimesot, reading your post on the FreeNAS forum, I do think we are battling the same thing. I have indeed been able to tune this for better (acceptable) performance on both the Proxmox and FreeNAS side. I want to say that the key setting was on the FreeNAS side, but that feels like a lifetime ago. I will refer to my notes and post up my settings.
Are you on 10gbe? If not, be advised that certain sysctl items have different syntax.
I’ll try my best to post back within 24 hours.
 
I'm hopeful this won't be too raw, information-wise. I have found personally that with this issue the "HERE, TRY ALL THESE SETTINGS" method hasn't really worked out, but that said, here are all my settings! :) I found that it was important to understand all of them; however, I may not be able to explain all of them at this time, nor my full logic. I can probably link to some of the reference material.
First, it is my belief that what sealed the deal for me was disabling Large Receive Offload (LRO) on the Debian side. My interfaces file looks something like this for my 10gbe interface:
Code:
auto eno1
allow-hotplug eno1
iface eno1 inet static
        address 10.200.108.55
        netmask 255.255.255.0
        post-up ethtool -K eno1 lro off

I'm using post-up, but there are other ways. I'm unaware of the pros and cons of the other methods, and this seemed to work for me. If it were me retesting, I would make that change alone and see what difference it makes. It might be worth mentioning that I'm not tweaking the MTU, as far as I can see; I think I settled on 1500 and not worrying about jumbo frames.
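If you just want to try it quickly before editing the interfaces file, the same change can be made at runtime (again assuming the interface is eno1):
Code:
# Check the current offload settings
ethtool -k eno1 | grep large-receive-offload

# Turn LRO off until the next reboot; the post-up line makes it stick
ethtool -K eno1 lro off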

Okay, next is my sysctl.conf for debian:
Code:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216
net.ipv4.tcp_mem = 1638400 1638400 1638400
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_fack = 0
net.ipv4.tcp_slow_start_after_idle = 0
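To apply and spot-check these without rebooting, something like this should do (assuming the settings live in /etc/sysctl.conf):
Code:
# Re-read /etc/sysctl.conf and apply the values
sysctl -p

# Verify a couple of them took effect
sysctl net.ipv4.tcp_rmem net.core.rmem_max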


And then here are my Freenas Tunables:
Variable                          Value      Type
cc_cubic_load                     YES        loader
cc_htcp_load                      YES        loader
kern.ipc.maxsockbuf               16777216   sysctl
net.inet.tcp.abc_l_var            44         sysctl
net.inet.tcp.delayed_ack          0          sysctl
net.inet.tcp.initcwnd_segments    44         sysctl
net.inet.tcp.minmss               536        sysctl
net.inet.tcp.mssdflt              1448       sysctl
net.inet.tcp.recvbuf_inc          65536      sysctl
net.inet.tcp.recvbuf_max          16777216   sysctl
net.inet.tcp.recvspace            1048576    sysctl
net.inet.tcp.sack.enable          0          sysctl
net.inet.tcp.sendbuf_inc          65536      sysctl
net.inet.tcp.sendbuf_max          16777216   sysctl
net.inet.tcp.sendspace            1048576    sysctl
net.link.ifqmaxlen                2048       loader
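I set these through the FreeNAS Tunables page, but for reference, here is a rough sketch of what they correspond to on plain FreeBSD (the "loader" type entries go in /boot/loader.conf, the "sysctl" type ones can be applied at runtime):
Code:
# /boot/loader.conf ("loader" type tunables)
cc_cubic_load="YES"
cc_htcp_load="YES"
net.link.ifqmaxlen="2048"

# "sysctl" type tunables, applied at runtime, e.g.:
sysctl net.inet.tcp.delayed_ack=0
sysctl net.inet.tcp.sendspace=1048576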

For Freenas (FreeBSD) I can tell you that I referred to this page a lot:
https://calomel.org/freebsd_network_tuning.html
 
BTW, I have had this performance issue on both fiber and copper, on multiple different systems, interfaces, OSes, etc. Technically, I have never tried a Linux flavor other than Debian/Ubuntu, however. It might be interesting to see what CentOS or Fedora do in terms of performance, along with plain FreeBSD.
I also never tried FreeBSD outside of FreeNAS; it just seemed like a non-starter to try outside of that context.
 
Maybe it is only poor driver performance. I would try the same tests using a Linux VM and see if the performance/test results are the same. I would also try, on the same host, the same tests between the PVE host and a FreeNAS VM, where any switch is out of the question!

Hi,
Thank you for your reply,
Unfortunately I am not talking about VMs at all. I have FreeNAS installed on bare metal, and I am sending data to a Proxmox host, also on bare metal.
I am pretty convinced that the current install of FreeNAS comes with proper drivers for Intel NICs.

I also replaced the FreeNAS OS SSD with one running Debian 10 in order to rule out any cabling, NIC, switch, etc. issues. Debian 10 to Proxmox works fine (9.3 Gbit/s in both directions using iperf3).

I could indeed install FreeNAS inside a VM for testing purposes, but if I do so, does FreeNAS even utilize the NIC, or is the transfer just handled internally over the vmbr0 virtual switch?

Thanks and Greetings
mimesot
 
Mimesot, reading your post on the FreeNAS forum, I do think we are battling the same thing. I have indeed been able to tune this for better (acceptable) performance on both the Proxmox and FreeNAS side. I want to say that the key setting was on the FreeNAS side, but that feels like a lifetime ago. I will refer to my notes and post up my settings.
Are you on 10gbe? If not, be advised that certain sysctl items have different syntax.
I’ll try my best to post back within 24 hours.

Hi,
Thank you for your huge effort. I will work through your texts as soon as I have time. We might also have different issues; yeah, you might be right. I just tried FreeNAS to bare-metal Windows 10 and had the same ridiculous 200 MB/s over 10GbE. So it's surely a FreeNAS issue, isn't it?
Greetings
mimesot
 
I have used Ubuntu live CD images to test a "different OS" on both my FreeNAS equipment and my Proxmox equipment, and I have had the same results as your tests: full 10gbe performance in both directions. I used that type of test to rule out the idea that there was a cable issue.
Regarding Linux to Windows 10 iperf3 performance... it's a mess. It's possible you will have better results with iperf vs. iperf3; I cannot recall.
I think there are magic switches you can use with iperf3 to do a true and proper test; I think the simple defaults won't maximize the 10gbe connection. You might consider running multiple streams, although that test can be a bit dubious: great for making sure the cable has the capacity, but bad for finding out whether the buffers and other settings are right. In the end it might be moot anyway, because you won't be running Windows 10 like that.
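For what it's worth, the switches I have in mind are roughly these (the values and the target address, my Proxmox box from the earlier tests, are only examples, not a recommendation):
Code:
# Several parallel streams, a longer run, and an explicit socket buffer size
iperf3 -c 10.200.108.45 -P 4 -t 30 -w 1M

# Reverse mode: the server side sends, without swapping the roles
iperf3 -c 10.200.108.45 -R -t 30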
 
I think that throwing FreeNAS into a virtual machine is just going to be more of a headache. I'm aware that on pfSense (also FreeBSD) you have to disable hardware checksum offloading for the virtio NIC. Beyond that, I'm unsure how it really handles 10gbe; I've never cared about peak performance from my pfSense installations, as usually those are just running VoIP phones and not pushing the limits of the tech by any stretch.
 
