Bonding, bandwidth and HCA adapters

bsnipes

Member
Mar 23, 2010
While setting up bonding on my OpenFiler box I came across one small reference on the Internet that surprised me. I had always assumed that when you bond ports together using 802.3ad it was (to use an analogy) basically like switching from a smaller water pipe to a larger one. If my storage server and the connecting server each had 2 bonded NICs, I assumed I could get 2 Gbps between them. What I discovered is that if each side has only one IP address (which my OpenFiler and Proxmox servers do), then once the initial path between the two endpoints is chosen, the bond keeps using that same path every time. The traffic doesn't hop from NIC to NIC across the bond; a single flow is tied to a single 1 Gbps link. My Proxmox box accesses the OpenFiler over NFS to one IP, so I am limited to 1 Gbps between them no matter how many NICs I add to my OpenFiler or PM box, unless I add multiple IP addresses to each side and cross-connect. I realize that more than two 1 Gbps NICs going full blast would probably overload my storage system, but for bursts I don't want to be tied down.
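As I understand it from the bonding driver documentation, that is exactly what the default transmit hash policy (layer2) does: it picks the outgoing slave from the source and destination MAC addresses, so one host-to-host conversation always lands on the same NIC. A quick way to check what a bond is actually doing (assuming the bond device is named bond0; adjust to your setup):

    cat /proc/net/bonding/bond0                        # mode, LACP state and the enslaved NICs
    cat /sys/class/net/bond0/bonding/xmit_hash_policy  # which hash decides the outgoing NIC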

It appears that the only Linux bonding mode that stripes data across multiple NICs is balance-rr - http://www.linuxfoundation.org/coll...ing_Mode_Selection_for_Single_Switch_Topology . I haven't tried it, so I have no idea how well it performs.
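For anyone who wants to experiment, a balance-rr bond on a Debian-based box like Proxmox would look roughly like this in /etc/network/interfaces. This is only a sketch: the NIC names and the address are placeholders, and it assumes the ifenslave package is installed.

    auto bond0
    iface bond0 inet static
            address 192.168.10.2
            netmask 255.255.255.0
            slaves eth0 eth1
            bond_mode balance-rr
            bond_miimon 100

One caveat from the bonding docs: balance-rr can deliver packets out of order, so TCP may need a higher reorder tolerance (the net.ipv4.tcp_reordering sysctl) to keep throughput up, and the switch ports have to be grouped accordingly.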

I can see why 10 Gbps NICs would be a good thing. I came across the following during my research and would also like to know if anyone has tried using HCA adapters between their Proxmox server and storage device: http://davidhunt.ie/wp/?p=232 If you have tried this, what were the results? It could be a cost-effective alternative, since 10 Gbps Ethernet cards and switch modules are way too expensive ATM.
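From what I've read so far (I haven't run this myself, so treat it purely as a starting point), the HCAs show up as ordinary network interfaces once the IPoIB module is loaded, and one node on the fabric has to run a subnet manager such as opensm unless the switch provides one. Roughly, with the interface name and address as placeholders:

    modprobe ib_ipoib               # exposes the HCA as ib0
    /etc/init.d/opensm start        # subnet manager, needed somewhere on the fabric

    auto ib0
    iface ib0 inet static
            address 192.168.20.2
            netmask 255.255.255.0
            mtu 65520               # connected mode allows the large MTU

    echo connected > /sys/class/net/ib0/mode   # switch IPoIB from datagram to connected mode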

Reference info for the research I did:
http://davidhunt.ie/wp/?p=232
http://davidhunt.ie/wp/?p=375
http://blog.scottlowe.org/2009/07/01/republished-dispelling-some-vmware-over-nfs-myths/
http://virtualgeek.typepad.com/virt...lp-our-mutual-nfs-customers-using-vmware.html
 
bsnipes said:
While setting up bonding on my OpenFiler box I came across one small reference on the Internet that surprised me. I had always assumed that when you bond ports together using 802.3ad it was (to use an analogy) basically like switching from a smaller water pipe to a larger one. ...

Yes, I also use 802.3ad (a.k.a. mode 4 or LACP) with a matching (D-Link) switch, and I'm happy with it.
But it doesn't work like your water pipe analogy. From one host to another you only ever get at most one 1 Gbit connection, which also means no packet mis-ordering.
Connect 2, 10 or even 100 hosts to one server, though, and the two 1 Gbit links are used in parallel. That is one point.
The other point is reliability: if one link or NIC fails, the network keeps working.
I think if you really want bandwidth in the network, there is no way around 10Gb ... ;-)
A rough sketch of the bond config is below the links.

http://www.cyberciti.biz/howto/question/static/linux-ethernet-bonding-driver-howto.php
http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding
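If it helps, an 802.3ad bond on a Debian/Proxmox box looks roughly like this in /etc/network/interfaces. It is only a sketch: the NIC names and the address are examples, and the switch ports have to be configured as a matching LACP group.

    auto bond0
    iface bond0 inet static
            address 192.168.1.5
            netmask 255.255.255.0
            slaves eth0 eth1
            bond_mode 802.3ad
            bond_miimon 100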
 
The other point is reliability: if one link or NIC fails, the network keeps working.
I think if you really want bandwidth in the network, there is no way around 10Gb ... ;-)

How often have you seen a NIC fail, though? I've been doing networking for about 13 years, and unless there was a power spike that got onto the Ethernet wire, I think I've only seen a couple fail (NE2000 NICs). One cool thing about those HCA adapters is that you can get them in 10 Gbps speeds for a fraction of the cost of 10 Gb Ethernet equipment. I'm personally thinking of a dual-port 4 Gbps HCA in my OpenFiler box and a 4 Gbps one in each of my Proxmox servers. I know the RAID subsystem won't handle that much throughput, but my limitation won't be on the wire. If I'm getting it right, the whole setup could be had for under $300 on eBay.
In my case PM accesses OpenFiler via NFS for all the KVM image files.
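If you want to see what the wire itself can do before blaming the RAID subsystem, something like iperf between the two boxes is an easy check. Assuming iperf is installed on both ends, and with the IP as a placeholder:

    iperf -s                            # on the OpenFiler box
    iperf -c 192.168.10.1 -P 4 -t 30    # on the Proxmox box: 4 parallel streams for 30 seconds

With 802.3ad and the default layer2 hash, all of those streams normally ride the same 1 Gbit link, which makes the single-path behaviour easy to see.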
 
