While setting up bonding on my OpenFiler box I came across one small reference on the Internet that surprised me. I had always assumed that bonding ports with 802.3ad was (to use an analogy) basically like switching from a smaller water pipe to a larger one. If my storage server and the connecting server each had 2 bonded NICs, I assumed I could get 2 Gbps between them. What I discovered is that if each side has only one IP address (which my OpenFiler and Proxmox servers do), then once the initial path between the two endpoints is chosen, every frame keeps using that same path. In other words, the traffic stays on one NIC rather than hopping from NIC to NIC across the bond, so the flow is tied to a single 1 Gbps link. My Proxmox box accesses the OpenFiler over NFS at a single IP, so I am limited to 1 Gbps between them no matter how many NICs I add to my OpenFiler or PM box, unless I add multiple IP addresses to each side and cross-connect. I realize that more than two 1 Gbps NICs going full blast would probably overload my storage system, but for bursts I don't want to be tied down.
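To illustrate why a single pair of endpoints is stuck on one link, here is a simplified Python sketch of the bonding driver's default "layer2" transmit hash (my own illustration with made-up MAC bytes, not the actual kernel code): the bond picks an outgoing slave by XORing the source and destination MAC addresses, so a fixed pair of hosts always hashes to the same NIC.

def layer2_slave(src_mac_last_byte, dst_mac_last_byte, n_slaves):
    # Simplified form of the layer2 xmit hash policy: XOR the last
    # bytes of the two MAC addresses, then modulo the slave count.
    return (src_mac_last_byte ^ dst_mac_last_byte) % n_slaves

# One OpenFiler <-> Proxmox pair = one fixed (src, dst) MAC pair,
# so every frame maps to the same slave and rides one 1 Gbps link:
for _ in range(3):
    print(layer2_slave(0x2a, 0x17, 2))  # prints the same index every time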
It appears that the only Linux bonding mode that allows striping data across multiple NICs is balance-rr - http://www.linuxfoundation.org/coll...ing_Mode_Selection_for_Single_Switch_Topology . I haven't tried it yet, so I have no idea how well it performs.
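For contrast, here is a toy Python sketch (again my own illustration, not driver code) of what balance-rr does differently: it ignores hashing entirely and cycles through the slaves per packet, which is why a single flow can be striped. One caveat from the kernel bonding docs is that per-packet striping can deliver TCP segments out of order.

from itertools import count

_counter = count()

def rr_slave(n_slaves):
    # balance-rr, in spirit: each successive packet goes out the next
    # slave in rotation, regardless of source/destination addresses.
    return next(_counter) % n_slaves

for _ in range(4):
    print(rr_slave(2))  # 0, 1, 0, 1 -- packets alternate across NICs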
I can see why 10 Gbps NICs would be a good thing. I came across the following during my research and would also like to know if anyone has tried using HCA adapters between their Proxmox server and storage device: http://davidhunt.ie/wp/?p=232 If you have tried this, what were the results? This could be a good cost-effective alternative, since 10 Gbps Ethernet cards and switch modules are way too expensive at the moment.
Reference info for the research I did:
http://davidhunt.ie/wp/?p=232
http://davidhunt.ie/wp/?p=375
http://blog.scottlowe.org/2009/07/01/republished-dispelling-some-vmware-over-nfs-myths/
http://virtualgeek.typepad.com/virt...lp-our-mutual-nfs-customers-using-vmware.html