Hi,
I am experimenting with bonding in preparation for my next cluster build.
I benchmarked a single 1GbE connection between the nodes, and iperf showed close to 1 Gbit/s, which is what I expected.
Then I bonded two 1GbE NICs together (balance-rr) and benchmarked again; iperf showed only 1.5 Gbit/s, which is substantially slower than I expected.
The big surprise came when I bonded three 1GbE NICs together: iperf showed only 1.4 Gbit/s, i.e. less than with two bonded NICs.
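For reference, this is roughly how I set up the bond and ran the test (interface names and addresses are just placeholders here, and I'm showing plain iproute2 commands rather than my actual distro config):

  # create the bond in round-robin mode and enslave the ports
  ip link add bond0 type bond mode balance-rr
  ip link set eth1 down && ip link set eth1 master bond0
  ip link set eth2 down && ip link set eth2 master bond0
  ip link set eth3 down && ip link set eth3 master bond0
  ip link set bond0 up
  ip addr add 192.168.10.1/24 dev bond0

  # on the other node (192.168.10.2): same bond setup, then
  iperf -s

  # on this node:
  iperf -c 192.168.10.2 -t 30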
Is that normal? Does bonding create that much overhead?
The 1GbE NICs are all on the same quad-port card in each host. I am using CAT7 cables and identical switches.
But: the cables are not all the same length (I don't have enough of one length). One connection uses only 1 m cables, one uses only 2 m cables, and one uses a mix of 1 m and 2 m cables. Could that have anything to do with the problem? In other words, would I get (near) doubled and (near) tripled speeds if I used only cables of the same length?
Thanks!