Bonding causes VM networking to fail.

wipeout_dude

Member
Jul 15, 2012
I have been playing with network bonding over the last two days, and the primary issue I'm having is that when I use bonding, networking inside the VMs fails. They just won't connect to the network.

Here is my network config:

Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address  192.168.0.251
        netmask  255.255.255.0
        gateway  192.168.0.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

Any ideas?

A secondary issue is the bonding mode.

When using 802.3ad the performance is terrible (95 Mbps on a 2 x 1 Gb link) and traffic only seems to go through one of the links. I'm still playing with that, but so far balance-rr gets me the best raw performance, about 1.5 Gbps.
 
When using 802.3ad the performance is terrible (95 Mbps on a 2 x 1 Gb link) and traffic only seems to go through one of the links.

That is expected; it's the way 802.3ad works. But you can double the speed if you open two connections.
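For example, something like this on the client side (just a sketch; 192.168.0.251 is the bridge IP from your config, and with two parallel streams the LACP hash can, with a bit of luck, put them on different links):

Code:
# on the Proxmox host
iperf -s

# on the client: two parallel TCP streams instead of one
iperf -c 192.168.0.251 -P 2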
 
Hi, I'm not getting the ~950 Mbps on a 2 x 1 Gbps link that I would expect to see. I'm getting 95 Mbps, sometimes 140 Mbps. Still working on it to see if I can figure it out.

Out of interest, it seems that the VMs' network failure had something to do with using balance-rr mode on the bond. Still testing to confirm that too.
 
802.3ad works here with the expected results (900 Mbps and more). Take a look at your switch settings. Do you get the expected results without bonding?
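For reference, a rough sketch of what the bond stanza would look like for 802.3ad (the two switch ports must also be configured as an LACP/802.3ad aggregation group, otherwise you get exactly this kind of odd behaviour):

Code:
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4

layer3+4 hashes on IP and port, so different TCP connections at least have a chance of landing on different links.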
 
For anyone finding this thread:

Don't use balance-rr for your bond. Although it has the highest raw throughput (because it uses the links simultaneously), the VMs' networking doesn't appear to like it very much.

In my testing, balance-alb and balance-tlb gave intermittent connectivity issues. 802.3ad gave a stable connection, but I couldn't get the throughput past that of a single link (~945 Mbps), even when accessing it from two sources. On my BSD box, with an LACP bond through the same switch and two sources, I can get up to 1.8 Gbps.

#All throughput tests done with iperf#
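If anyone wants to check their own setup, the bonding driver exposes its state in /proc; for an 802.3ad bond it shows whether the LACP aggregator actually formed and which slaves joined it:

Code:
cat /proc/net/bonding/bond0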
 

balance-alb and balance-tlb use some ARP tricks, so they're not very stable.
LACP with the Linux bonding module balances per flow (according to the transmit hash policy), so a single connection never exceeds the speed of one link, and with the default layer2 hash even different clients can end up on the same link.

Red Hat is working on a new project: libteam (NIC teaming)
https://fedorahosted.org/libteam/wiki/CompareToBonding

It seems to support load balancing on LACP :)
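I haven't tried it myself, but going by the libteam examples, a teamd config for an LACP runner looks roughly like this (device and port names are just placeholders):

Code:
{
    "device": "team0",
    "runner": {
        "name": "lacp",
        "active": true,
        "fast_rate": true,
        "tx_hash": ["eth", "ipv4", "tcp"]
    },
    "link_watch": { "name": "ethtool" },
    "ports": { "eth0": {}, "eth1": {} }
}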