bonding + bridging freeze

tex

New Member
Nov 24, 2009
Hi all,

I have 2 NICs, eth0 and eth1, bonded into bond0 (I've tried modes 0, 1, and 6), then bridged into vmbr0.
Every VM (container) has a virtual eth bridged to vmbr0.
The network often freezes for 1-2 minutes, then starts working normally again.
The only error message is:
Nov 24 15:12:10 sun01 kernel: bond0: received packet with own address as source address
I've tried changing the bonding mode, but nothing changes...

Any suggestions?

Thanks
Crtomir

P.S.: sorry for my horrible English
 
I have installed the latest available kernel, but nothing changes:

'Linux sun01 2.6.24-9-pve #1 SMP PREEMPT Tue Nov 17 09:34:41 CET 2009 x86_64 GNU/Linux'
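Before blaming the kernel, it can help to confirm what the bond is actually doing. A quick check (this is the standard Linux bonding status file; the fallback message is only for machines where the driver isn't loaded):

```shell
# Print the running bonding mode and each slave's MII link state.
if [ -r /proc/net/bonding/bond0 ]; then
    out=$(grep -E 'Bonding Mode|Slave Interface|MII Status' /proc/net/bonding/bond0)
else
    out="bonding driver not loaded on this machine"
fi
echo "$out"
```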
 
Maybe you should enable Spanning Tree Protocol on the bridge ('bridge_stp on') - not sure.
 
With STP on, the error message (received packet with own address as source address) is continuous.
 
I'm having the same problem.
This is my /etc/network/interfaces
Code:
auto lo
iface lo inet loopback
auto bond0
iface bond0 inet static
       address   158.42.169.186
       netmask   255.255.254.0
       gateway   158.42.168.250
       network   158.42.168.0
       broadcast 158.42.169.255
       slaves eth0 eth1
auto vmbr0
iface vmbr0 inet static
       address   158.42.169.186
       netmask   255.255.254.0 
       gateway   158.42.168.250
       network   158.42.168.0  
       broadcast 158.42.169.255
       bridge_ports bond0
       bridge_stp off    
       bridge_fd 0
I'm using this NIC:
Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)
 
Maybe try this instead:
Code:
auto lo
iface lo inet loopback
auto bond0
iface bond0 inet manual
       slaves eth0 eth1

auto vmbr0
iface vmbr0 inet static
       address   158.42.169.186
       netmask   255.255.254.0 
       gateway   158.42.168.250
       network   158.42.168.0  
       broadcast 158.42.169.255
       bridge_ports bond0
       bridge_stp off    
       bridge_fd 0
 
Changing to this:
Code:
auto bond0
iface bond0 inet manual
       slaves eth0 eth1
I'm having the same problem. Also, the VMs can reach my gateway but cannot reach some machines on the same LAN.
If I don't use the bond, everything works perfectly.

In /etc/modprobe.d/bonding I have:
Code:
install bond0 /sbin/modprobe bonding -o bond0 mode=0 miimon=100 downdelay=200

I'm not an expert on bonding, so maybe something is wrong there.
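For reference, mode=0 is balance-rr, which generally assumes the switch ports are grouped into a static link aggregation; without that, the switch sees the same MAC address on two ports, which matches the "own address as source" messages above. The symbolic mode names also work in the modprobe options, so one alternative worth trying (a sketch, keeping the same miimon/downdelay values as above) is balance-alb, which needs no switch-side configuration:

```
install bond0 /sbin/modprobe bonding -o bond0 mode=balance-alb miimon=100 downdelay=200
```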
 
After removing the IP settings on the bond, the VMs cannot reach the LAN and the server cannot see the other servers in the cluster. After a few minutes I lost the LAN entirely.
After adding the IP settings back, it works like before.
I think I need to learn more about bonding...
 
My network configuration is:

Code:
sun02:~# cat /etc/network/interfaces 
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode balance-rr

auto bond1
iface bond1 inet manual
    slaves eth2 eth3
    bond_miimon 100
    bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
    address  192.168.250.2
    netmask  255.255.255.0
    gateway  192.168.250.254
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address  192.168.249.20
    netmask  255.255.255.0
    bridge_ports bond1
    bridge_stp off
    bridge_fd 0
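One thing worth noting about the config above: balance-rr generally relies on the switch ports being configured as a static etherchannel/trunk, and active-backup is the only bonding mode guaranteed to need no switch support at all. A sketch of the same bond0 stanza with that mode, if you want to rule the switch out:

```
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode active-backup
```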

and the same configuration, with other IPs, on the second node (2 nodes in the cluster).
On boot, and sometimes afterwards, this message appears:
Code:
Nov 25 14:09:48 sun02 kernel: eth1: duplicate address detected!
Nov 25 14:09:48 sun02 kernel: eth0: duplicate address detected!
Nov 25 14:09:49 sun02 kernel: eth0: duplicate address detected!
Nov 25 14:09:49 sun02 kernel: eth1: duplicate address detected!
Nov 25 14:09:50 sun02 kernel: eth1: duplicate address detected!
ifconfig shows:
Code:
sun02:~# ifconfig 
bond0     Link encap:Ethernet  HWaddr 00:1e:68:57:4b:a4  
          inet6 addr: fe80::21e:68ff:fe57:4ba4/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:6824 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5349 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1299892 (1.2 MiB)  TX bytes:1108714 (1.0 MiB)

bond1     Link encap:Ethernet  HWaddr 00:1e:68:57:4b:a6  
          inet6 addr: fe80::21e:68ff:fe57:4ba6/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:2189 errors:0 dropped:0 overruns:0 frame:0
          TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1152252 (1.0 MiB)  TX bytes:2052 (2.0 KiB)

eth0      Link encap:Ethernet  HWaddr 00:1e:68:57:4b:a4  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:3386 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2675 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:647155 (631.9 KiB)  TX bytes:555427 (542.4 KiB)

eth1      Link encap:Ethernet  HWaddr 00:1e:68:57:4b:a4  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:3438 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2674 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:652737 (637.4 KiB)  TX bytes:553287 (540.3 KiB)

eth2      Link encap:Ethernet  HWaddr 00:1e:68:57:4b:a6  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:502 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:181842 (177.5 KiB)  TX bytes:1040 (1.0 KiB)

eth3      Link encap:Ethernet  HWaddr 00:1e:68:57:4b:a6  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1687 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:970410 (947.6 KiB)  TX bytes:1012 (1012.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:658 errors:0 dropped:0 overruns:0 frame:0
          TX packets:658 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:158751 (155.0 KiB)  TX bytes:158751 (155.0 KiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

veth102.0 Link encap:Ethernet  HWaddr 00:18:51:e2:3a:18  
          inet6 addr: fe80::218:51ff:fee2:3a18/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1207 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2603 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:240998 (235.3 KiB)  TX bytes:322747 (315.1 KiB)

veth102.1 Link encap:Ethernet  HWaddr 00:19:61:35:fb:c3  
          inet6 addr: fe80::219:61ff:fe35:fbc3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:69 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:356 (356.0 B)  TX bytes:3490 (3.4 KiB)

veth104.0 Link encap:Ethernet  HWaddr 00:18:51:34:1b:d2  
          inet6 addr: fe80::218:51ff:fe34:1bd2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:994 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2404 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:222548 (217.3 KiB)  TX bytes:302672 (295.5 KiB)

veth104.1 Link encap:Ethernet  HWaddr 00:18:54:45:fb:c1  
          inet6 addr: fe80::218:54ff:fe45:fbc1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:356 (356.0 B)  TX bytes:2932 (2.8 KiB)

veth106.0 Link encap:Ethernet  HWaddr 00:18:51:21:6c:ba  
          inet6 addr: fe80::218:51ff:fe21:6cba/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1972 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3347 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:375674 (366.8 KiB)  TX bytes:441762 (431.4 KiB)

veth106.1 Link encap:Ethernet  HWaddr 00:78:77:4a:c1:cc  
          inet6 addr: fe80::278:77ff:fe4a:c1cc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:52 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:356 (356.0 B)  TX bytes:2578 (2.5 KiB)

vmbr0     Link encap:Ethernet  HWaddr 00:1e:68:57:4b:a4  
          inet addr:192.168.250.2  Bcast:192.168.250.255  Mask:255.255.255.0
          inet6 addr: fe80::21e:68ff:fe57:4ba4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2750 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1175 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:349125 (340.9 KiB)  TX bytes:210814 (205.8 KiB)

vmbr1     Link encap:Ethernet  HWaddr 00:1e:68:57:4b:a6  
          inet addr:192.168.249.20  Bcast:192.168.249.255  Mask:255.255.255.0
          inet6 addr: fe80::21e:68ff:fe57:4ba6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:78 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:4008 (3.9 KiB)  TX bytes:180 (180.0 B)

I have checked all IPs (IPv4) and all MAC addresses, but there are no dupes...

Let me venture a guess... an IPv6 problem? :confused:


thanks
 
SOLVED.

The problem was not on the server side, but on the clients. On 3 clients I had configured subnet 192.168.222.0 with an alias on 192.168.250.0, and the network crashed/froze. The 4th client had only 192.168.250.0 and everything worked fine.
After reconfiguring the other clients, everything works.

Thanks.
 
Finally I found my problem too. The servers were connected through a switch; I changed the switch and now it works fine, so I must have had some bad configuration on that switch.
Thank you for all your help and time.
 
I also have the same problem. I ran more tests, and ping is bad; it loses a lot of packets... but I noticed that if I disable one of the two network cards in the bond, everything works fine...
The strange thing is that this only happens with this version of Debian (Proxmox)... any ideas?
 
From the experiments I've made, it appears that to use a bridge with bonding you must use bonding mode 6 (balance-alb).
Try it for yourself; I hope my tests hold true.
 
Unfortunately, even after several more trials, that turned out to be wrong...
I eventually found that the problem depends on the configuration of the bridge...
Enter these parameters - they work!

Code:
bridge_stp on
bridge_maxwait 0
bridge_maxage 0
bridge_fd 0
bridge_ageing 0
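For what it's worth, bridge_fd 0 removes the forwarding delay when a port comes up, and bridge_ageing 0 disables MAC-address learning timeouts, so the bridge floods frames instead of holding on to stale forwarding entries when the bond moves MACs between slaves; that plausibly explains why these settings help. You can read the values back via sysfs to confirm they took effect (times are in centiseconds; the fallback branch is only for machines without the bridge):

```shell
# Read the bridge timers back from sysfs; vmbr0 is the bridge name
# used in this thread and may differ on your system.
status=""
for f in forward_delay ageing_time max_age; do
    if [ -r "/sys/class/net/vmbr0/bridge/$f" ]; then
        status="$status $f=$(cat "/sys/class/net/vmbr0/bridge/$f")"
    else
        status="$status $f=absent"
    fi
done
echo "bridge timers:$status"
```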

Let me know whether it works for you or not.
Hello everyone
 
We solved it, after a lot of fighting with the bonds, by removing all the bonds.
 
have you tried these extra parameters?

Code:
bridge_stp on
bridge_maxwait 0
bridge_maxage 0
bridge_fd 0
bridge_ageing 0

And it works, at least for us - not bad at all!
6 VLANs on 1 bond (balance-rr), 2-node cluster (DRBD).
Nothing changed on the switch stack (H3C) except disabling spanning tree.

Before your params: timeouts, lost pings, etc.
Now: stable ping delay, but some DUPs!

Live migration OK.

Thank you!

Christophe.
 
