My bridge does not work

hi,

try the pvetest repository:

Code:
#deb http://download.proxmox.com/debian lenny pve
deb http://download.proxmox.com/debian lenny pvetest

then update and upgrade.
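For example, with the standard apt commands (a dist-upgrade is what pulls in updated PVE kernel packages):

Code:
apt-get update
apt-get dist-upgrade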

regards
 
Hi!

My server is now connected to a switch with two 10 Gb NICs. The switch is configured to trunk those two ports. Here is my configuration:

Code:
auto bond0
iface bond0 inet manual
  up ifconfig bond0 0.0.0.0 up
  slaves eth4 eth6
  bond_mode active-backup
  bond_miimon 100
  bond_downdelay 200
  bond_updelay 200
  post-up ifconfig eth4 mtu 9000
  post-up ifconfig eth6 mtu 9000
  post-up ifconfig bond0 mtu 9000

auto bond0.235
iface bond0.235 inet manual

auto vmbr0
iface vmbr0 inet static
  address 1.2.2.69
  netmask 255.255.255.0
  gateway 1.2.2.1
  bridge_ports bond0.235
  bridge_stp off
  bridge_fd 0
With this setup, I can ping 1.2.2.1 for a second or two, but then the host crashes... After that, when I try to reboot, the host always crashes, even in single-user mode...

What is wrong in my config? What can I do to boot the host without bringing up the network interfaces?
Thanks
 
I reinstalled PVE this morning. When I create a simple VLAN with the following configuration, everything works (I can connect to the net and update PVE). But when I try to connect the bridge vmbr0 to that VLAN, the host crashes... :( Help!

Code:
auto bond0
iface bond0 inet manual
  up ifconfig bond0 0.0.0.0 up
  slaves eth4 eth6
  bond_mode active-backup
  bond_miimon 100
  bond_downdelay 200
  bond_updelay 200
  post-up ifconfig eth4 mtu 9000
  post-up ifconfig eth6 mtu 9000
  post-up ifconfig bond0 mtu 9000

auto vlan235
iface vlan235 inet static
  address 1.2.2.69
  netmask 255.255.255.0
  gateway 1.2.2.1
  vlan-raw-device bond0
  post-up ifconfig vlan235 mtu 1500
How can I connect vmbr0 to vlan235?
 
To make sure that the bonding was not the problem, I removed it from the equation. The system still goes down (kernel panic) with the following configuration:

Code:
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
  address 1.2.2.69
  netmask 255.255.255.0
  gateway 1.2.2.1
  bridge_ports eth4.235
  bridge_stp off
  bridge_fd 0
Is it possible that the way the trunking is done on the switch crashes the kernel? What did I miss in the VLAN configuration? I have tested it with kernels 2.6.32 and 2.6.35.

The strange thing is that when I set up the VLAN without vmbr0, the host doesn't crash... I really think this is related to the bridge.
This config works (no bridge):
Code:
auto lo
iface lo inet loopback

auto eth4.235
iface eth4.235 inet static
  address 1.2.2.69
  netmask 255.255.255.0
  gateway 1.2.2.1
Code:
fl-vm01:~# pveversion -v
pve-manager: 1.7-10 (pve-manager/1.7/5323)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.7-28
pve-kernel-2.6.32-4-pve: 2.6.32-28
qemu-server: 1.1-25
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-16
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-9
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.13.0-2
ksm-control-daemon: 1.0-4
I really don't want to install anything other than Proxmox VE for my virtualization needs. It is the best solution. Once you have tried it, you will never want to use VMware again! :)
 
Try this setup. You only need to set the MTU on the bond interface; this automatically sets the MTU for whatever NICs are slaved to it, as per this example.
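A quick way to confirm that propagation once the bond is up (plain ifconfig output; eth0/eth1 as in the example below):

Code:
ifconfig bond0 | grep -i mtu   # should report MTU:9000
ifconfig eth0 | grep -i mtu    # slaves pick up the bond's MTU
ifconfig eth1 | grep -i mtu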

Also, you need to have this file set up properly as well.

Create a file /etc/modprobe.d/bonding.conf containing:

Code:
alias bond0 bonding
options bonding mode=active-backup miimon=100 downdelay=200 updelay=200 primary=eth0
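Note that if the bonding module is already loaded with different options, it has to be reloaded (or the host rebooted) before the new options take effect; a minimal sketch, assuming no bond interface is currently up:

Code:
rmmod bonding      # fails if a bond is still active
modprobe bonding   # re-reads the options from /etc/modprobe.d/bonding.conf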

Now open /etc/network/interfaces and paste this in:

Code:
# network interface settings
auto lo
iface lo inet loopback

allow-bond0 eth0
iface eth0 inet manual

allow-bond0 eth1
iface eth1 inet manual

allow-vmbr0 bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode active-backup
up ifconfig bond0 mtu 9000

allow-vmbr0 bond0.235
iface bond0.235 inet manual
vlan-raw-device bond0

auto vmbr0
iface vmbr0 inet static
address 1.2.2.69
netmask 255.255.255.0
gateway 1.2.2.1
bridge_ports bond0.235
bridge_stp off
bridge_fd 0
bridge_maxwait 0
pre-up ifup --allow "$IFACE" bond0
post-down ifdown --allow "$IFACE" bond0
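One way to apply a new interfaces file like this is a full networking restart (doing this over the same link you are connected through can drop your session, so a reboot is the safer option):

Code:
/etc/init.d/networking restart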


Are you doing VLAN trunking on the switch, by the way? Is that why you are trying to specify a VLAN? If yes, have you got the 8021q module loaded in /etc/modules (there is a quick check after the config below)? If not, then don't specify a VLAN at all; it should work without it, so you would have this setup instead:

/etc/network/interfaces

Code:
# network interface settings
auto lo
iface lo inet loopback

allow-bond0 eth0
iface eth0 inet manual

allow-bond0 eth1
iface eth1 inet manual

allow-vmbr0 bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode active-backup
up ifconfig bond0 mtu 9000

auto vmbr0
iface vmbr0 inet static
address 1.2.2.69
netmask 255.255.255.0
gateway 1.2.2.1
bridge_ports bond0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
pre-up ifup --allow "$IFACE" bond0
post-down ifdown --allow "$IFACE" bond0
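For reference, a quick way to check and persistently load the VLAN module (standard commands):

Code:
modprobe 8021q                 # load the module now
lsmod | grep 8021q             # confirm it is loaded
echo 8021q >> /etc/modules     # and load it automatically at boot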
 
Thanks mightymouse2045, I am really looking forward to Monday so I can test it. Yes, the switch does trunking and the 8021q module is loaded.
 
Yeah no probs.

I am surprised there is nothing out there for this config already. To me it seems like a pretty commonplace thing, i.e. multiple NICs - bond - VLANs - bridge.

I can see lots of examples for individual NICs - VLANs - bridge, but nothing with the bond in place except for Xen, which doesn't help because that is set up under /etc/sysconfig/network-scripts, whereas Proxmox keeps everything in the one interfaces file.

Anyway, from what I can see the example I gave you is correct and should work :)
 
It does not work. :(

[Screenshot attachment: Capture.png]

I'm really sad about that :( I am using two Emulex OCE10102-NX 10 GbE cards.
 
I guess it's a bad driver. Which module is loaded for this card? (lspci -vv)
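If your pciutils supports it, lspci -k pulls out just the driver lines:

Code:
lspci -k | grep -A 3 Ethernet   # the "Kernel driver in use" line names the module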
 
Hi...

The driver used is be2net. Here is partial output of the lspci -vv command:

Code:
06:00.0 Ethernet controller: Emulex Corporation OneConnect 10Gb NIC (rev 02) 
    Subsystem: Emulex Corporation Device e622 
    Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ 
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- 
    Latency: 0, Cache Line Size: 64 bytes 
    Interrupt: pin A routed to IRQ 38 
    Region 1: Memory at df378000 (32-bit, non-prefetchable) [size=16K] 
    Region 2: Memory at df380000 (64-bit, non-prefetchable) [size=128K] 
    Region 4: Memory at df3a0000 (64-bit, non-prefetchable) [size=128K] 
    Expansion ROM at df200000 [disabled] [size=512K] 
    Capabilities: [40] Power Management version 3 
        Flags: PMEClk- DSI+ D1- D2- AuxCurrent=375mA PME(D0-,D1-,D2-,D3hot+,D3cold+) 
        Status: D0 PME-Enable- DSel=0 DScale=0 PME- 
    Capabilities: [48] MSI-X: Enable+ Mask- TabSize=32 
        Vector table: BAR=1 offset=00002000 
        PBA: BAR=1 offset=00003000 
    Capabilities: [c0] Express (v2) Endpoint, MSI 00 
        DevCap:    MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <16us 
            ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ 
        DevCtl:    Report errors: Correctable- Non-Fatal+ Fatal+ Unsupported+ 
            RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset- 
            MaxPayload 256 bytes, MaxReadReq 512 bytes 
        DevSta:    CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend- 
        LnkCap:    Port #0, Speed 5GT/s, Width x8, ASPM L0s, Latency L0 <1us, L1 <16us 
            ClockPM- Suprise- LLActRep- BwNot- 
        LnkCtl:    ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ 
            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- 
        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- 
    Capabilities: [100] Advanced Error Reporting <?> 
    Capabilities: [194] Device Serial Number 32-b5-a0-fe-ff-c9-00-00 
    Kernel driver in use: be2net 
    Kernel modules: be2net
Here is the full output (because it exceeds the post size limit):
http://pastebin.com/gj76gzL1

The hardware is a Dell R710, which includes an onboard four-port Broadcom NIC that is not used. I have two Emulex OneConnect cards with two ports each, connected to a Nexus switch...

Is it the right driver?
 
Please post the logs in the forum (to keep them archived).
 
Woohoo! My bridge works (almost; see the end of the post)... My problem was an issue with the be2net driver: http://forum.proxmox.com/threads/5275-be2net-driver?p=30345#post30345 With kernel 2.6.36.2, my system is now stable!

Here is my /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
  up ifconfig bond0 0.0.0.0 up
  slaves eth4 eth6
  bond_mode active-backup
  bond_miimon 100
  bond_downdelay 200
  bond_updelay 200
  post-up ifconfig bond0 mtu 9000

auto bond0.111
iface bond0.111 inet manual
  up ifconfig bond0.111 0.0.0.0 up
  vlan-raw-device bond0
  post-up ifconfig bond0.111 mtu 1500

auto bond0.222
iface bond0.222 inet manual
  up ifconfig bond0.222 0.0.0.0 up
  vlan-raw-device bond0
  post-up ifconfig bond0.222 mtu 9000

auto vmbr0
iface vmbr0 inet static
  address 1.1.111.69
  netmask 255.255.255.0
  gateway 1.1.111.1
  bridge_ports bond0.111
  bridge_stp off
  bridge_fd 0

auto vmbr1
iface vmbr1 inet static
  address 10.10.10.20
  netmask 255.255.255.0
  bridge_ports bond0.222
  bridge_stp off
  bridge_fd 0
Teaming two Emulex 10 Gb NICs with VLANs... :)
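To sanity-check a setup like this, the standard status interfaces can be consulted (bridge-utils ships with PVE):

Code:
cat /proc/net/bonding/bond0     # bonding mode, MII status, currently active slave
brctl show                      # bridges and their bond0.X ports
ifconfig bond0 | grep -i mtu    # confirm the jumbo MTU took effect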

Now a new problem arises: I have created two KVM guests bridged over vmbr0.
- The VMs cannot ping the gateway 1.1.111.1
- The VMs can ping the host 1.1.111.69
- The host can ping the VMs
- If I open an SSH connection from the host to a VM, after a while it connects. From then on, the VMs can ping the gateway and reach the net. Strange!

Here is the output of "route -n" on both the host and a VM:
Code:
host:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 vmbr1
1.1.111.0       0.0.0.0         255.255.255.0   U     0      0        0 vmbr0
0.0.0.0         1.1.111.1       0.0.0.0         UG    0      0        0 vmbr0


ubuntu1004-vm01:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
1.1.111.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
0.0.0.0         1.1.111.1       0.0.0.0         UG    100    0        0 eth0
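Since the pings only start working after the host initiates a connection, it may be worth watching ARP while reproducing the problem; a minimal sketch with standard tools (assuming tcpdump is installed on the VM):

Code:
arp -n                    # on the VM: is there an entry for 1.1.111.1?
tcpdump -n -i eth0 arp    # on the VM: watch ARP requests/replies during a ping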
 
Hi,
are you sure it's a good idea to mix MTU 9000 and MTU 1500 on the same bond? Maybe it works, but it looks a little bit curious.

Udo
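For what it's worth, the kernel only allows a VLAN sub-interface an MTU up to that of its raw device, so 9000 on bond0 with 1500 on bond0.111 is a legal combination; a quick illustration:

Code:
ifconfig bond0 mtu 9000
ifconfig bond0.111 mtu 1500   # fine: VLAN MTU <= raw device MTU
ifconfig bond0.111 mtu 9001   # rejected: would exceed bond0's MTU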
 
