Simpler bond + bridge, with 3 NICs

Everyone's asking about multi-node bond/bridge/VLAN stuff, but I will just have 3 NICs.

1 NIC on the MOBO (eth0) and 2 on a dual-port PCI-X card (eth1, eth2).

What I'd like is for all 3 to connect to my single router/switch (neither can do fancy managed switching, so no 802.3ad stuff).
I'm under the impression that we must have a vmbr0 to connect the console and have VMs connect to the network.

Could I bond all 3 and then have them run through vmbr0?
If not,
Could I bond the 2 NICs on the PCI-X card and have them be the VM bridge, with eth0 as the console-use-only NIC?

Could this all be done in the console GUI, or do I need to make a manual interfaces file?
 
I'm not a networking wizard, and I have the fancy equipment you're talking about, but this should be doable depending on the mode used.

Section below taken from "http://www.linuxhorizon.ro/bonding.html". Have a look at the last two options and note the "does not require any special switch support" part. I'm not at my console and can't check, but I know the network bond options in the GUI have those. Just add the ports you want (e.g. eth0, eth1, etc.) to a bonding group with one of those modes, and then attach the bond# to vmbr0. It's all doable in the interface. The worst thing that could happen is you have to go into the command line and remove the bond# from vmbr0 to get the web interface back.

Hope this helps.


mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.

mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

mode=5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

mode=6 (balance-alb)
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
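
If you ever want to double-check which mode a bond actually came up in (and which slaves it grabbed), the kernel exposes that under /proc. A minimal check, assuming the bond ends up named bond0:

Code:
# shows the bonding mode, MII status and the slave interfaces the kernel is actually using
cat /proc/net/bonding/bond0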
 
Both things you mentioned (a 3x bond, or a 2x bond on a new vmbr plus a standalone NIC for vmbr0) are possible to do straight from the Proxmox web GUI; the server just needs a restart after editing, unless you edit it manually from the CLI.
By default Proxmox uses mode=4 (802.3ad).

Creating bonds with NICs that use different drivers (e.g. mixing Intel with Realtek) can act weird, so I would not do it. Even mixing different drivers from the same vendor can act up, like mixing igb with e1000.

Also, a port-channel with 3 ports doesn't load-balance evenly over all 3 NICs (at least in the Cisco world): the hash only produces 8 buckets, and 8 can't be split evenly across 3 links, so one link ends up carrying less traffic. See: http://www.cisco.com/en/US/tech/tk389/tk213/technologies_tech_note09186a0080094714.shtml

Number of ports in the EtherChannel    Load balancing
8                                      1:1:1:1:1:1:1:1
7                                      2:1:1:1:1:1:1
6                                      2:2:1:1:1:1
5                                      2:2:2:1:1
4                                      2:2:2:2
3                                      3:3:2
2                                      4:4
 
I guess I'm asking specifically which has to be done first and which must be slaved to which??
I'm guessing..
1) build bond0 with eth1 and eth2
2) build vmbr0 with bond0 as the slave
3) leave eth0 active and auto as the console interface

I need my host on a static address somewhere in 192.168.1.2-.9, and my VMs on static addresses starting in the .10-.50 range.
 
I don't really know what you mean by "console interface"? The CLI? The CLI is not bound to a specific network interface...
Management is done on the IP configured for vmbr0.

For your networking questions, you could do it like this.

Create a bond0 interface using the interfaces on your dual-port NIC, most likely eth1 & eth2.
Now, if you plan to have the rest of your VMs on vmbr0, you could bridge vmbr0 with the bond0 interface (instead of the default eth0).
As you don't have any smart switch, I think you need to use the balance-rr mode; the load balancing is then done server-side and no smart switch is required.

You can also create vmbr1 and use that with bond0 instead, to separate your VMs from your management network (as you are not using any VLANs), roughly as sketched below. If you go this route you can leave eth0 bridged with vmbr0 (the default).

If you just take a look in the web GUI, it's pretty self-explanatory.
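
For what it's worth, a minimal sketch of that second layout in /etc/network/interfaces might look roughly like this (assuming eth1/eth2 are the dual-port card, balance-rr as suggested above, and placeholder addresses from your range):

Code:
# management stays on the onboard NIC through vmbr0
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.4
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# the dual-port card gets bonded; the bond itself carries no IP
auto bond0
iface bond0 inet manual
    slaves eth1 eth2
    bond_miimon 100
    bond_mode balance-rr

# VM traffic rides vmbr1, which is bridged onto the bond
auto vmbr1
iface vmbr1 inet manual
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0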
 
Thanks for the help, greatly appreciated. I haven't yet installed Proxmox on my new host, so I'm trying to figure out what I'll need to do to get my home server back online ASAP. I've only tried running Proxmox inside VirtualBox so far.
 
I guess I'm asking specifically which has to be done first and which must be slaved to which??
I'm guessing..
1) build bond0 with eth1 and eth2
2) build vmbr0 with bond0 as the slave
3) leave eth0 active and auto as the console interface

Yes, bond your interfaces and add the bond (bond0) to vmbr0. You can add all the interfaces if you wish; you will remove the configuration for eth0 if you include it in the bond. Whatever IP address you assign to vmbr0 will be where you access your web interface and where your VMs' traffic will pass.

So...
1) Build bond0. Don't give it an IP address; just add the interfaces you want bonded and set the mode. The interfaces themselves will also be left unconfigured.
2) Add bond0 to vmbr0 and give vmbr0 the IP address, subnet mask and other settings you want (rough sketch below).
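
If it helps to see those two steps as text, here's a rough sketch of what they end up looking like in /etc/network/interfaces (addresses are just examples from your .2-.9 range; the mode is whichever non-switch-assisted one you settle on):

Code:
# 1) the bond groups the physical ports and carries no IP of its own
auto bond0
iface bond0 inet manual
    slaves eth0 eth1 eth2
    bond_miimon 100
    bond_mode balance-alb

# 2) vmbr0 gets the IP/netmask/gateway and bridges onto the bond
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.4
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0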

I need my host on a static address somewhere in 192.168.1.2-.9, and my VMs on static addresses starting in the .10-.50 range.
In a bonding setup, your system will show up with 1 IP. So even if you have 3 interfaces in the bond, it consumes and uses only one IP address. Give your vmbr0 the address you want from 192.168.1.2-.9. The VMs themselves don't care about the bond or anything else; they just see the network, and you give them whatever addresses you want.
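
For example (assuming a Debian-style guest, purely illustrative), a VM's own static config from the .10-.50 range is just the normal guest-side setup and never mentions the bond at all:

Code:
# inside the guest, not on the Proxmox host
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1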
 
OK so I had some issues trying to get this to work. I made a few attempts with settings, while always keeping a working backup of the interfaces file.

Noteworthy:
- Using consumer-grade networking gear.
- Using a dual-NIC card in my Proxmox server as well as the mobo NIC.
- All 3 cables from the server go to my router, which also fans out to 2 network switches.
- Everything is gigabit.

Here's my interfaces file now..

Code:
# network interface settings
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual

auto bond0
iface bond0 inet manual
slaves eth1 eth2
bond_miimon 100
bond_mode balance-alb

auto vmbr0
iface vmbr0 inet static
address xxx.xxx.1.4
netmask xxx.xxx.xxx.0
gateway xxx.xxx.1.1
bridge_ports eth0
bridge_stp off
bridge_fd 0

auto vmbr1
iface vmbr1 inet static
address xxx.xxx.1.5
netmask xxx.xxx.xxx.0
bridge_ports bond0
bridge_stp off
bridge_fd 0

(Attached screenshot: prox-bonded-net.PNG)


The key was making sure the bridges and the bond were all set to "autostart" in the web GUI (that's what puts the "auto" lines in the interfaces file).

The problem now is that the VMs within vmbr1 can't ping anything.

What am I missing?
 
Well, some things to check...

Are your VMs getting a proper IP address, subnet mask, gateway and DNS settings?
Can you ping other VMs running on vmbr1?
Can you ping other computers on your internal network?

Also...
Have you tried other bonding modes on bond0?

You may also try removing the IP address and subnet mask from vmbr1, as it shouldn't really need an address. I could always be wrong on that, but give it a try; you can always give it an address again later.
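
If you want to poke at it from the CLI while testing, a couple of read-only checks (interface names taken from your config above) usually show whether the bond and bridges came up the way you expect:

Code:
# which mode the bond is really running and which slaves it picked up
cat /proc/net/bonding/bond0

# which ports are attached to each bridge
brctl show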
 
OK, really delayed reply, but I'm reviving this thread with success.

I took another look at the changes to do and the following setup is working..

Code:
/etc/network# cat interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

#initiate bonded hardware ports
auto bond0
iface bond0 inet manual
    slaves eth1 eth2
    bond_miimon 100
    bond_mode balance-alb

#bridge eth0 to vmbr0
auto vmbr0
iface vmbr0 inet static
    address  x.x.1.4
    netmask  255.255.255.0
    gateway  x.x.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

#bridge bond0 (eth1 & eth2) to vmbr1
auto vmbr1
iface vmbr1 inet static
    address  x.x.1.6
    netmask  255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

So as it turns out the IP info needs to be on the bridges, and not the bonds.
+3 networking for me.
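
A quick sanity check of that (using the names from the config above) is to look at where the addresses actually live:

Code:
# the bridges should carry the addresses; the bond itself should show none
ip addr show vmbr0 | grep inet
ip addr show bond0 | grep inet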
 

Hi,
I have done the same configuration on one of my PVE nodes. All interfaces and the bond are showing in the UP state, but I still cannot get a ping response through the bonding interface. I am getting a ping response through the physical interface. Can you please help me out in this regard?
 
