Need help bonding (my two NICs)

  • Thread starter TheCrwalingKingSnake

TheCrwalingKingSnake

Guest
Hello all,

I just heard about Proxmox and got it up and running on a pretty nice machine here. However, I would like to use the two NICs I have on the server. I have a fresh new install of Proxmox v2.2-24 / 7f9cfa4c.

I'll post a screenshot of my Network tab if it helps. I have the 2nd NIC (eth1) plugged into the network; I have tested that port/cable with a laptop and it connected to the network fine. However, I am wondering if I need to have it auto-start, and whether it should show as active. I tried following the video for bonding ( http://pve.proxmox.com/wiki/Bond_configuration_(Video) ), but I changed the bond to balance-rr instead of active-backup. That left me unable to connect to Proxmox remotely, so I just did a reinstall.

What I would like to do is bond the two NICs together for a faster connection. Sorry if this gets asked a million times, but is there a page where I can see what the bonding options mean (balance-rr, active-backup, balance-xor)? Is bonding even what I want, or is it a bridge?

Any help/suggestions would be appreciated!

Thanks,
-Mark
 

Attachments

  • NICs.png
Update:

I tried setting it up just like in the video (just using active-backup) and it worked, but I would like load balancing (still trying to figure out the difference between rr and XOR).
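For reference, a minimal active-backup bond in /etc/network/interfaces can look like the sketch below (interface names and the primary choice are assumptions; adapt them to your hardware). Active-backup needs no switch support, which is why it "just works" where balance-rr did not:

Code:
# active-backup: eth1 takes over only if eth0 fails
auto bond0
iface bond0 inet manual
	slaves eth0 eth1
	bond_miimon 100
	bond_mode active-backup
	bond_primary eth0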
 
I have the same problem, and my guess is that it is ALB and something to do with the way ALB uses ARP and rewrites the source address.

I haven't a clue how to fix it, though. It would be good to know which modes work, as I am trying to demonstrate Proxmox and get a customer to start using it.

I love the idea of open source: you can try before you buy, which saves a lot of wasted time and expense. So I could really do with a working scenario for Proxmox, and a bonded bridge seems like a common request to me.

http://forum.proxmox.com/threads/12852-Initial-configuration-Bonding-amp-Bridging

Descriptions of bonding modes

Mode 0: balance-rr

Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
Mode 1: active-backup

Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.
Mode 2: balance-xor

XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
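The XOR formula above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the kernel's exact layer-2 hash, and the MAC addresses are made up:

```python
def xor_slave(src_mac: str, dst_mac: str, n_slaves: int) -> int:
    """Simplified sketch of the balance-xor policy: XOR the source and
    destination MAC addresses, then take the result modulo the slave count."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % n_slaves

# Every frame to a given destination hashes to the same slave index,
# so a single flow never gets spread across both NICs.
print(xor_slave("00:11:22:33:44:55", "aa:bb:cc:dd:ee:ff", 2))
```

This is the practical difference from balance-rr: rr alternates slaves packet by packet (more raw throughput for one flow, but packets can arrive out of order), while xor keeps each peer pinned to one NIC.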
Mode 3: broadcast

Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.
Mode 4: 802.3ad

IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.



Prerequisites:

  1. Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
  2. A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

Mode 5: balance-tlb
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.



Prerequisites:

  • Ethtool support in the base drivers for retrieving the speed of each slave.

Mode 6: balance-alb
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
 
You should use 802.3ad, but you also need to configure this on the switch.

/etc/network/interfaces

Code:
# 802.3ad
auto bond0
iface bond0 inet manual
	slaves eth0 eth1
	bond_miimon 100
	bond_mode 4
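
If the goal is for guests to use the bond as well (the "bonded bridge" mentioned earlier in the thread), the bond itself stays address-less and is slaved to the vmbr0 bridge instead. A sketch, assuming eth0/eth1 and example addresses; substitute your own network details:

Code:
# 802.3ad bond slaved to the default Proxmox bridge
auto bond0
iface bond0 inet manual
	slaves eth0 eth1
	bond_miimon 100
	bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
	address 192.168.1.10
	netmask 255.255.255.0
	gateway 192.168.1.1
	bridge_ports bond0
	bridge_stp off
	bridge_fd 0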
 
