Setting network bond for VMs only

zecas

Member
Dec 11, 2019
Hi,

I'm testing some network settings on a Proxmox server (PVE 7.3-3) and I'm having trouble getting a network bond to work.

On this server, I have a network card with 4 ports: eno1, eno2, eno3 and eno4.

I added another network card to be used for the management console only: enp65s0.

During the Proxmox install, I set the IP address to 192.168.1.71 on the enp65s0 interface. After the install, the Proxmox management console was accessible at https://192.168.1.71:8006 as expected.

My intention was to bond the 4 eno? ports in balance-rr mode, set up a new Linux bridge on top of that bond, and use it for the VMs' network cards.

So I would end up with one card for the management network, and the other card, with all ports bonded, used by the VMs.

I ended up with the following config in /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

iface enp65s0 inet manual

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.71/24
        gateway 192.168.1.254
        bridge-ports enp65s0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
#vm network
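
For reference, this is how the bond can be checked after applying the config (Proxmox 7 uses ifupdown2, so ifreload -a applies it live; only the interface names are specific to my box):

Code:
# Apply the interfaces file without rebooting (ifupdown2)
ifreload -a

# Bonding driver state: mode, MII status, and which slaves actually joined
cat /proc/net/bonding/bond0

# One-line overview of every link, plus the ports attached to each bridge
ip -br link show
bridge link show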

At the moment, the console is accessible via vmbr0 at https://192.168.1.71:8006, as I stated above.

I have a VM with static IP 192.168.1.100, set up with the Intel E1000 network model and attached to vmbr0 (no VLAN, firewall=1), and it works correctly: it can reach and be reached by other machines.

But the minute I switch its network to vmbr1, it stops being able to reach or be reached by other machines.
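
To narrow down where the packets die, the bond can be sniffed from the host while pinging from the VM. A rough sketch (192.168.1.100 is my VM; tcpdump needs root):

Code:
# Does the VM's ARP traffic even leave through the bond? -e shows MAC addresses
tcpdump -eni bond0 arp

# Same per physical slave, to see which port carries the frames
tcpdump -eni eno1 arp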

I also tried adding the CIDR 192.168.1.72/24 to vmbr1:

Code:
...

auto vmbr1
iface vmbr1 inet static
        address 192.168.1.72/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
#vm network

With the following results:
  1. Other machines on the network can reach both 192.168.1.71 and 192.168.1.72;
  2. Still no way to get the network working on the VM when it is switched to vmbr1;
  3. The management console is also reachable at https://192.168.1.72:8006, which was not my intention (see the note just below).
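
Regarding point 3: as far as I understand, the Proxmox web proxy (pveproxy) listens on all host addresses, not just the one chosen at install time, so any IP added to any bridge will also serve the GUI. That can be confirmed with:

Code:
# pveproxy binds to all addresses, hence :::8006 in the output
ss -tlnp | grep 8006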

The VM is defined with the "Intel E1000" network model; could this be the problem?

I'm running out of ideas to solve this. I've been looking at several tutorials online in search of any tip, but nothing so far.

Any idea what could be wrong?

Thank you.
 
After a few more tests, I tried changing the bond mode from balance-rr to balance-alb, and ended up with the following config in /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

iface enp65s0 inet manual

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode balance-alb

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.71/24
        gateway 192.168.1.254
        bridge-ports enp65s0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
#vm network
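
After reloading, the active mode can be confirmed from sysfs:

Code:
cat /sys/class/net/bond0/bonding/mode
# expected output: balance-alb 6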

With this setup, I changed the test VM's network to vmbr1 and it was finally able to communicate with other machines.

My initial idea was to use balance-rr mode so that all Ethernet ports would be used equally and the network would be fault tolerant (from the individual ports' perspective, as they are still on the same network card).

Moving to balance-alb made it work, but I still don't know whether this mode is actually better.

From the Proxmox Network Configuration documentation:

Code:
Round-robin (balance-rr):
Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.

Adaptive transmit load balancing (balance-tlb):
Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Adaptive load balancing (balance-alb):
Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.

It seems that balance-alb distributes outgoing traffic across the bond ports according to their current load, so all ports are used more or less evenly. For incoming traffic, instead of receiving everything on a single port, it rewrites the hardware address in ARP replies so that different peers send their traffic to different bond ports. Would it be less performant than balance-rr?
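
To see how evenly the load is actually spread, the per-slave counters can be compared while the VM pushes traffic (a sketch, using the interface names from my box):

Code:
# RX/TX byte and packet counters per slave; compare eno1..eno4 under load
ip -s link show eno1
ip -s link show eno2

# The bonding driver also reports per-slave status and link failure counts
cat /proc/net/bonding/bond0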

I still have no clue why balance-rr didn't work, or what setting I was missing. Can anyone point out what I was doing wrong? Changing only the bond mode made it work, which is strange...
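
One hypothesis, based on the kernel bonding documentation: balance-rr transmits frames with the same source MAC out of every slave in turn, so it normally requires the switch ports to be grouped into a static link aggregation (EtherChannel); without that, the switch keeps re-learning the bond's MAC on different ports and may drop or misdeliver frames. A way to look for that symptom (a sketch; run on the host while the VM pings its gateway):

Code:
# -e prints the Ethernet header; watch whether ARP replies ever come back
tcpdump -eni bond0 arp or icmp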

Thank you.
 
