Native bond vs. vmbr

Tipenso

Member
May 12, 2022
Hi all.

Can anyone tell me the differences/pros/cons between configuring a “native” bond (with the IP address configured directly on the bond interface) and putting the bond inside a vmbr (with the IP address configured on the vmbr interface)?

Thank you.
 
A vmbrN is a virtual switch, while a bondM is only ONE interface!
So I see the logic as N x Ethernet devices --> bondN interface --> vmbrX interface acting as a virtual switch, which can carry my IPv4 addresses and let the Debian kernel handle where the traffic goes.
 
Usually, you want to address the bridge so the interface can be shared among VM instances. Addressing the bond works for a management interface, for example, when bonding NICs. FYI
 
A vmbrN is a virtual switch, while a bondM is only ONE interface!
So I see the logic as N x Ethernet devices --> bondN interface --> vmbrX interface acting as a virtual switch, which can carry my IPv4 addresses and let the Debian kernel handle where the traffic goes.
Of course, I know...
Let me better explain what I mean with an example.
I want to create an LACP bond0 interface from physical interfaces eth0 + eth1 and give it an IP address.
I can accomplish this in 2 ways:
  1. I create bond0 with slaves eth0+eth1 and give it the IP address
  2. I create bond0 with slaves eth0+eth1, then create a vmbr0 interface with slave bond0 and give it the IP address
I would use solution 1, but I see that many prefer 2, and I don't understand the difference, pros and cons.
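In /etc/network/interfaces terms, the two options would look roughly like this (just a sketch: the address, interface names and 802.3ad options are placeholders for the example):

Option 1 - address on the bond:

auto bond0
iface bond0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

Option 2 - address on the bridge:

auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0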
 
Usually, you want to address the bridge so the interface can be shared among VM instances. Addressing the bond works for a management interface, for example, when bonding NICs. FYI
You're right, I forgot to mention that I was referring to interfaces NOT USED for VM networks.
 
I don't understand the difference, pros and cons.

If you just need a bonded interface, say for the management interface, you would most likely want to address the bond. If you want to send VM traffic over the same interface (i.e., share it), use an addressed bridge. It really just depends on what your goal is. In my case, I usually tag all the traffic and bond the trunk interfaces so I can keep all the traffic segmented. FYI
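A sketch of that kind of setup, assuming a VLAN-aware bridge on top of the bond (the VLAN ID and address are just placeholders):

auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr0.10
iface vmbr0.10 inet static
        # management traffic on VLAN 10 (example); VM NICs get their own tags
        address 192.0.2.10/24
        gateway 192.0.2.1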
 
Okay, it's as I assumed. I notice that even in unofficial Proxmox guides there's the same habit I've seen with VMware: using virtual switches even when they're completely unnecessary, which still adds a layer and thus some extra latency.
Thank you all.