4 GbE card + 1 Mobo Eth -> 1 bond0 + 1 management lan?

C-Fu

Member
Apr 13, 2018
How would I go about doing this? I'm new to this btw.

My use case is this: have multiple VMs use bond0 for fault tolerance and load balancing (if one VM saturates one GbE line, the other VMs will automagically use the other bond0 slaves), and use the onboard Ethernet exclusively for management.

Question 1: Would this setup be ideal? Or is there a better way? Like dedicated GbE port per VM?
Question 2: Do I need to add bond0 to vmbr0's Ports/Slaves? or Create vmbr1 with bond0 as the slave? A bit confused as to how bond0 would get the IP.

This is my current setup:
(attached screenshot: 1618340742614.png)
ens2f* are the interfaces of the 4-port Intel GbE card; enp6s0 is the onboard (management) port. All LAN ports go to one switch, which connects to the router.

Thanks in advance!
 

Attachments

  • 1618340725672.png
Question 1: Would this setup be ideal? Or is there a better way? Like dedicated GbE port per VM?
If you only have four VMs you could also dedicate one NIC to each VM, but in reality a bond is the way to go, since it also gives you fault tolerance. And how often does a single VM constantly saturate a Gbit link, anyway?

Question 2: Do I need to add bond0 to vmbr0's Ports/Slaves? or Create vmbr1 with bond0 as the slave? A bit confused as to how bond0 would get the IP.
If you want to use enp6s0 solely for management, then you can assign the host's main IP address to that interface and it will not be used for any VM.
The four-NIC bond then becomes the bridge port for vmbr0, but the host gets no IP address there.

The VMs that use vmbr0 as their NIC get their address from your dhcp server, as usual.
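To make that concrete, here's a minimal sketch of how this could look in /etc/network/interfaces. The addresses are placeholders you'd adapt to your network; the interface names are the ones from your screenshot description:

```
# Management: the host's IP lives here, no VMs attached
auto enp6s0
iface enp6s0 inet static
    address 192.168.0.254/24
    gateway 192.168.0.1

# Bond over the four Intel ports, no IP on the bond itself
auto bond0
iface bond0 inet manual
    bond-slaves ens2f0 ens2f1 ens2f2 ens2f3
    bond-mode balance-rr
    bond-miimon 100

# VM bridge: bond0 is the bridge port, still no host IP here
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```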

Keep in mind, though, that balance-rr is not the optimal way of bonding NICs, since it often forces retransmission of packets that arrived out of order. With an LACP (802.3ad) bond you get the same fault tolerance and, most of the time, better throughput. The only problem that can occur is that several machines get hashed onto the same slave, leaving the other slaves unused. You can tweak that through the hashing algorithm, both on the Linux side and on the switch side, though.
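For reference, an LACP bond stanza would only differ in the mode plus the hash policy you can tune. This sketch assumes your switch has an LACP-capable LAG configured on those four ports:

```
auto bond0
iface bond0 inet manual
    bond-slaves ens2f0 ens2f1 ens2f2 ens2f3
    bond-mode 802.3ad
    bond-miimon 100
    # layer3+4 hashes on IP address + port, so different
    # flows can be spread across different slaves
    bond-xmit-hash-policy layer3+4
```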
 
If you want to use enp6s0 solely for management, then you can assign the host's main IP address to that interface and it will not be used for any VM.
Yes this is what I want, but I have no idea what to do/change.

Would this be it then?
1618384469416.png
I can't add a gateway to enp6s0 (management) because *I THINK* the gateway should be on vmbr0, so all VMs will get DHCP IPs. But management won't get the 0.254 IP without a gateway, right?

With an LACP (802.3ad) bond you have the same fault tolerance and most of the times better throughput.
As you might've guessed, I have no idea what I'm doing lol. I don't have a switch with LACP, so I suppose I gotta make do with balance-rr.

Thanks in advance!
 
The gateway is relevant for the host's own connections, so you should put it on enp6s0. The bridge will work fine without it.
Keep in mind that you have to set up a LAG on your switch for balance-rr as well.
Without any config change on the switch you could try balance-alb or balance-tlb instead.
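As a sketch, a balance-alb bond stanza would look like this (same assumed interface names as before; balance-alb generally doesn't need a LAG configured on the switch side):

```
auto bond0
iface bond0 inet manual
    bond-slaves ens2f0 ens2f1 ens2f2 ens2f3
    # balance-alb balances receive traffic via ARP negotiation,
    # so no special switch support is required
    bond-mode balance-alb
    bond-miimon 100
```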
 
