Single Cluster Link to LAG Cluster Link

Haider Jarral

Well-Known Member
Aug 18, 2018
Hello experts,

I am using a 4-node cluster running 5.x. When I initially set it up, I used a single link for cluster communication. Now I intend to change that to a LAG to get more bandwidth for cluster communication.

From what I understand, all I need to do is change the single link to a LAG, keep the same IP, and reboot the server.

Is that it, or should I cater for anything else?
 
I am sorry, I must have used the wrong term. When I originally designed this cluster, I just used 1G network links for the cluster. Now I have enough VMs that I anticipate cluster communication needs more bandwidth, since my failover time for VMs is ~3 minutes.

My idea is that if I change that 1G link to a 2G LAG, I will get better failover time and faster convergence. Please correct me if I am wrong. I read in other threads that the higher the cluster bandwidth, the faster VMs recover in case of node failure.
 
OK, you need more speed on the migration network.

If you use LACP you will eventually benefit. The catch is that the LACP hashing algorithm works on source and destination addresses at layer 2/3,
so if you migrate from node A to node B you will always take the same link.
If you also migrate to node C, the likelihood is good that you will use the other link.
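A rough illustration of why a fixed node pair always maps to the same link: the layer2+3 policy hashes the source/destination addresses of each flow onto one of the slaves. This is a simplified sketch, not the exact kernel formula, and the host-byte values are made up:

```shell
# Simplified sketch of a layer2+3 transmit hash (NOT the exact kernel
# algorithm): the chosen slave index depends only on the address pair.
pick_slave() {
    local src=$1 dst=$2 num_slaves=2
    echo $(( (src ^ dst) % num_slaves ))
}
pick_slave 116 117   # node A -> node B: always the same slave
pick_slave 116 118   # node A -> node C: may hash to the other slave
```

So a single A-to-B migration never exceeds one link's speed, but traffic to several peers spreads across both.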
 
Thank you for the confirmation.

So the procedure is: create the LAG via the GUI, use the same IP, and just reboot. That should do it, right? Nothing extra?
 
So the procedure to do that is creating LAG via gui, use same IP
Yes, if you have a dedicated NIC for the cluster now.
It would be easier to tell if you post your config. Please mask the IPs; they are not required.

Also, should the master node be rebooted at the end or at the start?
PVE uses a multi-master approach.
There is no dedicated master; you can start with whichever node you want.
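Since there is no fixed master, the main thing to watch between node reboots is that the cluster stays quorate. On a real node you would simply run `pvecm status`; the snippet below just greps a made-up sample of that output, so the values shown are assumptions:

```shell
# On a live node: pvecm status
# Here we grep a fabricated sample of that output for the quorate flag.
sample='Quorum information
------------------
Nodes:     4
Quorate:   Yes'
echo "$sample" | grep -q 'Quorate:.*Yes' && echo "quorate"
```

If the cluster is not quorate, finish troubleshooting before rebooting the next node.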
 
Here is my current network config:

Code:
auto lo
iface lo inet loopback

auto eno3
iface eno3 inet manual

iface eno1 inet manual

iface eno2 inet manual

auto eno4
iface eno4 inet manual

auto vmbr0
iface vmbr0 inet static
        address 184.X.X.156
        netmask 255.X.X.248
        gateway 184.X.X.153
        bridge-ports eno4
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.1.116
        netmask 255.255.255.0
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        pre-up ip link set eno3 mtu 9000

auto vmbr2
iface vmbr2 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr3
iface vmbr3 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        post-up ip route add 10.10.10.0/24 via 192.168.1.1


I intend to leave one NIC on its public IP for management.
I want to make a LAG of two links on 192.168.1.0/24 for the cluster.
I want to use one dedicated NIC for the VMs, giving them local IPs in the 192.168.10.0/24 subnet.
 
Can you tell me which bridge is for which purpose?

And which NICs do you want to use for the bond?
 
vmbr0 is for management

vmbr1 is used for cluster comm

vmbr2 is used to give private IPs to VMs

For the LAG, I'll combine vmbr1 and vmbr3.
 
OK, you do not need bridges just to assign an IP address.

If you use only vmbr2 for VMs, I would change the settings like this.

I assume you would like to use 802.3ad (LACP):

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto eno4
iface eno4 inet static
          address 184.X.X.156
          netmask 255.X.X.248
          gateway 184.X.X.153

# storage network
auto bond0
iface bond0 inet static
          address 192.168.1.116
          netmask 255.255.255.0
          bond-slaves eno2 eno3
          bond-miimon 100
          bond-mode 802.3ad
          bond-xmit-hash-policy layer2+3
          mtu 9000

auto vmbr2
iface vmbr2 inet manual
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094

If you use this config, make sure it corresponds with your settings.
No guarantee this will work in your environment.
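After the reboot it is worth confirming that LACP actually negotiated and both slaves are up. The real check is reading `/proc/net/bonding/bond0`; the snippet below greps a made-up sample of that file, so the interface names and the expected count are assumptions based on the config above:

```shell
# On a live node:
#   grep -E 'Bonding Mode|Slave Interface|MII Status' /proc/net/bonding/bond0
# Fabricated sample of what healthy output looks like:
sample='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: eno2
MII Status: up
Slave Interface: eno3
MII Status: up'
# One "MII Status: up" for the bond itself plus one per slave -> expect 3:
echo "$sample" | grep -c 'MII Status: up'
```

If the mode line does not say 802.3ad, the switch side is probably not configured for LACP.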
 
