Cluster quorum fails after 802.3ad NIC bonding

bnapalm (Guest)

Hello!

I am trying to set up NIC bonding on two servers, A and B, in a cluster. Both hosts were already in the cluster and working fine. After creating the bond0 interface, re-configuring the network and rebooting, the cluster quorum keeps timing out. All other network traffic (both VE management and VM connections) seems to be working fine.
I tested multicast with ssmping, and it seems that if I run asmping on host B (with host A running the daemon), I get both unicast and multicast replies back. However, when running asmping on host A (with host B running the daemon), I only get unicast replies back, and sometimes no replies at all. I hadn't tested multicast before making the network changes, but I assume it was working, since the cluster was working. The only things I changed were creating the bond0 interface and using it for the VLAN tags, e.g. changing "bridge_ports eth0.1234" to "bridge_ports bond0.1234".
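For reference, the test setup was roughly the following (a sketch; the group address is only an example and the exact invocation may differ slightly between ssmping versions):

Code:
# on the "daemon" host (e.g. host A)
ssmpingd

# on the other host, ping host A's address over an ASM multicast group
asmping 224.0.2.1 10.10.10.187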

Here is my network config on host A:

Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
        address  10.10.5.187
        netmask  255.255.255.0

auto eth2
iface eth2 inet static
        address  10.10.6.187
        netmask  255.255.255.0

iface eth3 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth3
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address  10.10.10.187
        netmask  255.255.255.0
        gateway  10.10.10.1
        bridge_ports bond0.1742
        bridge_stp off
        bridge_fd 0

auto vmbr10
iface vmbr10 inet manual
        bridge_ports bond0.10
        bridge_stp off
        bridge_fd 0

auto vmbr1696
iface vmbr1696 inet manual
        bridge_ports bond0.1696
        bridge_stp off
        bridge_fd 0

And for host B:
Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth3
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address  10.10.10.185
        netmask  255.255.255.0
        gateway  10.10.10.1
        bridge_ports bond0.1742
        bridge_stp off
        bridge_fd 0

auto vmbr10
iface vmbr10 inet manual
        bridge_ports bond0.10
        bridge_stp off
        bridge_fd 0

auto vmbr1696
iface vmbr1696 inet manual
        bridge_ports bond0.1696
        bridge_stp off
        bridge_fd 0

Please disregard interfaces eth1 and eth2 on host A; they are for shared storage, which is not configured yet (only the interfaces themselves are set up, and only on host A).
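As a sanity check that the bridges picked up the tagged bond ports after the change, something like this can be run on either host (standard bridge-utils / iproute2 commands; output omitted):

Code:
# each vmbr* bridge should list its bond0.<VLAN> sub-interface as a port
brctl show

# the VLAN sub-interfaces should exist on top of bond0
ip link show | grep "bond0\."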

I understand that this might be a problem on the linked switches, but 802.3ad is definitely configured there, and our network admin says the ports are configured identically for both hosts (meaning any networking issue should affect both nodes, or neither). Maybe someone can suggest something to pinpoint the problem, either on the switch or on the host(s)?
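For what it's worth, the bond's LACP state and the cluster quorum state can be inspected from the host side; a minimal sketch, assuming the standard Linux bonding driver and the Proxmox VE pvecm tool:

Code:
# 802.3ad status of the bond (aggregator IDs, partner MAC, per-slave link state)
cat /proc/net/bonding/bond0

# cluster membership / quorum status as seen by Proxmox VE
pvecm status

# multicast group memberships on the bridge carrying the cluster traffic
ip maddr show dev vmbr0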
I would really appreciate any suggestions on this problem.
Thank you in advance!
 
I managed to solve my problem. Since we have a mixed setup of both Proxmox and VMware ESXi hosts, and VMware still doesn't support LACP, we had switched LACP off on the switch ports and used static link aggregation instead. This worked fine for ESXi, and I thought it would work for Proxmox as well. It turns out I was wrong: the multicast packets had problems (as described above). Re-enabling LACP on the switch ports fixed this.
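For anyone running into the same thing: the switch-side change simply put the ports back into LACP negotiation. As a rough illustration only (Cisco IOS-style syntax with made-up port numbers; our actual switch CLI may differ):

Code:
interface range GigabitEthernet1/0/1 - 2
 ! what we had while catering to ESXi: static aggregation, no LACP negotiation
 ! channel-group 1 mode on
 ! what the Proxmox bond_mode 802.3ad actually needs: LACP negotiation
 channel-group 1 mode active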
 
