I'm in the process of setting up a new cluster, and it seems I have a few options for configuring the networks. The servers each have a total of 4 NIC interfaces: 2 of them are 10Gb SFP+ bonded in LACP mode, serving as the link for vmbr1, where I then tag the VLAN on the VM interface connected to vmbr1. The 2 onboard Intel NICs are connected to untagged ports on the switch, on separate VLANs: 99 for management traffic and 45 for cluster traffic.
Current layout:
Intel onboard:
enp4s0 = 1Gb, labeled "MGT" (management interface on management VLAN 99)
enp3s0 = 1Gb, labeled "2.5Gb" (link for corosync traffic on VLAN 45)
Intel X710:
bond0:
enp1s0f0: = 10Gb
enp1s0f1: = 10Gb
bond0 is essentially a trunk port, LACP-bonded to the switch (a MikroTik CRS317), where the corresponding switch ports are also configured as trunks.
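For clarity, here is a sketch of how that layout might look in `/etc/network/interfaces` (ifupdown2 syntax). Interface names are taken from above; all addresses and the hash policy are placeholders/assumptions, not my actual config:

```
# Management, untagged on VLAN 99 at the switch (address is a placeholder)
auto enp4s0
iface enp4s0 inet static
    address 192.168.99.11/24
    gateway 192.168.99.1

# Corosync, untagged on VLAN 45 at the switch (address is a placeholder)
auto enp3s0
iface enp3s0 inet static
    address 192.168.45.11/24

# LACP bond over the two X710 ports
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

# VLAN-aware bridge; VMs get their VLAN tag on the VM interface
auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```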
My question is - for the sake of reliability - would it serve the stack better to bond the 2 onboard NICs and add VLAN-tagged interfaces on VLAN 45 for my corosync traffic, or just leave it as is with the single NIC? I could also set up a fallback corosync link on a tagged interface in the management VLAN (99 in this case) for redundancy.
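The bonded-onboard alternative I'm describing would look roughly like this (again a sketch with placeholder addresses, and the switch side would need matching configuration):

```
# Hypothetical: bond the two onboard NICs, then run tagged VLAN
# interfaces for management and corosync on top of the bond
auto bond1
iface bond1 inet manual
    bond-slaves enp3s0 enp4s0
    bond-mode active-backup    # or 802.3ad if the switch ports are LACP
    bond-miimon 100

# Management on VLAN 99 (tagged)
auto bond1.99
iface bond1.99 inet static
    address 192.168.99.11/24
    gateway 192.168.99.1

# Corosync on VLAN 45 (tagged)
auto bond1.45
iface bond1.45 inet static
    address 192.168.45.11/24
```

One thing I'm aware of: with this setup both VLANs share the same bond, so a problem affecting the bond itself would take out management and corosync together.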
The cluster is not yet formed, so I have yet to do any testing. Since I only want to do this once, I thought I would ask for input on what works best for my use case. I know the cluster network needs its own interface on a separate VLAN dedicated to that traffic, but is there any benefit to bonding the corosync links for added stability, or are multiple links (management VLAN and corosync VLAN) good enough?
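As I understand it, corosync's kronosnet transport can handle multiple links natively, which would cover the "fallback link" idea without bonding. A hypothetical corosync.conf fragment (cluster name, node names, and addresses are placeholders) might look like:

```
# Hypothetical: link0 on the dedicated corosync network (VLAN 45),
# link1 as fallback over the management network (VLAN 99).
totem {
  cluster_name: mycluster
  version: 2
  link_mode: passive
  interface {
    linknumber: 0
    knet_link_priority: 10   # preferred link (VLAN 45)
  }
  interface {
    linknumber: 1
    knet_link_priority: 5    # fallback link (VLAN 99)
  }
}
nodelist {
  node {
    name: pve1
    nodeid: 1
    ring0_addr: 192.168.45.11
    ring1_addr: 192.168.99.11
  }
}
```

In passive mode, as far as I know, traffic uses the highest-priority available link and fails over to the other if it goes down.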
Open to ideas.