CEPH network configuration

towerman · New Member · Jan 22, 2019 · Italy
Hi community,

we have a working cluster with these specs: three nodes, each with the following configuration:
  • 2x 1 GbE NICs for ring0 (switch 1) and ring1 (switch 2)
  • 2x 10 GbE NICs (active-backup bond) to different switches, carrying VM traffic and Ceph traffic on two different VLANs
  • 2x 10 GbE switches (12 ports), non-stackable
[Attachment: pve01 - Proxmox Virtual Environment.jpg]

This way, one port of the 10 GbE bond always sits unused.
Our switches are not stackable; is it possible to change the bond from active-backup to balance-alb in order to get both redundancy and performance?
Is that a supported configuration?

Thanks!
 
Our switches are not stackable; is it possible to change the bond from active-backup to balance-alb in order to get both redundancy and performance?
It is possible to change it; whether performance actually improves is something you will have to test.
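For reference, changing the bond mode in /etc/network/interfaces (Debian ifupdown, as used by Proxmox VE) is a one-line edit. A minimal sketch, assuming the bond is named bond0 and the 10 GbE NICs are eno1/eno2 (hypothetical names — substitute your own):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2     # hypothetical NIC names, check with `ip link`
    bond-miimon 100           # link-monitoring interval in ms
    bond-mode balance-alb     # was: active-backup
```

Apply with `ifreload -a` (ifupdown2) or a node reboot. Unlike 802.3ad/LACP, balance-alb needs no switch-side configuration, which is why it is sometimes considered for non-stackable switches.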

Is that a supported configuration?
The setup is not recommended: don't share your storage traffic with any other traffic on the same physical interface, as this creates interference. In addition, the different bond modes may introduce higher latency and more complexity.

balance-alb or 6
...
A problematic outcome of using ARP negotiation for balancing is that each time that an ARP request is broadcast it uses the hardware address of the bond. Hence, peers learn the hardware address of the bond and the balancing of receive traffic collapses to the current slave.
https://www.kernel.org/doc/Documentation/networking/bonding.txt
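If you do try balance-alb, the kernel's view of the bond can be inspected at runtime. A diagnostic sketch, assuming the bond is named bond0:

```
# Full bond state: mode, active slave, per-slave link status
cat /proc/net/bonding/bond0

# Just the mode line; for balance-alb it reads
# "Bonding Mode: adaptive load balancing (alb)"
grep "Bonding Mode" /proc/net/bonding/bond0
```

Watching this file while pulling a cable on each switch is a quick way to confirm that failover behaves as expected before putting Ceph traffic on the changed bond.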