LACP two 40 Gbit/s Ethernet, or 40 Gbit/s Ethernet + 56 Gbit/s InfiniBand?

Nathan Stratton

Dec 28, 2018
I am upgrading a 16-node cluster that has 2 NVMe drives and 3 SATA drives used for Ceph. My network cards are Mellanox MCX354A-FCBT with 2 QSFP ports that can each be configured as InfiniBand or Ethernet. My question is how best to utilize the two ports. My options are:

1) LACP into a vPC spanning two Cisco Nexus 3132Q switches

2) eth0 into a Cisco Nexus 3132Q and ib1 into an MSX6036F switch

The 2nd option gives me 56 Gbit/s InfiniBand dedicated to Ceph, but InfiniBand RDMA is new with Ceph and I am not sure it's worth it.
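For option 2 I would reconfigure the ConnectX-3 so port 1 runs Ethernet and port 2 runs InfiniBand, roughly like this with Mellanox's mlxconfig (the MST device path below is just an example, it depends on what "mst status" reports on each host):

Code:
# Rough sketch - /dev/mst/mt4099_pciconf0 is a typical ConnectX-3 path, check "mst status".
# LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet.
mst start
mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=1
# Reboot (or reset the card) for the new port types to take effect.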
 
Hi,

Proxmox does not support RDMA for Ceph at the moment. As far as I know it does work in Ceph itself, though.
InfiniBand will reduce latency, which increases Ceph performance.

But since scenario 2 mixes Ethernet and InfiniBand, this could be a problem. [1]
You would have to use RDMA (RoCE) on the Ethernet side too.
There are approaches for RDMA bonding, but I don't know if they work with Ceph. [2]

1.) https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet
2.) https://mymellanox.force.com/mellan.../bonding-considerations-for-rdma-applications
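
For completeness: if you want to experiment with it outside of what Proxmox supports, upstream Ceph's RDMA messenger is enabled in ceph.conf roughly like this (the device name mlx4_0 is only an example for a ConnectX-3, check ibv_devices on your nodes):

Code:
# Sketch only - not supported by Proxmox; all daemons must use the same messenger type.
[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx4_0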
 
Thank you very much for your reply. So it sounds like bonding the two 40 Gbit/s interfaces and not using RDMA is my best bet.
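
In that case the plan would be an LACP (802.3ad) bond of the two ports in /etc/network/interfaces, roughly like the sketch below; the interface names and address are placeholders for my setup, and the matching port-channel / vPC on the two Nexus 3132Q switches has to be configured for LACP as well.

Code:
# Rough sketch - enp1s0 / enp1s0d1 and the address are placeholders.
auto enp1s0
iface enp1s0 inet manual

auto enp1s0d1
iface enp1s0d1 inet manual

auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves enp1s0 enp1s0d1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100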