I am upgrading a 16-node cluster that has 2 NVMe drives and 3 SATA drives per node used for Ceph. My network cards are Mellanox MCX354A-FCBT and have 2 QSFP ports that can be configured as InfiniBand or Ethernet. My question is how best to utilize the two ports. My options are:
1) LACP into a vPC spanning two Cisco Nexus 3132Q switches
2) eth0 into Cisco Nexus 3132Q and ib1 into MSX6036F switch
The 2nd option gives me 56 Gbit/s of InfiniBand dedicated to Ceph, but InfiniBand RDMA support in Ceph is still new and I am not sure it's worth it.
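For what it's worth, even if I skip the RDMA messenger entirely, I could still run option 2 as plain IPoIB and just point Ceph's cluster network at the ib1 subnet, something roughly like this (the subnets below are made up for illustration):

    [global]
    # public/client traffic stays on the 40 GbE port (eth0) into the Nexus
    public_network = 10.10.10.0/24
    # replication/recovery traffic goes over IPoIB on ib1 into the MSX6036F
    cluster_network = 10.10.20.0/24

That would keep the InfiniBand port carrying Ceph replication traffic regardless of whether the RDMA messenger ever becomes something I'd trust in production.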