LACP two 40 Gbit/s Ethernet ports, or 40 Gbit/s Ethernet + 56 Gbit/s InfiniBand?

Nathan Stratton

I am upgrading a 16-node cluster with 2 NVMe drives and 3 SATA drives per node used for Ceph. My network cards are Mellanox MCX354A-FCBT, which have 2 QSFP ports that can each be configured as InfiniBand or Ethernet. My question is how best to utilize the two ports. My options are:

1) LACP bond into a vPC spanning two Cisco Nexus 3132Q switches

2) eth0 into a Cisco Nexus 3132Q switch and ib1 into a Mellanox MSX6036F switch

The second option gives me 56 Gbit/s InfiniBand dedicated to Ceph, but InfiniBand RDMA support in Ceph is new and I am not sure it's worth it.
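
For reference, on these ConnectX-3 VPI cards the port type can be switched with mlxconfig from the Mellanox Firmware Tools. A minimal sketch; the /dev/mst device path is just an example, check "mst status" for yours:

Code:
# start the Mellanox software tools service so the /dev/mst devices exist
mst start
# set both ports to Ethernet (LINK_TYPE values: 1 = IB, 2 = ETH, 3 = VPI auto-sense)
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
# the new port type takes effect after a reboot or driver reload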
 
Hi,

Proxmox does not support RDMA for Ceph at the moment.
Upstream Ceph's RDMA messenger is working, as far as I know, but it is not integrated into Proxmox.
InfiniBand reduces latency, which increases Ceph performance.

But since scenario 2 mixes Ethernet and InfiniBand, this could be a problem [1]: if Ceph uses RDMA at all, you would have to use RDMA (RoCE) on the Ethernet side too.
There are approaches for RDMA bonding, but I don't know whether they work with Ceph [2].

1.) https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet
2.) https://mymellanox.force.com/mellan.../bonding-considerations-for-rdma-applications
 
Thank you very much for your reply. It sounds like bonding the two 40 Gbit/s Ethernet interfaces and not using RDMA is my best bet.
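
For anyone finding this thread later, a minimal sketch of what that LACP bond could look like in /etc/network/interfaces on a Proxmox node. The NIC names (enp1s0, enp1s0d1) and the address are placeholders for your own:

Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp1s0 enp1s0d1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.11/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

The switch side needs a matching port-channel in LACP active mode, spread across the two Nexus switches as a vPC.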
 
