Mellanox - iSER/Ceph

matthew-ka

I'm having issues getting iSER over iWARP to work properly with Proxmox (or any other OS), and Intel has been very unhelpful in getting me to a working state. I do need the added bandwidth and lower latency that RDMA offers, so I'm looking into alternatives.
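For context, the initiator side of what I'm after is just open-iscsi with the transport switched to iSER. A rough sketch of the steps (the IQN and portal address are placeholders for my own setup):

Code:
# discover targets on the portal
iscsiadm -m discovery -t sendtargets -p 192.168.10.10

# switch the node's transport from tcp to iser
iscsiadm -m node -T iqn.2019-10.example:storage.lun1 -p 192.168.10.10 \
    --op update -n iface.transport_name -v iser

# log in over RDMA
iscsiadm -m node -T iqn.2019-10.example:storage.lun1 -p 192.168.10.10 --login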

I'm making this post to get input from other Proxmox users who have successfully gotten iSER and/or Ceph over RDMA working in Proxmox VE 6. I'm looking at Mellanox's ConnectX-5 25GbE adapters and their SN2010 switch. Has anyone had any success getting this equipment to work with the above? If so, any gotchas?
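For the Ceph side, my understanding from the Ceph docs is that the RDMA messenger is still experimental and comes down to a couple of ceph.conf knobs, roughly like this (the device name is a placeholder for whatever ibv_devices reports on your nodes):

Code:
[global]
    ms_type = async+rdma
    ms_async_rdma_device_name = mlx5_0
    # optional: pick the HCA port if the default isn't right
    ms_async_rdma_port_num = 1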
 
I'm sorry for the late response, but I gave up on this thread after a month of no activity.

I was looking to support 3 hosts with multipath failover over iSER disks. The storage needed to be fast enough to take on the workloads from our old infrastructure, which consists of 2 separate HV clusters and a few bare-metal servers. Each of the new hosts uses these disks for file and database storage. The intention was also to leave plenty of room for growth.
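For the multipath piece, I'm just layering dm-multipath over two iSER sessions per host; a minimal /etc/multipath.conf along these lines worked for me (the WWID is obviously a placeholder, and the vendor stanza assumes an LIO/targetcli target):

Code:
defaults {
    user_friendly_names yes
    find_multipaths     yes
}

multipaths {
    multipath {
        wwid  36001405aabbccddeeff00112233445566   # placeholder WWID
        alias iser-lun1
    }
}

devices {
    device {
        vendor               "LIO-ORG"
        path_grouping_policy multibus
        path_checker         tur
        failback             immediate
        no_path_retry        queue
    }
}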

I bit the bullet and purchased some ConnectX-5 cards and a Mellanox Onyx switch. The inbox drivers were mostly sufficient to get RDMA running between my hosts and storage. I did need to install a few OFED-specific items; this was accomplished by installing Mellanox's firmware tools (MFT) and the ofed-kernel utility from their OFED package. The Mellanox software was there to enable LLDP and to set QoS for RDMA; the additional software wasn't necessary to get iSER itself functional.
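In case it helps anyone else, the Mellanox-side pieces were roughly along these lines; the interface name and the priority I use for RDMA traffic (3) are specific to my setup:

Code:
# MFT needs the mst service running before the tools can see the NICs
mst start
mst status

# trust DSCP markings and enable PFC on the RDMA priority (3 here)
mlnx_qos -i enp65s0f0 --trust=dscp
mlnx_qos -i enp65s0f0 --pfc=0,0,0,1,0,0,0,0

# LLDP so the switch and hosts agree on the DCB settings
lldptool set-lldp -i enp65s0f0 adminStatus=rxtx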
 