[SOLVED] Mellanox card questions

RobFantini

Hello

We've got to purchase 6 cards.

From reading forum posts, ConnectX cards should be used.

ConnectX-3 cards reportedly have issues with more than 126 VLANs.

ConnectX-4 & 5 are OK . However expensive.

My question:
are the earlier ConnectX and ConnectX-2 models OK to use?



PS:
These cards are for a Ceph network, and we use Topspin 120 switches.
 
Also, we have decided to try to get our Intel 10G Ethernet cards working rather than use IB.

the reason is this:

Per the Mellanox Linux driver release notes, ConnectX-2 cards are not supported.

Say we invest in a ConnectX-4 setup: refurbished cards plus new cables would cost around $5,000, which is a lot for a mid-sized company. It would be a good investment if I were certain there would still be kernel drivers for the cards in 5 years. Since I am not certain, we will not use IB.
 
Mellanox has the Ethernet part of the code in the kernel.
Some IB modules are out-of-tree.

I am a little confused about that, so I have a couple of questions.

When a ConnectX-* card is used on a Proxmox node, is it always put into 'ethernet' mode?

If I use an InfiniBand switch with a subnet manager, will the Mellanox IB code be needed?

Also
I've been trying to research the differences between Mellanox, Solarflare, and Chelsio cards.
The marketing language makes it hard to see the differences.
I am just looking for excellent Ceph storage system hardware, so that backups, rsyncs, and other things keeping the kernel busy in no way cause Ceph issues like 'slow requests are blocked'.
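(For what it's worth, we watch for those warnings with the standard Ceph status commands; 'osd.0' below is just an example ID:)

  # show which OSDs are currently reporting blocked/slow requests
  ceph health detail

  # on the node hosting a suspect OSD, dump its recent slow operations
  ceph daemon osd.0 dump_historic_ops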
Solarflare has 'Kernel Bypass'. [In case you are not familiar, see the short video at https://www.solarflare.com/]

Do you happen to know if ConnectX cards have the same type of feature?
 
When a ConnectX-* card is used on a Proxmox node, is it always put into 'ethernet' mode?
No, you have to set the mode with the mlxconfig tool.
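Something along these lines with the Mellanox MFT tools should do it (a sketch; the /dev/mst device name is just an example, check yours with 'mst status'):

  # start the Mellanox Software Tools service and find the card
  mst start
  mst status

  # query the current port configuration (device path is an example)
  mlxconfig -d /dev/mst/mt4115_pciconf0 query

  # set both ports to Ethernet (1 = IB, 2 = ETH), then reboot
  mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2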

If I use an InfiniBand switch with a subnet manager, will the Mellanox IB code be needed?
Yes, because you use IP over IB, which is basically IB and not Ethernet.
The Ethernet-like layer is done in the kernel on top of the NIC, but on the network itself you use IB.
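A rough sketch of what that looks like on a node (assuming a Debian-based install and that the interface comes up as ib0):

  # load the IPoIB module and check the IB interface
  modprobe ib_ipoib
  ip link show ib0

  # a subnet manager must run somewhere on the fabric; if the switch
  # does not provide one, opensm can run on one of the hosts
  apt install opensm
  systemctl status opensm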

solarflare has 'Kernel Bypass'
Most vendors support DPDK.
But Proxmox VE has no built-in support for it.
 
Wolfgang, thank you for pointing out DPDK. I did not know what it was.

https://www.dpdk.org/ :
"DPDK is the Data Plane Development Kit that consists of libraries to accelerate packet processing workloads running on a wide variety of CPU architectures."

And as you noted, many vendors have it available.

https://software.intel.com/en-us/networking/dpdk
"The DPDK is a set of libraries and drivers for fast packet processing. You can convert a general-purpose processor into your own packet forwarder without having to use expensive custom switches and routers."

So I'll check if DPDK will work with the Intel 10G cards we already use.
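If anyone else wants to check their own cards, something like this should work (a sketch; dpdk-devbind.py ships with DPDK, and the PCI address below is an example):

  # list NICs and the drivers they are currently bound to
  dpdk-devbind.py --status

  # Intel 82599/X520 10G cards normally use the ixgbe kernel driver;
  # for DPDK, bind the device to vfio-pci so the poll-mode driver can take over
  modprobe vfio-pci
  dpdk-devbind.py --bind=vfio-pci 0000:03:00.0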
 
Based on the good history others have had with Mellanox, we ended up ordering refurbished ConnectX-4 cards. Their pre-sales tech support was very good. I am sure that, once set up correctly, these will eliminate the possibility of the network hardware being responsible for Ceph slow requests.
 
I just wanted to add that using an SFP+ switch with ConnectX-* cards has made our Ceph storage much more stable and far faster than using 10G Ethernet switches and NICs. Ceph's OSD recovery after rebooting a node is 10 times faster.
 
