Mellanox Network Cards on Proxmox 7.0

Since Mellanox doesn't yet support Debian 11, what are folks doing to get maximum performance out of their Mellanox fiber NICs?

Also, does anyone know how to turn on Direct Memory Access (DMA) for the Mellanox card model CX4121A? This is a dual-port 25Gb SFP28 card.

Thanks in advance
 
No, RDMA is "Remote Direct Memory Access". I'm talking about standard DMA on the PCIe bus, where the card can read from and write to memory directly. While doing firmware upgrades, all of our Mellanox cards reported that DMA was not enabled.
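
A quick way to sanity-check both points from the OS side; the PCI address 3b:00.0 below is a placeholder, find yours first:

    # locate the card
    lspci | grep -i mellanox
    # inspect bus mastering (i.e. PCIe DMA) and the negotiated link
    sudo lspci -vv -s 3b:00.0 | grep -E 'Control:|LnkCap:|LnkSta:'
    # "BusMaster+" on the Control line means DMA is enabled for the device;
    # LnkSta should match LnkCap (x8 at 8GT/s for a CX4121A), otherwise the
    # slot is running the card at reduced width/speed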

Also, since the OFED drivers don't yet exist for Debian 11, is there anything else to be done for performance? How has this affected folks?
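
In the meantime you can at least confirm what the in-box driver reports; a minimal check, with the interface name ens1f0np0 as a stand-in for yours:

    # driver, driver version and firmware version as seen by the kernel
    ethtool -i ens1f0np0
    # expect "driver: mlx5_core" for ConnectX-4 class cards; the
    # firmware-version line should match what you flashed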
 
My MCX311A-XCAT is working fine so far with PVE 7 and Debian 11. Keep in mind that PVE doesn't use the Debian 11 kernel; it ships its own kernel, with its own drivers, based on the Ubuntu LTS kernel. No idea how fast it is, though; I'm basically running it on default configs.
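
If it helps to compare notes, this shows which in-kernel driver is bound, no OFED involved:

    # ConnectX-3 (like the MCX311A) binds mlx4_core/mlx4_en,
    # ConnectX-4 and newer bind mlx5_core
    lspci -nnk | grep -iA3 mellanox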
 
OK, because we swapped out 10Gb Base-T cards for these 25Gb cards and are seeing lower performance. Everything was much snappier with the 10Gb cards, and our Ceph runs entirely on NVMe drives, which should be able to nearly (if not entirely) saturate a 25Gb link. In trying to find the answer, we can't use the Mellanox utilities because they are built for Debian 10 and older, and we can't install the OFED driver for the same reason. The NICs are obviously working, just not with the performance we would reasonably have expected.
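
One possible workaround, in case it's useful: the open-source mstflint package ships in the standard Debian/Proxmox repos, so basic firmware queries work without the full Mellanox toolchain. A sketch, with 3b:00.0 standing in for your card's PCI address:

    apt install mstflint
    # read firmware version, PSID and GUIDs straight from the card
    sudo mstflint -d 3b:00.0 query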
 
I just got an answer back from Mellanox that DMA on their cards is set via the kernel/CPU, so apparently there is nothing to do manually. Yet when we upgraded the firmware on 12 NICs, all of them complained that DMA wasn't enabled. We are currently scratching our heads; that's why I started this thread. I'm still hoping that someone out there has an idea what the issue could be. The Ubuntu kernel in use in Proxmox?
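
Since Mellanox points at the kernel/CPU, it might be worth grepping the boot log for anything DMA/IOMMU related; nothing card-specific assumed here:

    # look for IOMMU/DMAR state and any mlx5 complaints at boot
    dmesg | grep -iE 'iommu|dmar|mlx5'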
 
I'm also having issues with poor performance using Mellanox ConnectX-4 25Gb SFP28 cards on Proxmox 7.

Have you had any luck installing an OFED driver? How about the one intended for Ubuntu 21?
 
No. Until OFED is fully certified for Debian Bullseye, we can't fool with that. All our servers are crucial to our customers' day-to-day operations, so we can't use them for testing, and we don't have spare equipment to try it on.

If you decide to go that route, we'd be happy to hear how it went. So far we've been doing "OK" with the standard drivers included in the kernel; not stellar performance, but adequate at this point.
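
For what it's worth, two knobs that work with the plain in-kernel driver and need no OFED; the interface name and values are illustrative only, so test before rolling out:

    # check current ring buffer sizes, then enlarge them (defaults are often 1024)
    ethtool -g ens1f0np0
    sudo ethtool -G ens1f0np0 rx 4096 tx 4096
    # jumbo frames on the storage network; the MTU must match on every
    # node and every switch port in the path
    sudo ip link set dev ens1f0np0 mtu 9000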
 
I've got the ConnectX-4 25Gb cards in some Dells, set up for LACP, and I can fill the ports just fine with iperf and nuttcp (bond config sketched below). What sort of performance problems are you seeing?
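
For reference, a minimal sketch of what that 802.3ad bond looks like in /etc/network/interfaces on PVE 7; the port names are placeholders:

    auto bond0
    iface bond0 inet manual
        bond-slaves ens1f0np0 ens1f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        # the switch side must be configured as a matching LACP port-channel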

What server spec/PCIe slots do you have them installed in?

I assume you did the basic testing to eliminate storage and everything else, i.e. raw node-to-node performance across your switch (see the sketch below)?
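
By raw testing I mean something like this with iperf3; 10.0.0.2 stands in for the other node's storage-network IP, and the parallel streams matter because a single TCP stream usually won't fill a 25Gb link:

    # on node A
    iperf3 -s
    # on node B: 4 parallel streams for 30 seconds
    iperf3 -c 10.0.0.2 -P 4 -t 30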
 
