Installing drivers for a Mellanox ConnectX-5 dual-port 100Gbps NIC

Meoma

Apr 6, 2023
I am a beginner with Linux (more or less) and also quite new to Proxmox.
I've installed the latest version of Proxmox today; one NIC with dual SFP+ ports showed up without any trouble.
But the Mellanox ConnectX-5 didn't show up during the installation, nor did it once everything was installed.

I've been searching around the internet for about 5 hours now and can't find an easy fix for this.

Kernel modules: mlx5_core
Part number: STA7A37060
Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]
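
Details like the ones above can be read with lspci, and ip link shows whether a network interface was actually created for the card. A quick sketch, nothing specific to my box:

Code:
# lspci -nnk | grep -iA3 mellanox
# ip link show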

I've found this page on NVIDIA's site: https://network.nvidia.com/products/ethernet-drivers/linux/mlnx_en/
and downloaded the file MLNX_OFED_LINUX-5.8-2.0.3.0-ubuntu22.10-x86_64, since I had read that proxmox-ve_7.4-1 is based on Ubuntu Server.

Can someone give me guidance please?
Thank you very much.
 
What traffic will the Mellanox card carry in your setup? I see you only have one card. The thing is: you won't find much support for using Mellanox with Debian; there are mostly only how-tos for Red Hat. If you want to use the card for the storage connection in an HCI environment, you can use a Rocky Linux VM with passthrough for the HBA controller and the network card. Otherwise, I know there is a German company that sells an RDMA solution for Proxmox based on Mellanox.
 
Thanks for your reply.
Actually I have one dual-port 100Gb NIC, so two ports on one card indeed.

The idea is to run a VM with RouterOS from MikroTik, only for that purpose.
Since RouterOS doesn't support the NIC drivers either, I didn't want to install it directly on the hardware without Proxmox.

As for traffic, I'm not sure yet; this is preparation for the future. Right now the traffic that will pass through is about 10-20 Gbit, maybe more in a few years.

I don't mind buying something to make it work, or paying someone to help me out.
 
Hi,
I'm running a lot of Mellanox ConnectX-4 Lx and ConnectX-5 cards in production, and they work out of the box with the in-kernel mlx5_core driver.

The OFED driver is only needed if you want to use RDMA; for plain Ethernet you don't need to install any additional drivers (Proxmox 6.x or 7.x).
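
If the card doesn't show up as a network interface, it's worth checking whether mlx5_core actually bound to it. A sketch (the interface name in the ethtool line is just an example, yours will differ):

Code:
# dmesg | grep mlx5
# lspci -k | grep -iA3 mellanox
# ethtool -i enp4s0f0np0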
 
Are you trying to deploy 100 Gbps on the Proxmox host itself or through a VM?

(you can see my thread here for more details/information: https://forum.proxmox.com/threads/here-is-how-you-can-get-100-gbps-infiniband-up-and-running.121873/)

If you are trying to deploy it on the host, here are the commands that I used to get my Mellanox ConnectX-4 dual-port 100 Gbps IB card working:

Code:
# apt install -y infiniband-diags opensm ibutils rdma-core rdmacm-utils

One of these will enable the IB device. I'm not really sure which one (or whether it takes all of them), but it gets it up and running.
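
opensm is in that list for a reason: on an InfiniBand fabric the port only comes up once a subnet manager is running somewhere. A sketch for running it on the host and checking the port state, assuming the Debian opensm package provides a service of the same name:

Code:
# systemctl enable --now opensm
# ibstat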

Code:
# modprobe ib_umad
# modprobe ib_ipoib
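
modprobe only loads the modules for the current boot. If you want them back after a reboot, the standard Debian way is to list them in /etc/modules (a sketch):

Code:
# echo ib_umad >> /etc/modules
# echo ib_ipoib >> /etc/modules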

Check that the interfaces show up and that an IP address can be assigned:

# ip a

You should see something like this:

Code:
root@pve1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 88:51:fb:5c:27:55 brd ff:ff:ff:ff:ff:ff
    altname enp0s25
3: ibs5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 256
    link/infiniband 00:00:10:87:fe:80:00:00:00:00:00:00:24:8a:07:03:00:2b:1e:ce brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    altname ibp4s0f0
    inet 10.0.1.160/24 scope global ibs5f0
       valid_lft forever preferred_lft forever
    inet6 fe80::268a:703:2b:1ece/64 scope link
       valid_lft forever preferred_lft forever
4: ibs5f1: <BROADCAST,MULTICAST> mtu 4092 qdisc noop state DOWN group default qlen 256
    link/infiniband 00:00:18:87:fe:80:00:00:00:00:00:00:24:8a:07:03:00:2b:1e:cf brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    altname ibp4s0f1
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 88:51:fb:5c:27:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.160/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::8a51:fbff:fe5c:2755/64 scope link
       valid_lft forever preferred_lft forever

From there, you can assign the IPv4 address via the GUI in Proxmox.
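
If you'd rather do it in a config file than in the GUI, Proxmox keeps its network setup in /etc/network/interfaces. A minimal sketch using the ibs5f0 interface and the 10.0.1.0/24 address from the output above:

Code:
auto ibs5f0
iface ibs5f0 inet static
        address 10.0.1.160/24

Then apply it with ifreload -a (Proxmox ships ifupdown2) or reboot.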

Let me know if you need any additional help with this.

Thanks.

P.S. If you're trying to deploy it from within a VM and you have a dual-port card, you CAN pass through one of the two ports (or the entire card, i.e. both ports, if you really want to) as a PCIe device for your VM.
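
For reference, a sketch of what that looks like from the CLI (the VM ID 100 and the PCI address 0000:04:00.0 are placeholders; take the real address from lspci, and the host needs IOMMU/VT-d enabled for passthrough):

Code:
# qm set 100 -hostpci0 0000:04:00.0

Dropping the function suffix (0000:04:00) should pass through all functions, i.e. both ports, but double-check the PCI passthrough section of the Proxmox docs for your version.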

I have it at least partially up and running in a CentOS 7.7.1908 VM.
 
