Do any 40 or 100 gig cards work out of the box with Proxmox VE?

WSDJamie

New Member
Nov 23, 2022
I have a Proxmox host that has been up and running for a while using a 10 gig NIC. The 10 gig NIC is named ens2f1np1 and it is the bridge port for vmbr0.

Last week I installed a 40 gig MCX354A-FCCT Mellanox ConnectX-3 Pro. I picked up a Mellanox-compatible QSFP+ from FS.com as well as a Juniper-compatible QSFP+.

Got the physical connection going and have link lights between the Juniper and the Mellanox card.

Proxmox seems to recognize the 40 gig card and it's named enp130s0. So everything looked good.

So I changed the bridge port from ens2f1np1 to enp130s0 and rebooted.
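For reference, the change amounts to swapping the bridge-ports line in /etc/network/interfaces, so the vmbr0 stanza now looks roughly like this (the address and gateway below are placeholders, not my real ones):
Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        # was ens2f1np1 before the swap
        bridge-ports enp130s0
        bridge-stp off
        bridge-fd 0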

Unfortunately it's not working. I'm not sure where to go from here. Looking through some forum posts it seems that maybe the ConnectX-3 cards aren't supported by PVE 7.4-X anymore? Something about Debian dropping the drivers from their distro?

Anyone have any thoughts on why the ConnectX-3 isn't working? Should I be looking at getting a ConnectX-4 or 5?

Thanks!
 
Something about Debian dropping the drivers from their distro?
The driver is part of the kernel, and the PVE kernel is based on Ubuntu's, not Debian's.
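If you want to double-check that the mlx4 modules ship with the running PVE kernel, something along these lines should print their paths (mlx4_core/mlx4_en are the ConnectX-3 module names):
Code:
# kernel version and location of the ConnectX-3 modules
uname -r
modinfo -n mlx4_core mlx4_en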

Anyone have any thoughts on why the ConnectX-3 isn't working?
Please provide more information about what exactly is not working; just stating that it doesn't work is not enough to analyse the problem. Output from dmesg, ip, and brctl would help.
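For example, the output of something along these lines would be a good start (interface and bridge names taken from your post):
Code:
# driver and link messages for the ConnectX-3 port
dmesg | grep -i -e mlx4 -e enp130s0
# interface state and addresses
ip -br link
ip -br addr
# bridge membership
brctl show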
 
We have had no problems with Mellanox so far. We are using Mellanox Technologies MT27800 Family [ConnectX-5] 2x100Gb cards.
 
My ConnectX-3 cards are still working here with PVE 7.4 and 8.0. But yes, they are EoL, there will be no new software/drivers, and the vendor software no longer works with Debian 12. So not a great thing to buy in 2023.
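Since they still work, it is the in-kernel mlx4 driver doing the job rather than the vendor software; you can check which driver a port is bound to with something like this (interface name is just an example):
Code:
# shows driver (mlx4_en on ConnectX-3), version and firmware
ethtool -i enp130s0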
 
Just to update this thread: I was able to get this working with my ConnectX-3 cards running at 40 gig. It turns out my issues were a combination of not selecting the correct NIC in the interfaces config file and not having the switch settings correct.
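If it helps anyone else, the negotiated speed on the bridge port can be confirmed with something like this (my interface name; yours will differ):
Code:
# should report Speed: 40000Mb/s once the link is up at 40 gig
ethtool enp130s0 | grep -i speed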
 
Mellanox has always been the gold standard. We are running some qualification testing with CX6 and CX4 right now:
Code:
sudo lspci -nnvmm | egrep -A 6 -B 1 -i 'network|ethernet'
Slot:   01:00.0
Class:  Ethernet controller [0200]
Vendor: Mellanox Technologies [15b3]
Device: MT28908 Family [ConnectX-6] [101b]
SVendor:        Mellanox Technologies [15b3]
SDevice:        MT28908 Family [ConnectX-6] [0007]
PhySlot:        1
ProgIf: 00
--

Slot:   82:00.0
--
Slot:   c1:00.0
Class:  Ethernet controller [0200]
Vendor: Mellanox Technologies [15b3]
Device: MT27710 Family [ConnectX-4 Lx] [1015]
SVendor:        Mellanox Technologies [15b3]
SDevice:        Stand-up ConnectX-4 Lx EN, 25GbE dual-port SFP28, PCIe3.0 x8, MCX4121A-ACAT [0003]
PhySlot:        3
ProgIf: 00
--
Slot:   c1:00.1
Class:  Ethernet controller [0200]
Vendor: Mellanox Technologies [15b3]
Device: MT27710 Family [ConnectX-4 Lx] [1015]
SVendor:        Mellanox Technologies [15b3]
SDevice:        Stand-up ConnectX-4 Lx EN, 25GbE dual-port SFP28, PCIe3.0 x8, MCX4121A-ACAT [0003]
PhySlot:        3
ProgIf: 00


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
