Anybody know Mellanox?

SpongeRob

So I've come across this IBM 10GB NIC, reportedly not compatible with Windows.
Anybody with any thoughts/suggestions please?
To make it PVE-related: if the NIC runs under Debian, it will probably work with Proxmox. Create a corresponding bridge, attach the VM to it, install the virtio drivers and... call it a day.
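A minimal sketch of such a bridge, in the style PVE writes to /etc/network/interfaces (the address and the NIC name enp3s0 are placeholders for whatever your Mellanox card enumerates as):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
```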

Disclaimer: I have zero Mellanox experience...
 
Probably it's a VM under PVE ;-)
Not very likely. He didn't mention anything about PVE, passthrough, or anything that would even hint at using PVE.

To me it reads like "wahhh, my stone-age NIC from 2009 doesn't work on modern Windows, someone help".

I mean, why would you pass a NIC through to Windows 11 anyway?
For a firewall I could understand it, but Windows 11? What advantage is that supposed to give?
 
I don't think this is Proxmox-related at all.
Looks to me like he got lost and ended up on the wrong forums :)
Right, so I'm a small home user. I've just borked my PCIe server controller card, and I need to wait until sometime between February and March for a replacement to arrive from China. I've a spare PC beside me, but only a spare 60GB SSD. I am far more knowledgeable with Windows than Linux, so my first thought for testing this 10GB NIC was to run it bare metal under Windows.

Ultimately I'm using the time before my new controller arrives to dry-run this 10GB NIC, which may ultimately end up in my Proxmox server (if I can get it working). So apologies that I haven't written my life story, but I can definitely confirm this IS Proxmox-related!

My next step, then, is to install Proxmox on this 60GB SSD and experiment. However, I thought I'd ask the community in the hope that somebody knows these Mellanox NICs and can tell me about them. It was a long shot, I admit.
 
OK, I know Mellanox as a brand; it's now part of Nvidia.
The network cards are generally good, some of the best even.
The model you have is just ancient and EOL in terms of driver support. That means on Windows, anything after Windows 7 (10 at best, maybe with some hack) won't be able to use it. The last official drivers are for Windows Server 2008 R2, which corresponds to Windows 7.

Linux driver support is generally much longer-lived, so the chances of your card working on Linux are higher. But even there, driver development most likely stopped a long time ago, and it's possible that the generic Mellanox drivers no longer support this card.

I had a ConnectX-2 card which I used on Windows 7 for a few years, but I ultimately discarded it when Windows 10 came out, as I couldn't make the old drivers work anymore.

Googling around a bit, it seems the MT26448 cards are hit-and-miss depending on what kernel is used and what firmware is on the card.
Specific to Proxmox, you can find people having issues on PVE 6.x or newer while the card worked fine on older versions (and thus older kernels).
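Before worrying about kernels, the first sanity check is whether the card even enumerates on the PCIe bus. On the real host you'd run `lspci -nn | grep -i mellanox` and `dmesg | grep -i mlx4`; the sketch below just parses a sample lspci line of the kind an MT26448 produces (the bus address and exact device ID are illustrative) to pull out the vendor:device ID you can search driver support lists for:

```shell
# On real hardware: lspci -nn | grep -i mellanox
# Sample output line for an MT26448 / ConnectX-2 EN (bus address and ID are illustrative):
sample='03:00.0 Ethernet controller [0200]: Mellanox Technologies MT26448 [ConnectX EN 10GigE] [15b3:6750]'
# Extract the PCI vendor:device ID; 15b3 is Mellanox's vendor ID
id=$(echo "$sample" | grep -o '15b3:[0-9a-f]*')
echo "$id"
```

If `dmesg` shows the mlx4_core/mlx4_en driver claiming that ID, the kernel side is at least alive.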

You will have to try whether you can still make it work, but my recommendation would be a newer, more modern card.
For example, there are RJ45 and SFP+ versions of the brand-new Realtek RTL8127 cards available for relatively little money (40-80€ new).
They are available as PCIe 4.0 x1 or PCIe 3.0 x4 cards (and even M.2).
Chances are high that they will be supported by Linux for a considerable time to come.

Otherwise, second-hand enterprise NICs are available under 100€ as well, namely the ConnectX-4 and the Intel X520/X540/X550/X710.
I have seen ConnectX-4 cards as cheap as 40€ on AliExpress.
Those should all still be reasonably well supported on current kernels.
 
Thank you for shedding some light on this, Beisser; your reply is very helpful. From my own perspective, if I can get away with repurposing (ancient) hardware that I have in-house, then all the better. I have no real use case other than homelab. I just begrudge spending money on something new when repurposing could be possible, if only I had the knowledge.

This is a great segue, actually. I'm definitely thinking about getting this 10GB networking going, but maybe not using this IBM/Nvidia card now.

I would want a 'kit' that could allow my Windows PC to communicate directly with my PVE server. I have a 3 m SFP lead (photo attached), but this is all very new to me and I'm kinda lost.

[attached image: spf.jpg]

I understand SFP is the same throughput as (gigabit) Ethernet, and SFP+ is the one that will make a real difference in terms of file transfer speeds.

So initially, say, three SFP+ adaptors and two 3 m leads. What equipment would you buy if you had the smallest budget?
 
That cable you have there is a 10Gbit SFP+ DAC, see https://www.telquestintl.com/site/Product Manuals/Cisco SFP-H10GB-CU3M data sheet.pdf?srsltid=AfmBOooKYtnMe1LMSQ5400OA3WbTuWxVY_VQeLUi238EJcclpWNfGfrD

So all you need are working SFP+ cards. The cheapest option at the moment is probably a Mellanox ConnectX-4, such as this: https://de.aliexpress.com/item/1005010348594703.html
You can check whether you can get them cheap locally.

Alternatively, something like these RTL8127 cards: https://de.aliexpress.com/item/1005010287958266.html
They are brand new though, so Linux driver support is equally new. I use an RTL8127 at the moment on my Windows desktop and have ordered more for some of my nodes, primarily because they are available as PCIe 4.0 x1, which is great for boards that only have spare x1 slots.

All these suggestions are just that: suggestions. You can just as well take the aforementioned Intel cards, but those are pickier about which DACs/transceivers they accept.

If you want more than a point-to-point connection, I can also recommend looking at the new, cheap Chinese 8-port 10Gbit SFP+ switches, which start at about 90€. I'm thinking of getting one of those myself, as I am running out of ports :)
That would let you interconnect more than two machines and have them all share the network.
 
IMHO, some of the best and cheapest options may be the Intel 82599EN-based NICs, like the X520-DA1 and X520-DA2. Being so old, they usually have great driver support for almost any OS (FreeBSD, Linux and Windows alike), although for Windows 11 you must manually install the drivers, see: https://blog.lattemacchiato.dev/how-to-get-10gtek-nics-to-work-on-windows-11/

They can be had dirt cheap, and their only disadvantages are higher power draw and the need for a PCIe x4 slot (or x8 for the X520-DA2), which often means hogging an even larger slot. There are models with slightly newer chip revisions that draw less power, and I have seen specimens of the DA1 variety that fit into a physical x4 PCIe slot, like this one: https://de.aliexpress.com/item/1005009901993047.html

These NICs are often included in SFP+-fitted pfSense/OPNsense N1x0 boxes and can be used with Proxmox as well. They are a cheap way to get 10 Gbps to an SFP+-capable switch.
 
Yes, those Intels are always an option, but they can be picky with transceivers: by default they only accept Intel-branded transceivers and cables.
This can be disabled with the allow_unsupported_sfp module parameter (everyone should do that - why pay for Intel-branded optics when no-name ones work just the same?).
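For reference, the usual way to make that setting persistent (assuming the ixgbe driver, which the X520 family uses) is a modprobe options file; a sketch:

```
# /etc/modprobe.d/ixgbe.conf
options ixgbe allow_unsupported_sfp=1
```

After creating the file, run `update-initramfs -u` and reload the module (or reboot) so the option takes effect at early boot.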

My recommendation of the RTL8127 was specifically for homelabs, where you usually don't have too many PCIe lanes, and x1 slots are relatively common on the desktop boards many homelab users run.
Those boards may not be able to accommodate x4 or even x8 cards.
 
I use them with <= 3 m DAC cables only, which are mostly unproblematic (I'm using four different brands; all worked without allow_unsupported_sfp). RJ45 transceivers get way too hot anyway, and now there are cheap NICs that support 10 Gbps with far lower power draw, like the RTL8127. As a matter of fact, 2.5 Gbps is pretty much the standard for longer cables and single links.

10G optical is mostly for business applications - who has optical cabling in their building?
 
Now... this is the wealth of information that I desire!! Love it, thank you Beisser and Meyergru, very informative and helpful. I'm just making a second image of the W11 machine (in case it's ever needed), and I've downloaded the three Proxmox images (6.4, 8.4 & 9.1), so I will experiment today, working through those and testing this NIC.

Of course, if that doesn't bear any fruit, then it looks like I'm going to start looking into those replacements that you guys have suggested :cool:

EDIT: Starting with the oldest, I'm now running a Debian Trixie VM inside PVE 6.4.

I've googled 'show me network adaptors' and came up with this - https://serverfault.com/questions/239807/how-to-list-all-physically-installed-network-cards-debian - a wealth of different methods. However, I've realised that I don't think this is going to help me. I fear my 60GB SSD (which wasn't a 60; in reality it was only a 40GB) is going to be too small for me to test with. Unless of course I'm overlooking something (always a possibility), and I'd appreciate some input please.
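For what it's worth, listing NICs needs almost no disk: `ip -br link` (or `lspci -nn | grep -i ethernet`) works on the PVE host itself, no VM required. The sketch below runs the parsing step on sample output, since real interface names vary per machine (the names shown are hypothetical):

```shell
# On a real box: ip -br link
# Sample output for illustration (interface names are hypothetical):
sample='lo               UNKNOWN        00:00:00:00:00:00
enp3s0           UP             aa:bb:cc:dd:ee:01
vmbr0            UP             aa:bb:cc:dd:ee:01'
# Count the non-loopback interfaces:
count=$(printf '%s\n' "$sample" | grep -cv '^lo')
echo "$count interface(s) besides loopback"
```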

[attached image: net.jpg]

Ideally I'd like to bring up a 10GB connection by connecting the NIC to my switch (which has 4 SFP ports) and then using some terminal black magic to see whether it's working. However, in terms of actually doing this, I'm kinda lost :-(
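The usual 'black magic' for verifying a link is iperf3 (assuming it is installed on both ends); the commands in the comments below are the standard invocations, and the executable part just shows the sanity arithmetic for what a healthy result means for file copies (the 9.4 Gbit/s figure is a rough rule of thumb for TCP goodput on 10GbE, not a measurement):

```shell
# Typical link test (run on the real machines, one command per end):
#   server:  iperf3 -s
#   client:  iperf3 -c <server-ip> -t 30
# Rough expectation for a healthy 10GbE link: ~9.4 Gbit/s of TCP goodput.
# Sanity arithmetic: how long would a 50 GiB file transfer take at that rate?
bytes=$(( 50 * 1024 * 1024 * 1024 ))
secs=$(( bytes * 8 / 9400000000 ))
echo "about ${secs}s for 50 GiB at 9.4 Gbit/s"
```

`ethtool <iface>` is also worth a look first: it reports whether the link negotiated at 10000Mb/s at all.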

Then I'd repeat the process with the newer PVE releases to see where support changes.

Please can somebody guide me forward?

 