Network connections: SFP+ 10GbE

Alexandre Aguirre

Well-Known Member
Apr 4, 2018
Passo Fundo - RS, Brasil
Hello guys!

I would like to ask some questions about SFP+ 10GbE network connections. Could you tell me which models are natively recognized by Proxmox, and which connectors and transceivers (GBICs) to use?
I am new to fiber networking and have very little knowledge of it. I want to build a lab with 3 nodes and 1 storage server, all with 10GbE connections. I would be grateful if someone could help clear up my doubts.

Thanks in advance!
 
Proxmox has nothing to do with this; in the worst case it comes down to some Debian Buster drivers.
For 10 Gb/s I would generally recommend Intel NICs, like the X540/X550/X710 etc... The higher the number, the better xD
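If you want to check whether the kernel picked up a card, a couple of stock Linux commands are enough, nothing Proxmox-specific (just a quick sanity check):

    # list PCI Ethernet controllers and the kernel driver bound to each
    lspci -nnk | grep -A3 -i ethernet
    # confirm the interfaces exist and see their link state
    ip -br link show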

If you have the NICs (available in 10GBase-T and SFP+ versions), you need corresponding SFP+ modules, i.e. modules coded as "Intel" compatible.
And on the other side (switch or server), SFP+ modules that are compatible with that switch brand or with the server NIC.
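You can also read out what kind of module is actually plugged in from the module's EEPROM; for example (the interface name is a placeholder, and not every NIC/driver exposes this):

    # dump SFP+ module info: vendor, part number, singlemode/multimode, etc.
    ethtool -m enp1s0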

Then the only other important thing is that you use either singlemode or multimode SFP+ modules on both ends.
You can't mix them.
If you use singlemode over a very short distance, you should use an optical attenuator, something like this: https://www.fs.com/de/products/70008.html
3 dB of attenuation should be okay for under 50 meters of singlemode cable.
You don't strictly need one, it will work without, but without it the SFP+ modules will probably die sooner.

To make it easy, as an example:
Setup:    server (X550 NIC) ---------- Cisco switch ---------- server (X710 NIC)
Option 1: Intel 10GBASE-T  ----- 2x Cisco 10GBASE-T  ----- Intel 10GBASE-T  (all RJ45)
Option 2: Intel 10GBASE-SR ----- 2x Cisco SFP-10G-SR ----- Intel 10GBASE-SR (multimode)
Option 3: Intel 10GBASE-LR ----- 2x Cisco SFP-10G-LR ----- Intel 10GBASE-LR (singlemode)
Option 4: Intel 10GBASE-T  ----- Cisco 10GBASE-T / Cisco SFP-10G-SR ----- Intel 10GBASE-SR (RJ45 on one side, multimode on the other)

And so on...
You don't need the switch by the way; with just two servers you can connect them back to back xD
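A back-to-back link is just a plain Debian ifupdown stanza on each node; a minimal sketch (interface name and addresses are examples, the second node would use 10.10.10.2):

    # /etc/network/interfaces (node 1) - direct 10GbE link, no switch
    auto enp1s0
    iface enp1s0 inet static
        address 10.10.10.1/24
        mtu 9000   # optional jumbo frames; both ends must match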

Cheers :-)
 
Thank you very much for the explanation, friend, it clarified a lot for me. I will get the hardware and will soon come back with feedback.

Thank you for your attention!
 
Regarding "the higher the number, the better", I partially disagree: newer cards are often more modern, but the "grandpa" Intel X520 is battle-tested by now. It will draw more power than a more recent X710, yet the Fortville-based NICs have much more firmware intelligence inside them, which on early boards led to some weird issues (and with some bad luck, if it is an OEM card, getting newer firmware can be difficult). Older adapters usually have a lot of their issues ironed out by now, so take that into consideration too.
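Checking which driver and firmware revision a card is actually running is a one-liner, handy before deciding whether an update is even needed (interface name is a placeholder):

    # show kernel driver, driver version and NIC firmware version
    ethtool -i enp1s0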

Mellanox ConnectX-3 cards are also being phased out, depending on your region of the world, while ConnectX-2 cards are starting to get pretty old. One other well-known vendor is Chelsio, but it is true that Intel-based cards are often easier to find. There are also specialty brands like Solarflare (a vendor that used to specialize in low latency), though some of those may have less general driver support.

But out of those three (Intel, Mellanox, Chelsio) you should be able to get a solid choice of used cards.
 
That's right, a higher number is not always better; I've seen that with the i210/i211...
But in general a newer NIC chipset should be better most of the time...
I just wanted to keep it simple.
 
Good morning guys!

Thanks for the information. By any chance, would any of you have a network design using SFP+ 10GbE from real Proxmox projects?

I am from Brazil, and so far I have not had the opportunity to see projects with 10GbE networking here.

Grateful!

Very cool to be able to exchange ideas with you.
 
It's funny you recommend Intel, given that there are a lot of Intel stability issues with the latest Linux kernel, according to: https://forum.proxmox.com/threads/sfp-10g-network-card-for-a-new-proxmox-setup.136077/#post-603268

And indeed I find multiple reports complaining about stability issues with both the Intel 5xx and 7xx series cards. So the 12th- and 13th-gen CPUs aren't Intel's only problem; Intel is also causing issues with the drivers for their network cards.
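If you suspect your card is affected, the kernel log is where those driver problems usually show up; for example (ixgbe covers the X520/X540/X550, i40e covers the X710):

    # watch for driver errors, resets or link flaps in the kernel log
    dmesg -T | grep -Ei 'ixgbe|i40e'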
 
I never had issues with Intel cards; I had issues with multiple Mellanox ConnectX-4 Lx cards, which have problems with VLANs.
Actually, it was all Mellanox ConnectX-4 Lx cards in the OCP form factor; the normal PCIe form factor had no issues. But I'm avoiding all of them now.

Generally I actually prefer Intel cards. No clue about Broadcom, since I always avoid Broadcom NICs whenever I can get Intel versions.
I was once a huge fan of Broadcom WiFi NICs/chipsets/modems, but those times are over, since Broadcom doesn't produce anything useful anymore.

For, let's say, the last 5 years I have been sticking almost exclusively (99%) with Intel and with Mellanox ConnectX-5 or newer.
And I have like 14 Proxmox servers with Intel NICs alone, but they are all X550 / X710 / X810.
Intel NICs only have the bridge FDB table issue when you use SR-IOV; otherwise I never hit any bug.
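For anyone who wants to see whether they are hitting that, the bridge's forwarding table can be dumped directly (vmbr0 is the usual Proxmox bridge name; yours may differ):

    # list the MAC addresses the bridge has learned, per port
    bridge fdb show br vmbr0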

The only downside is that there are a ton of fake Intel cards on the market.
 
