2.5GbE NIC - Intel or Realtek?

osalj

New Member
Mar 10, 2024
Hi,

I want to buy a 2.5GbE NIC for my Proxmox servers.

I wonder what would be a better choice...
I was thinking about Intel I226-V. Supposedly there are fewer problems with it than Intel I225-V.

Or maybe choose something with Realtek chipset? If so, which Realtek chipset works best with Proxmox?

Tia!
 
> 2.5GbE NIC

Why 2.5 and not 10? I really don't get why you'd choose 2.5 over 10... :D

The recommendations here are also valid for Proxmox:
https://www.servethehome.com/buyers...liances/top-picks-pfsense-network-cards-nics/
https://www.servethehome.com/buyers...as-servers/top-picks-freenas-nics-networking/

> which Realtek chipset works best with Proxmox
I don't know, but there are two driver branches, and sometimes one works better than the other. I know this from experience and from helping other people. That's under FreeBSD, but it should also apply to Proxmox.

https://www.freshports.org/net/realtek-re-kmod/
https://www.freshports.org/net/realtek-re-kmod198/
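Under Proxmox (Debian/Linux) the analogous situation is which kernel driver has bound the NIC: the in-kernel r8169 driver covers most Realtek 2.5G chips, while Realtek's out-of-tree r8125 DKMS driver is the second branch. A minimal sketch of checking this, parsing a canned `ethtool -i` sample so it runs anywhere (the version strings are made up for illustration):

```shell
# On a live Proxmox host you would run:  ethtool -i enp1s0
# Here we parse a sample so the sketch is self-contained.
sample='driver: r8169
version: 6.8.12-pve
firmware-version: rtl8125b-2_0.0.2'
driver=$(printf '%s\n' "$sample" | awk -F': ' '/^driver/ {print $2}')
echo "kernel driver in use: $driver"
```

If the reported driver is r8169 and you see instability, trying the r8125 DKMS branch (or vice versa) is the Linux equivalent of switching between the two FreeBSD ports above.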
 
Thanks buddy for the quick response!

Why did I choose 2.5GbE instead of 10GbE? Here's why:

1. I already have 2.5GbE switches. Choosing 10GbE would require me to buy another switch, which would mean additional costs.

2. My Proxmox "servers" are HP EliteDesk 800 G4 DM 65W machines.
These are mini PCs with no PCIe slots, so the only option is to install a 2.5GbE NIC in the M.2 A+E slot in place of the Wi-Fi card.

I'll check the links you provided right away.
 
OK, in that case my links won't help.
I also don't know if there are many 2.5GbE options... and buying other servers would mean even more costs. :confused:
 
Most 10G cards can do 2.5G and 5G as well. The entire 2.5G and 5G realm is poorly supported by the vendors precisely because it is an in-between that never really came to fruition; for the same cost you can do 10G.

I know there are problems with the Intel cards, mostly because a lot of them are knock-offs or not fit for purpose. The I22x 2.5G chipsets were intended for embedded (Atom) and PC (i.e. Windows) clients like laptops, so they are heavily customized by the manufacturer and not intended for general-purpose or server use.

Because there is so little support from the big names, and the 2.5G switching fabrics are both cheap and frequently even non-standard, you run into weird bugs all the time (e.g. no VLAN support, because few 2.5G switches have trunk support; and what is the purpose of VLANs at home?)
 
I know that the equipment I currently have is not well suited to such applications, but for now I want to test a Ceph cluster on what I have.

If I decide to run the Ceph cluster permanently, I will order other computers.
I am thinking about the HP EliteDesk 800 G4 or G5, depending on the prices I can find.

Additionally, I would like to equip them with:
- two enterprise-class SSD drives for Proxmox in RAID1
- two 1TB NVMe drives for Ceph storage - two OSDs per node
- a dual-port 10GbE card - one port for the cluster network, the other port for the Ceph network
- a 2.5GbE card for the VM and Proxmox management network.
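That NIC layout could be sketched in Proxmox's /etc/network/interfaces roughly like this. This is only an illustration under assumptions: the interface names, bridge name, and subnets are placeholders, not taken from the thread.

```
# dual-port 10GbE card: one port for cluster, one for Ceph (names assumed)
auto enp2s0f0
iface enp2s0f0 inet static
    address 10.10.10.1/24   # Proxmox cluster (corosync) network

auto enp2s0f1
iface enp2s0f1 inet static
    address 10.10.20.1/24   # Ceph network

# 2.5GbE card: bridged for VMs and Proxmox management
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
```

Keeping corosync and Ceph on their own physical ports, as the list above proposes, is the usual way to stop storage traffic from starving cluster heartbeats.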

It seems to me that such a configuration would be efficient and stable for a home lab.
 
If you can go with something a bit newer, such as an HP Elitedesk 805 G8, then you can install a 10GbE card. I currently can only get about 6Gbps out of this particular card. But that might be pilot error on my part. I haven't tried too hard to optimize performance.
 
> Why 2,5 and not 10? I really don't get it why choosing 2,5 over 10

2.5Gbit is basically the last-gasp, drop-in replacement for 1Gbit equipment: same CAT5e/CAT6 cables, and the switches have come down in price to where they are affordable and comparable to 1Gbit. I haven't tried running a 2.5 router yet; I have all 3 speeds in place at home, and my ISP supplied a 1Gbit cube for internet.

10Gbit is a bit of a different animal; you might get away with CAT5 cable, but it will run hot. You're better off with SFP+, and even then it gets a little convoluted because some SFP+ adapters don't want to work with some card/switch combos. Plus, fiber is a bit more fragile than copper. And more often than not, you have to use jumbo frames (MTU 9000) to get the best speed out of it.

So 10G ends up being a whole separate net with re-wiring, whereas you might combine the 2.5 and 1Gbit nets and just replace the 1Gbit switches with 2.5 versions. Personally, I opted to keep all 3 on separate /24 subnets, with dedicated same-speed switches. I don't really need a switch for 10G though; all of it goes through my 4-port SFP+ Qotom Proxmox box with a VM providing DHCP.
But going back to the OP's question: I've had zero problems with the I225-V 2.5Gbit. Go with Intel if you have a PCIe option; otherwise you're pretty much limited to Realtek chipsets in USB 3 NICs.

I can recommend two offhand that should work out of the box with Proxmox:

https://www.amazon.com/gp/product/B09TB9TJ54/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
^ RSHTECH

https://www.amazon.com/gp/product/B093FB9QWB?ie=UTF8&psc=1
^ ASUS

You might try both, one is a little cheaper.
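One thing worth checking with RTL8156-based USB adapters like these is that Linux bound them to the dedicated r8152 driver rather than the generic cdc_ncm fallback, which performs worse. A self-contained sketch parsing a sample sysfs path (on a real host you would read the driver symlink for your own interface; the interface name in the comment is hypothetical):

```shell
# On a live host:  readlink -f /sys/class/net/enx001122334455/device/driver
# Sample result used here so the sketch runs anywhere:
driver_path='/sys/bus/usb/drivers/r8152'
usb_driver=$(basename "$driver_path")
echo "USB NIC driver: $usb_driver"   # r8152 = dedicated driver; cdc_ncm = generic fallback
```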
 
> 10Gbit is a bit of a different animal; you might get away with CAT5 cable, but it will run hot.
I have had very good luck with the WiiTek 10G SFP+ copper transceivers on Amazon. Make sure to buy the 100m version. It comes with more modern Broadcom chips and needs less than 2W of power. That solves pretty much all of the temperature problems that 10G-over-copper used to be infamous for.
 
So much misinformation on 10G.

You can still do 2.5G on most 10G NICs. There is little to no price difference on a 10G card today; 10G is "old", so the server equipment is being sold at bottom-barrel prices. It's about $30 for a decent Intel 10G NIC, or even a dual-port for $50. The TP-Link 2.5G card I can find is also $30, but note the brand: the last time I tried, TP-Link didn't work with Cisco 2.5/5/10G ports (just a difference in spec interpretation), so it fell back to 1G.

Running 10G over CAT5e is possible in most home environments. The crosstalk protection isn't there, but the 250MHz frequency rating is, so if you have a decent quality cable, especially FTP/STP, it should work the same as CAT6 over short distances (<15ft). Just make sure your cable is properly crimped.

For long runs (>55ft) of proper 10G over copper you need CAT6a, and unless you are a professional, you don't have the equipment at home to run CAT6a over those distances. Sorry, it won't pass the spec test; the spec test machine from Fluke is $15k. Even most pre-molded cables only support CAT6a up to 55m (they are really CAT6). I run into trouble all the time with professional cable installers who know CAT6a, and still weird issues happen (connections dropping every 10 minutes); then we test and they need to redo specific patches.

As a result, most datacenters now go with fiber, for two reasons: it's easy to support higher speeds when you eventually get there, and it's easier and cheaper to install. The individual conductor tolerance for CAT6a is, I believe, <2mm over 100m, which means that from production of the cable to you installing (bending/pulling) and cutting the cable to crimp it, you cannot pull any pair out of alignment or cut it even slightly at an angle. There are specs for how hard you can pull on the cable and how much it can bend, and how many cables can share a conduit is also severely limited because the cable is so thick.

Fiber (multimode) is easy to install, and the termination kit is relatively cheap; once you do a few, you'd be surprised how easy it is. It is easier than CAT5 and very similar in process to coax cable. The most fiddly part is the tiny clips to make it into a duplex (they're just plastic clips, nothing special). Any quality cable will be marked, so for a duplex, just calculate the length and cut it at the marking. Modern multimode is more tolerant of torque and bending than CAT6a, and much cheaper ($250/1000ft, i.e. 500ft duplex) than proper CAT6a copper ($300-500/250ft). The NICs and optics are dirt cheap and run less hot: a dual-port 520 can be had on eBay for $15, and the optics for $8/port in bulk.

That being said, yes, CAT6a runs slightly hotter, but unless you are packing 100 cables in a raceway, you don't have to worry about it. If your cable gets noticeably warm, to the point of melting, in your homelab, you have other issues.

MTU 9000 (jumbo frames) is always weird. These days, with interrupt coalescing and a decent (server) NIC, you can run 100G or 400G at standard Ethernet frame sizes. Jumbo frames may be slightly more efficient if you consistently need large packets, but they also have trade-offs. All switches and NICs can run at full speed and have the pps capacity for a regular 1500 MTU, because most of the Internet runs at or below 1500 MTU; the Pentium 4 era was the last time jumbo frames were a real consideration for 10G.
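The efficiency difference the jumbo-frame argument rests on is small and easy to compute. Counting 38 bytes of per-frame overhead on the wire (14-byte Ethernet header + 4-byte FCS + 20 bytes of preamble and inter-packet gap, ignoring VLAN tags), a rough back-of-envelope sketch:

```shell
# Payload efficiency = MTU / (MTU + per-frame wire overhead of 38 bytes)
eff() { awk -v m="$1" 'BEGIN { printf "%.1f", 100 * m / (m + 38) }'; }
std=$(eff 1500)
jumbo=$(eff 9000)
echo "MTU 1500: ${std}% of wire bandwidth is payload"
echo "MTU 9000: ${jumbo}% of wire bandwidth is payload"
```

So jumbo frames buy only about two percentage points of raw throughput, which matches the point above that pps capacity, not frame size, is the limiter on modern hardware.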
 
I have a similar unit to yours. I have an Elite Mini 800 G9 and have installed an Intel I226 M.2 A+E 2.5GbE card where the Wi-Fi used to be, plus a 2.5GbE I225 FlexIO port. My ISP plan is 2Gbps, but I sometimes get up to 2.5Gbps over-provisioned.

I've heard to stay away from Realtek, and that is what I am trying to do for as long as I can. I know I don't have much room for upgrades, but this setup is perfect for right now.
 
I am currently testing two 2.5GbE network cards:
the first is an M.2 card with an Intel I226-V chipset, and the second is a Cable Matters USB to 2.5G Ethernet adapter with a Realtek 8156 chipset.

I want to set up a Ceph cluster this weekend and use all three network cards.

1GbE - Proxmox management, internet access for VMs/LXCs
2.5GbE USB - cluster network
2.5GbE M.2 - Ceph network
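With that split, the Ceph side of the plan maps to ceph.conf roughly like this. The subnet is a placeholder for illustration; in a single-Ceph-NIC layout like this one, the 2.5GbE M.2 port carries both Ceph public and replication traffic, while corosync rides the USB NIC.

```
[global]
    # both Ceph networks on the 2.5GbE M.2 NIC's subnet (placeholder)
    public_network  = 10.10.20.0/24
    cluster_network = 10.10.20.0/24
```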

If everything works stably and there are no problems, I might stick with the EliteDesk DMs.