Which 10Gb NICs are compatible with Proxmox VE? and related matters

Hi

Please, can anyone help? Google has not helped me resolve these doubts.

I need to know four things:

1- Which 10Gb/s NICs, copper and fiber, are compatible with the latest Proxmox version? (preferably fiber)
2- Which of them can really reach that speed, or very close to it?
3- Will I need extra RAM to use them?
4- Will I have problems if they are used with DRBD for replication?

Enormously grateful to anyone who can help me.

Cesar
 
Hi Cesar,
I have some Solarflare cards, which are fast (5.33 Gbits/sec with iperf on a working system that is also running DRBD over this connection).
But I also have one issue right now which I haven't tracked down yet - so I can't recommend these cards at the moment.

Chelsio cards also work out of the box. iperf from a Chelsio to a Solarflare gives 5.63 Gbits/sec (and 5.41 the other way).
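
For reference, figures like these come from a plain iperf run between the two nodes; a minimal sketch, where the address is just a placeholder for the peer's IP on the fast link:

# on the receiving node
iperf -s

# on the sending node, pointing at the receiver's address on the fast link
iperf -c 10.10.10.2 -t 30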

Udo
 
Udo, thanks for your quick response; your comments are very important to me.

The servers are Dell machines with SAS2 HDDs in RAID5, and I do not want to slow down DRBD's write replication.

And I would like to know the compatibility list of 10Gb NICs that a fresh Proxmox install can recognize natively, for example Broadcom, etc. (always speaking of 10Gb/s NICs or similar).

Best regards
Cesar
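
As a side note on DRBD: the replication traffic is pinned to a given link simply by the addresses used in the resource definition, so a dedicated 10Gb link just needs its own subnet there. A minimal sketch, assuming the DRBD 8.3 syntax shipped at the time; hostnames, backing devices and the 10.10.10.x addresses are placeholders:

resource r0 {
    protocol C;
    syncer { rate 300M; }          # cap the resync rate so it does not saturate the link
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;       # LV or partition on the local RAID array
        address   10.10.10.1:7788; # IP on the dedicated replication link
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.10.10.2:7788;
        meta-disk internal;
    }
}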
 
06:00.0 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual Port Backplane Connection (rev 01)

This is a Dell blade M910 card. It works perfectly and reaches 9Gbit.
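
A quick way to confirm that a card is recognized and to see which kernel driver has bound to it (the interface name eth2 is just an example):

lspci -nnk | grep -A3 Ethernet    # the "Kernel driver in use:" line shows the bound driver
ethtool -i eth2                   # reports driver name and version for a given interface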
 
Thank you very much e100 and sigxcpu, it's great to know that hardware works well with it. :)

In Paraguay there are not many options to buy; I can get (by import order) the following brands: Dell, IBM, HP, and maybe (I'm not sure) Intel - and only to buy, not to test. :(

I visited this website: "https://hardware.redhat.com/", but unfortunately I can only see certifications for servers and practically no information on peripherals. Is there any way to see a compatibility list for Proxmox VE, or at least for 10Gb NICs?

I think it would be great if Proxmox VE displayed on its website a server/peripherals compatibility list for native support, showing what is proven to really work.

And the last question:
Is it possible to connect Proxmox VE to a SAN switch (for example an HP StorageWorks SAN Switch 2/16V) and set it up with HA? Please see:
http://h20000.www2.hp.com/bizsuppor...en&cc=us&prodTypeId=12169&prodSeriesId=402271

I would appreciate any help

Best Regards
Cesar
 
The Proxmox kernel is based on the RedHat kernel, so if something works on RedHat it very likely works on Proxmox.

I have been using Infiniband for DRBD replication and the Proxmox cluster communications.
Mellanox MHEA28-XTC have been working great for over six months now.
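
For anyone wanting to copy this setup, a minimal sketch of an IPoIB interface dedicated to DRBD traffic, assuming the stock in-kernel InfiniBand drivers; the address is a placeholder, and connected mode with a large MTU is an optional tweak that usually helps throughput:

# /etc/modules - load IPoIB at boot
ib_ipoib

# /etc/network/interfaces - dedicated replication interface
auto ib0
iface ib0 inet static
        pre-up echo connected > /sys/class/net/ib0/mode
        address 10.10.10.1
        netmask 255.255.255.0
        mtu 65520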

e100, thanks for the clarification, I'll remember it.

Best regards
Cesar
 
Thanks Udo... but how do you assign an IP to an FC HBA? I'm thinking about using FC hardware for DRBD and fast live migration...

udo, e100 or anybody, please help!

How should this be configured? Could you be more specific?

Thank you in advance to anyone who can clarify this doubt.

Best regards
Cesar
 

Well, you can't really assign an "IP to FC" - though there is a standard (IPFC) for that, I don't know of any non-proprietary implementation. Think of FC as a sort of SCSI - similar to iSCSI, you can do FC-over-Ethernet (FCoE), but not the opposite.
If you want to use IP and 10GBit, use 10Gb Ethernet or Infiniband.

We have some cheap Infiniband cards working (Mellanox MT25204) - iperf reports 5.38 Gbits/sec without any tweaking.
 
The Proxmox kernel is based on the RedHat kernel, so if something works on RedHat it very likely works on Proxmox.

I have been using Infiniband for DRBD replication and the Proxmox cluster communications.
Mellanox MHEA28-XTC have been working great for over six months now.

One question please, e100 or anybody!!!

RedHat ships proprietary firmware so that certain hardware boards work properly - does Proxmox VE do so as well???
Please see the link: http://pve.proxmox.com/wiki/Scsi_boot_(Guest)
It says literally: "As workaround, you need to download the lsi .rom. (this rom in not GPL, so we cannot include it in proxmox by default)"

In the end, this issue has me confused.

Best regards
Cesar
 
Booting Guests with SCSI disks is totally unrelated to this.

Hi Dietmar

The relationship is in the use of GPL software in Proxmox VE. Should I deduce that Proxmox VE does not include proprietary firmware (or proprietary drivers)?...

...If that is correct, then not all hardware supported by RedHat would be supported on Proxmox VE.

I would appreciate it if you could dispel this doubt.

Best Regards
Cesar
 
AFAIK we include all necessary firmware drivers.
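
If in doubt on a particular node, two quick checks (assuming the standard Debian packaging that Proxmox uses) are to list the installed firmware packages and to look for firmware complaints in the kernel log:

dpkg -l | grep -i firmware    # installed firmware packages
dmesg | grep -i firmware      # any "failed to load firmware" messages from drivers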

Thanks Dietmar for your answers.

I am a great admirer of your work and your team's. Proxmox VE is a very sophisticated piece of work.

If I visit your country someday, even if you don't want it, I will treat myself to meeting you and shaking your right hand. :p
The Proxmox team are the champions!!!

Best regards
Cesar
 
06:00.0 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual Port Backplane Connection (rev 01)

This is a Dell blade M910 card. It works perfectly and reaches 9Gbit.

Hello sigxcpu

I'm interested in setting up a bond and a bridge with two Intel 10Gb Ethernet controllers for DRBD (a configuration sketch follows below this post), but visiting this website:
http://downloadcenter.intel.com/con...g/ixgbe-3.10.17.tar.gz&lang=spa&Dwnldid=14687

And reading the readme file for this driver, it literally says:
WARNING: The ixgbe driver compiles by default with the LRO (Large Receive
Offload) feature enabled. This option offers the lowest CPU utilization for
receives, but is completely incompatible with *routing/ip forwarding* and
*bridging*. If enabling ip forwarding or bridging is a requirement, it is
necessary to disable LRO using compile time options as noted in the LRO
section later in this document. The result of not disabling LRO when combined
with ip forwarding or bridging can be low throughput or even a kernel panic.

Do you have any experience with that, and what would you suggest for my purpose?...
Please, if you can, explain step by step!!!


Best regards
Cesar
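
A minimal sketch of the bond-plus-bridge setup described above, in /etc/network/interfaces form; the interface names (eth2, eth3), the bond mode and the address are placeholders to adapt, not a tested recipe:

auto bond0
iface bond0 inet manual
        slaves eth2 eth3
        bond_miimon 100
        bond_mode active-backup

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

If the bridge is only needed so that guests can use the link and DRBD just needs an IP, the address could equally be put on bond0 directly and the bridge left out.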
 
Sure, we compile the driver without that option.
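
To double-check on a running node that LRO is really off for a given interface (eth2 again being a placeholder name):

ethtool -k eth2 | grep large-receive-offload    # should report "off"
ethtool -K eth2 lro off                         # disable it at runtime if it is on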

Thanks Dietmar for your prompt response.

And I guess the same has been done for all Intel NIC models and for other manufacturers - is that correct?

Best regards
Cesar
 
