Which 10Gb NICs are compatible with Proxmox VE? (and related matters)

What exactly is the problem?

Hi Dietmar, and thanks for your interest in helping me.

Because Proxmox VE is a fantastic product, I recommend it in several places for use with DRBD (protocol C).

But in this country there are no 10 Gb/s or faster NICs in stores (essential for DRBD replication with SAS2 hard disks); they can only be imported by purchase order, so I cannot test them before buying. That is why I have to be sure Proxmox VE will work with 10 Gb/s or faster NICs before recommending it.

In this country I asked representatives of Dell, HP, and Intel whether they carry high-speed NICs and was told they are only imported under purchase order :-(

It would be great if Proxmox VE had a list of compatible hardware.

So basically I wonder whether Proxmox VE can work with all high-speed NICs from IBM, HP, Intel, Dell, and Dolphin (Dolphin offers speeds of 20 and 40 Gb/s). I ask because a company that has Dell servers likes to buy high-speed NICs of the same brand, and the same goes for any company with another brand.

- For the Dolphin alliance with LINBIT (DRBD), please see this link:
http://www.dolphinics.com/solutions/dolphin_express_drbd_speedup_linbit.html
Maybe Proxmox VE could form an alliance with Dolphin?
- For Dolphin products:
http://www.dolphinics.com/products/
- Example of a Dolphin 20 Gb/s product:
http://www.dolphinics.com/products/pent-dxseries-dxh510.html
- Example of a Dolphin 40 Gb/s product:
http://www.dolphinics.com/products/IXH610.html

I will appreciate any help you can give me.

Best regards
Cesar
 
What exactly is the problem?

Hi Dietmar
Let me ask you a couple of questions:

In this country I talked to a company that can import Intel 10 Gb/s cards with a confirmed purchase order.

1- Therefore I would like to know which Intel 10 Gb/s NICs (copper and fiber) work with Proxmox VE without any problems.
2- The importer told me about an Intel 10 Gb/s NIC model, the E10G42BTDA, and I do not know whether it is compatible with Proxmox VE.

I would greatly appreciate it if you could dispel my doubts.

Best Regards
Cesar
 
1- Therefore I would like to know which Intel 10 Gb/s NICs (copper and fiber) work with Proxmox VE without any problems.

You need to test that yourself, or ask on this forum if a specific model works.
 
You need to test that yourself, or ask on this forum if a specific model works.

Thanks, Dietmar, for your prompt response.

Let me ask you one question:
On Intel's website you can download the "tar.gz" source code for any of their NICs, so I will venture to buy one untested. Since you have already compiled and included the Intel driver for the "82599EB 10 Gigabit Dual Port" model, I think you could explain to me, step by step, how to compile the driver and build the installer .deb for this same model on top of a clean installation from the Proxmox VE ISO installer.

For example:
1- aptitude install gcc make ... etc.
2- wget (url of driver.tar.gz)
3- untar ...
4- etc.
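
Something along these lines is what I imagine, but please correct me; this is only a rough sketch, and the header package name, driver version, and URL are my assumptions:

Code:
# install the build tools and the headers for the running PVE kernel
apt-get update
apt-get install build-essential pve-headers-$(uname -r)

# fetch the ixgbe source (the 82599EB is handled by the ixgbe driver;
# version and URL here are only examples -- take the current one from
# the e1000 SourceForge page)
wget http://sourceforge.net/projects/e1000/files/ixgbe%20stable/3.18.7/ixgbe-3.18.7.tar.gz
tar xzf ixgbe-3.18.7.tar.gz
cd ixgbe-3.18.7/src

# build and install the module against the running kernel
make
make install            # or use 'checkinstall' here to get a .deb

# load the module and confirm it is active
modprobe ixgbe
modinfo ixgbe | grep ^version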

This request is very important to me, because I ultimately want to promote the use of Proxmox VE, and whether I can deploy it depends only on this. If I know these steps, I will be able to install it on any brand of server with this brand of NIC.

I will enormously appreciate your valuable assistance.

Best regards
Cesar
 
We always include the latest driver from Intel (http://sourceforge.net/projects/e1000/files/).

So there is no need to compile it yourself.
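
If you want to verify this on an installed system, something like the following shows which driver and version a card is bound to (eth2 is just an example interface name):

Code:
# which kernel driver claims the card
lspci -nnk | grep -iA3 ethernet

# version of the bundled ixgbe module
modinfo ixgbe | grep -E '^(filename|version)'

# driver and firmware as reported by a live interface
ethtool -i eth2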

Thanks, Tom, for your prompt response. I appreciate your generous help and information.

But just as a precaution against any problems, would you kindly tell me how? Please...
I ask this because I am risking the purchase amount.


Oh, I forgot: can any Intel 10 Gb/s NIC work with any motherboard, say for example an Asus desktop board? I know about PCIe, but I don't know whether these NICs have other hardware requirements (for example a particular chipset or other hardware, which is typical of NICs from brands such as IBM, HP, Dell, etc.).

Best regards
Cesar
 
I see no reason why we should waste time on a non-existing problem. As long as you buy Intel cards you are well on your way to a reliable server system.
 
I see no reason why we should waste time on a non-existing problem. As long as you buy Intel cards you are well on your way to a reliable server system.

OK Tom, thank you very much :D

And let me ask one last question:
Can any Intel 10 Gb/s NIC work with any motherboard, say for example an Asus desktop board? I know about PCIe, but I don't know whether these NICs have other hardware requirements (for example a particular chipset or other hardware, which is typical of NICs from brands such as IBM, HP, Dell, etc.).

Best regards
Cesar
 
No, not all cards work on all boards, but each vendor has a hardware compatibility list. As a general rule, never use desktop mainboards for servers (24/7 operation).

If you use desktop hardware, expect issues.
 
No, not all cards work on all boards, but each vendor has a hardware compatibility list. As a general rule, never use desktop mainboards for servers (24/7 operation).

If you use desktop hardware, expect issues.

Thank you, Tom.

I'll try to get more information about Intel NICs. I never planned to install on a desktop motherboard; I mentioned it only to ask a better question.

Best regards
Cesar
 
... as a general rule, never use desktop mainboards for servers (24/7 operation).

If you use desktop hardware, expect issues.
Hi Tom,
in this case I have a different opinion. For RAID cards and so on, yes, of course, use a server-grade hardware RAID controller (but many server boards have only fake RAID!!)
But I have some desktop mainboards, like the Asus Sabertooth 990FX, which has mil-standard components and supports ECC RAM. It runs very, very stably!!
I have had many more issues with Supermicro server mainboards... If I can choose, I don't use Supermicro...
Also, there are server boards in the field which don't support ECC RAM!

Another plus of consumer hardware: a lot of them are produced, so the teething problems should be gone (I have seen a lot of server mainboards with a crappy BIOS).
And it's easy to replace them if an error occurs; with server hardware it's not so easy: power supplies which only fit special cases, and so on.
And if a node fails: for this I use shared storage and the fine pve-cluster! ;) ... and all hardware can fail (if I remember right, I have had more dead server mainboards over the years than desktop mainboards).

Udo
 
Hi Tom,
in this case I have a different opinion. For RAID cards and so on, yes, of course, use a server-grade hardware RAID controller (but many server boards have only fake RAID!!)

Almost all boards have fake RAID, desktop and server boards alike, but only with server boards do you have the possibility of using a tested and certified hardware RAID card. I am not aware of any vendor doing compatibility tests on desktop boards, and most desktop boards do not have enough (or suitable) PCI Express slots for controllers and additional NICs.

But I have some desktop mainboards, like the Asus Sabertooth 990FX, which has mil-standard components and supports ECC RAM. It runs very, very stably!!

Glad to hear it, but that looks less like a "standard" board and more like a "workstation" board. And note, there is simply no Intel desktop CPU that supports ECC RAM; only server CPUs (Xeon) support ECC (AMD is different).

I have had many more issues with Supermicro server mainboards... If I can choose, I don't use Supermicro...
Also, there are server boards in the field which don't support ECC RAM!

I do not know of any server board without ECC RAM; if such boards are out there, I would not call them server boards. Personally I have not had much Supermicro hardware in my hands, so I cannot comment on that, but it looks like you have had some hard days with them ...

Another plus of consumer hardware: a lot of them are produced, so the teething problems should be gone (I have seen a lot of server mainboards with a crappy BIOS).
And it's easy to replace them if an error occurs; with server hardware it's not so easy: power supplies which only fit special cases, and so on.
And if a node fails: for this I use shared storage and the fine pve-cluster! ;) ... and all hardware can fail (if I remember right, I have had more dead server mainboards over the years than desktop mainboards).

Udo

I agree, a working desktop board is better than a non-working server board. But it's just not a general rule that desktop boards are worry-free and server boards are crappy. This thread is about NICs, and especially on desktop boards the risk of an unsupported on-board NIC is high; you know the threads about this topic over the years. For server boards it works in 99% of cases, and if there are issues we work to compile the needed driver ASAP. We do not do this for all desktop board NICs by default, and for some it isn't possible at all.
 
06:00.0 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual Port Backplane Connection (rev 01)

This is a Dell M910 blade card. It works perfectly and reached 9 Gbit/s.
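
A plain TCP test along these lines should show whether a given card gets close to that figure (a sketch; iperf and the address are assumptions):

Code:
# on the receiving node
iperf -s

# on the sending node (10.0.0.2 = the receiver's 10G interface, an example)
iperf -c 10.0.0.2 -t 30 -P 4
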
Hi sigxcpu,

Please let me ask some questions:

Updated questions:
1- How much processor will be consumed if there is extensive disk use (on the DRBD partition, of course)?
2- What processor do you use?
3- How many cores does DRBD use?

Best regards
Cesar
 
The Proxmox kernel is based on the Red Hat kernel, so if something works on Red Hat it very likely works on Proxmox.

I have been using InfiniBand for DRBD replication and Proxmox cluster communications.
Mellanox MHEA28-XTC cards have been working great for over six months now.
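
For anyone checking their own cards, a quick sanity check looks roughly like this (a sketch; ibstat comes from the infiniband-diags package, and ib0 is the usual first IPoIB interface):

Code:
# is the HCA visible on the PCI bus?
lspci | grep -i mellanox

# load the IPoIB module if it is not loaded already
modprobe ib_ipoib

# port state; note that one node (or a managed switch) must run a
# subnet manager such as opensm before a port goes active
ibstat

# the IPoIB interface should now appear
ifconfig ib0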


Hello e100

Please let me ask some questions:

Updated questions:

1- How much processor will be consumed if there is extensive disk use (on the DRBD partition, of course)?
2- What processor do you use?
3- How many cores does DRBD use?

Best regards
Cesar
 
Hi Cesar,
yes, FC works well - I use QLogic cards and also QLogic switches.

Udo

Hi Udo (the Master of Masters),
or anyone who can help me, please...

Humbly, please let me ask some questions, because nobody has answered me, and soon I will enter an environment unfamiliar to me: processor consumption when using DRBD and a 10 Gb/s NIC for synchronous replication of local disks (for a short-term project, and I feel more lost than Tarzan in Star Wars).

Notes:
1- I know about the configuration optimized for DRBD with LVM and two VGs for HA.
2- I know about bonding and will apply it (round-robin for DRBD); a sketch of the setup I have in mind follows below.
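
This is the fragment I have in mind for /etc/network/interfaces (interface names and the address are examples):

Code:
# dedicated balance-rr bond for the DRBD replication link
auto bond0
iface bond0 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        slaves eth2 eth3
        bond_miimon 100
        bond_mode balance-rr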

The questions:
1- How much processor will be consumed if there is extensive use of SAS disks and a 10 Gb/s NIC for DRBD replication in protocol C?
2- From experience: what processor do you use for this?
3- From experience: how many cores does DRBD use?
4- Or, if you don't know the exact answers, could you give me an estimate? (If possible, please include some specification of the processor and cores used for DRBD.)


Best regards
Cesar
 
Hi Udo (the Master of Masters),
or anyone who can help me, please...

Humbly, please let me ask some questions, because nobody has answered me, and soon I will enter an environment unfamiliar to me: processor consumption when using DRBD and a 10 Gb/s NIC for synchronous replication of local disks (for a short-term project, and I feel more lost than Tarzan in Star Wars).

Notes:
1- I know about the configuration optimized for DRBD with LVM and two VGs for HA.
2- I know about bonding and will apply it (round-robin for DRBD).

The questions:
1- How much processor will be consumed if there is extensive use of SAS disks and a 10 Gb/s NIC for DRBD replication in protocol C?
2- From experience: what processor do you use for this?
3- From experience: how many cores does DRBD use?
4- Or, if you don't know the exact answers, could you give me an estimate? (If possible, please include some specification of the processor and cores used for DRBD.)


Best regards
Cesar
Hi,
DRBD is a kernel module, so you don't really see how many cores or how much RAM it uses. But this isn't a problem.
I would say the CPU consumption is low!

Here are the accumulated CPU times for a node where two resources have been running for approx. 3 months (well used):
Code:
ps aux | grep drbd
root        6713  0.0  0.0      0     0 ?        S    Aug26  33:39 [drbd0_worker]
root        6740  0.0  0.0      0     0 ?        S    Aug26   0:00 [drbd1_worker]
root        6752  0.0  0.0      0     0 ?        S    Aug26  16:59 [drbd0_receiver]
root        6756  0.0  0.0      0     0 ?        S    Aug26 106:13 [drbd1_receiver]
root        6774  0.0  0.0      0     0 ?        S    Aug26  29:03 [drbd0_asender]
root        6775  0.0  0.0      0     0 ?        S    Aug26  36:30 [drbd1_asender]
The processor is an AMD X4 965; with an Opteron 6136 it looks similar.

BTW: I don't use encryption between the nodes!

Udo
 
...
BTW: I don't use encryption between the nodes!

Hi Udo,
Thank you very much for your answer; it is very comforting.

But about your comment on not using encryption:
1- At this link: http://www.drbd.org/users-guide/re-drbdconf.html
you will see that it literally says: "We suggest to use the data-integrity-alg only during a pre-production phase due to its CPU costs. Further we suggest to do online verify runs regularly e.g. once a month during a low load period."

So I believe that if I have to run online verify monthly, it is because I have no guarantee that the data are the same on both nodes. And what will happen to my data if a node drops unexpectedly?

2- In my workshop, every time I enable or disable verification, a synchronization runs. I don't know what is happening in the background.
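
For reference, this is roughly how I have been configuring and running the verification (a sketch assuming a DRBD 8.3-style configuration; r0 is an example resource name):

Code:
# in the resource file (e.g. /etc/drbd.d/r0.res):
#   syncer { verify-alg md5; }          # 8.3 style; DRBD 8.4 moves this to 'net'
#   net    { data-integrity-alg md5; }  # the per-packet check the manual warns about

# start a manual online verify and watch its progress
drbdadm verify r0
cat /proc/drbd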

I would like to hear the comments of a master like you about this.

Best regards
Cesar
 
Hi Udo (the Master of Masters),
or anyone who can help me, please...

Humbly, please let me ask some questions, because nobody has answered me, and soon I will enter an environment unfamiliar to me: processor consumption when using DRBD and a 10 Gb/s NIC for synchronous replication of local disks (for a short-term project, and I feel more lost than Tarzan in Star Wars).

Notes:
1- I know about the configuration optimized for DRBD with LVM and two VGs for HA.
2- I know about bonding and will apply it (round-robin for DRBD).

The questions:
1- How much processor will be consumed if there is extensive use of SAS disks and a 10 Gb/s NIC for DRBD replication in protocol C?
2- From experience: what processor do you use for this?
3- From experience: how many cores does DRBD use?
4- Or, if you don't know the exact answers, could you give me an estimate? (If possible, please include some specification of the processor and cores used for DRBD.)


Best regards
Cesar

Hi Cesar,

Hopefully the following information will help answer your questions....

I am using IPoIB (IP over InfiniBand), which uses the CPU to handle the TCP/IP stack.
I *believe* that most 10G Ethernet cards do TCP/IP processing on the card itself, thus reducing main CPU usage. Since I do not have any 10G Ethernet cards, this is only speculation based on things I have read.
Regarding CPU usage, there is some while DRBD is replicating.
I have not specifically measured it, but it does not seem very significant: less than one CPU core.

We are using IPoIB on some Phenom II X6 CPUs and some Ivy Bridge (6-core) and Sandy Bridge (8-core) Xeon CPUs.
The Sandy Bridge servers perform best, but that is likely because those servers have 64GB of quad-channel RAM where the others have only 16-24GB.
I also think that having a good RAID card with a BBU helps with DRBD speed; we use Areca 1880 and 1882 cards with 4GB RAM and a BBU.

Our latest setup, using Areca 1882 controllers and SSD disks, replicates sequential writes at about 900MB/sec. InfiniBand uses 8b/10b encoding, leaving only 8Gbps (1GB/sec) of the 10Gbps signalling rate for data. Once you include TCP/IP and DRBD protocol overhead, 900MB/sec is about the maximum speed possible with 10G IB.
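
If you want to watch the replication rate and the CPU cost at the same time, nothing fancy is needed (a sketch):

Code:
# live DRBD status, including the current sync/replication counters
watch -n1 cat /proc/drbd

# the DRBD kernel threads and their accumulated CPU time
ps aux | grep '\[drbd'

# overall CPU load while a large sequential write hits the DRBD device
vmstat 1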
 
Hi e100 - a Master of Masters

Thanks for sharing your experiences with me :p

Apparently processor consumption should not be a concern for me. Now I feel much calmer, and I will run my tests in due course.

Best regards
Cesar
 
The Proxmox kernel is based on the Red Hat kernel, so if something works on Red Hat it very likely works on Proxmox.

I have been using InfiniBand for DRBD replication and Proxmox cluster communications.
Mellanox MHEA28-XTC cards have been working great for over six months now.

Hello e100,

How did you get Proxmox to recognize the Mellanox MHEA28-XTC? I have installed these cards and none of the servers recognize them. They do not show up in the Proxmox GUI or in ifconfig.

Any help would be greatly appreciated. Thank you.
 
