[TUTORIAL] P2100G bad network performance in (docker) container

jsterr

I am somewhat puzzled as to how this actually counts as a win when the speed is still below 10G for a 100G NIC.
Of course it is faster than before (1.8 MB/s), but it is now 0.88 GB/s instead of 12.5 GB/s, which is not even the full speed of a 10G NIC (1.25 GB/s).
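For reference, the unit arithmetic behind those numbers (a quick sketch; the observed values are the ones quoted above):

```python
# Line rate in Gbit/s -> theoretical payload ceiling in GB/s (decimal units).
def gbit_to_gbytes(gbit_per_s: float) -> float:
    return gbit_per_s / 8  # 8 bits per byte

for nic_gbit in (10, 100, 200):
    print(f"{nic_gbit}G NIC: {gbit_to_gbytes(nic_gbit):.2f} GB/s theoretical maximum")

# Observed values from this thread:
#   before the fix: 1.8 MB/s  (~0.0018 GB/s)
#   after the fix:  0.88 GB/s (~7 Gbit/s) -- still below a 10G NIC's 1.25 GB/s,
#   and far from the 12.5 GB/s a 100G link could deliver.
```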

Is there a clear recommendation for Proxmox 8.3 with an AMD EPYC 7003-series CPU in terms of NIC manufacturer, or even a specific model of 100G mezzanine OCP 3.0 card?

Any recommendation/opinion on which of these is best?
- Broadcom N2100G [possible 200G (QSFP56) if only Port0 is activated]
- Intel E810-CQDA2 100G [2 x 100G (QSFP28)]
- NVIDIA Mellanox ConnectX-6 Dx [possible 200G (QSFP56) if only Port0 is activated]
- any other recommendation?

I am specifically looking for a NIC that can offload most of the work from the host to the NIC itself and therefore reaches at least 100G (12.5 GB/s) in host-to-host communication, since I want to use it as a direct link for a high-speed Ceph cluster as well as for other communication between Proxmox hosts.
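For context, this is roughly how I measure host-to-host throughput. A single TCP stream usually cannot saturate a 100G link, so I test with several parallel streams. A minimal sketch driving iperf3 from Python (assuming iperf3 is installed, a server is already running on the peer via `iperf3 -s`, and 10.0.0.2 is just a placeholder address):

```python
import json
import subprocess

PEER = "10.0.0.2"   # placeholder: address of the iperf3 server on the other host
STREAMS = 8         # a single TCP stream rarely saturates 100G; use several

# -J emits JSON, -P sets the number of parallel streams, -t the test length in seconds.
result = subprocess.run(
    ["iperf3", "-c", PEER, "-P", str(STREAMS), "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
bits_per_s = report["end"]["sum_received"]["bits_per_second"]
print(f"{STREAMS} streams: {bits_per_s / 8e9:.2f} GB/s aggregate")
```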

I would really appreciate it if someone could:
  1. provide info/benchmarks/tests/experience on what the best mezzanine OCP 3.0 NIC for Proxmox with at least 100G is,
  2. provide info on which of the mentioned NICs reaches the highest throughput (at the same block size) and the best latency, and offloads the most load from the CPU,
  3. say whether the 200G configuration is recommended for Ceph, or rather both 100G ports. I am asking because "200G" is often realized as 4 x 50 Gb/s (PAM4) lanes, which I guess means that each stream/connection is limited to 50 Gb/s, but with four of them you would saturate the 200G? (See the lane arithmetic sketched after this list.)
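To make question 3 concrete, this is the arithmetic I have in mind (a quick sketch of my own reading of the spec sheets; whether a single flow is really pinned to one lane is exactly what I am asking):

```python
# QSFP56 "200G" = 4 electrical lanes at 50 Gbit/s each (PAM4 signaling).
LANES = 4
LANE_GBIT = 50

total_gbit = LANES * LANE_GBIT  # 200 Gbit/s aggregate
print(f"aggregate: {total_gbit} Gbit/s = {total_gbit / 8:.2f} GB/s")  # 25.00 GB/s
print(f"per lane:  {LANE_GBIT} Gbit/s = {LANE_GBIT / 8:.2f} GB/s")    # 6.25 GB/s
```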

Best regards and thanks in advance!
 