Ceph - NIC choice

TwiX

Hi,

I plan to renew my servers in order to build a 5-node Ceph PVE cluster.

I've read some topics related to NICs; some users choose to switch to 100 Gb.

I was about to buy two dual-port 25 Gb NICs per server and split them like this (rough config sketch below the list):

- LACP 2x25 Gb for vmbr0
- LACP 2x25 Gb for Ceph (private + public)
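
For reference, on each node that layout would look roughly like this in /etc/network/interfaces (interface names and addresses are placeholders; I'd put one port of each card in each bond for card-level redundancy):

auto bond0
iface bond0 inet manual
    bond-slaves ens1f0 ens2f0
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

auto bond1
iface bond1 inet static
    address 10.10.10.10/24
    bond-slaves ens1f1 ens2f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
    mtu 9000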

My 'old' 6-node Ceph PVE cluster has dedicated LACP 2x10 Gb SFP+ (DAC cables), and bandwidth doesn't go above 500 Mbps - 2 Gbps on any interface.

From my understanding, latency should be about the same since both use DAC cables.
OK, SFP+ may not be enough anymore, but I don't see the benefit of switching to 100 Gb.

Advice needed! :)

Thanks in advance

Antoine
 
It really depends on the number of OSDs and their speed.

If you have 12x 6 TB NVMe per node, recovery/rebalancing can really consume bandwidth (but of course you can tune it to slow down the parallelism).
(A good NVMe drive does about 2 GB/s, i.e. roughly 16 Gbit/s.)
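
To give an idea, the recovery parallelism can be throttled at runtime with something like this (exact defaults depend on your Ceph release, and with the mclock scheduler you may also need to set osd_mclock_override_recovery_settings to true):

# limit concurrent backfills per OSD
ceph config set osd osd_max_backfills 1
# limit concurrent recovery ops per OSD
ceph config set osd osd_recovery_max_active 1
# add a small pause between recovery ops on flash OSDs
ceph config set osd osd_recovery_sleep_ssd 0.1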

Personally, I'm running 2x25 Gb in production too:
100G QSFP switch ports, split into 4x25 Gb with a breakout cable. That way I can still upgrade some nodes to 100 Gb later if needed.
 
Hi!

Wow, 12 NVMe per node! No, I plan to start with 4 NVMe drives (1.6 TB) per node.
A 100 Gb breakout cable could be a way to cover future needs, thanks ;)

How do you set up each interface in your interfaces config file? Does plugging in the breakout cable make them appear?
 
Which switches are you using?

I'm totally with @spirit: with NVMe you should definitely have 100 GbE as a basis. Whether you do it as 4x 25 GbE or not doesn't really matter, but NVMe can achieve decent throughput, so the network shouldn't become the bottleneck.
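
Rough math, assuming the ~2 GB/s per drive figure from above and your 4 drives per node:

4 NVMe x 2 GB/s     = 8 GB/s per node
8 GB/s x 8 bit/byte = 64 Gbit/s
64 Gbit/s > 50 Gbit/s (2x25 Gb LACP)

So even four drives can, in theory, saturate a 2x25 Gb bond during recovery.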

Do you actually need 2x 25 GbE to connect the VMs? I would rather dedicate more bandwidth to Ceph. In any case, your switch should support MTU 9000 and LACP layer3+4 hashing.
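
Two quick host-side checks once it is set up (assuming the Ceph bond is called bond1; the peer address is a placeholder for another node on the Ceph network):

# verify LACP is negotiated and the hash policy is layer3+4
cat /proc/net/bonding/bond1
# verify jumbo frames pass end to end (8972 = 9000 - 28 bytes of IP/ICMP headers)
ping -M do -s 8972 10.10.10.11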
 
Thanks,

Indeed, I don't need 2x25 Gb to connect the VMs. But with two dual-port 25 Gb cards, there is still 2x25 Gb left over for the VMs...
My switches can handle MTU 9000 and layer3+4 hashing as well.
 
Another good option could be to create a single 4x25 Gb LACP bond (via a 100 Gb NIC with a breakout cable, or via 25 Gb cards) and separate the traffic with VLANs...
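
Something like this, with a VLAN-aware bridge on top of the bond (interface names, VLAN ID and addresses are just placeholders):

auto bond0
iface bond0 inet manual
    bond-slaves ens1f0 ens1f1 ens2f0 ens2f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
    mtu 9000

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Ceph public + cluster network on its own VLAN
auto vmbr0.100
iface vmbr0.100 inet static
    address 10.10.10.10/24
    mtu 9000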
 
