Is this an ideal setup for Ceph?

aradabir007

New Member
Nov 8, 2023
Hi,

I'm building a cluster with Proxmox mainly for running LXC CTs. I have 5 identical servers all connected through a 40Gbps switch. I also want to use the same servers for Ceph so they can share their local disks with each other.

The PVE Ceph wiki page recommends 3 NICs for a high-performance setup.

However, my servers only have 2 NICs:
  • 4 port 1Gbps NIC on the mainboard
  • 2 port 40Gbps NIC PCIe network card
Currently one of the 40Gbps ports is in use: all servers get their 10Gbps public internet connection from the switch through this port.

Does this mean I meet the requirements for a high-performance setup? Can I use one 1Gbps port for corosync, one 40Gbps port for Ceph public traffic + internet, and the other 40Gbps port for internal Ceph traffic?
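For what it's worth, that split maps onto Ceph's own notion of a public vs. cluster network, which is set in ceph.conf. A minimal sketch (the subnets below are placeholders for illustration, not your actual addressing):

```ini
[global]
    # Front-side traffic (clients/monitors/OSDs), here sharing the
    # internet-facing 40Gbps port -- example subnet only
    public_network = 10.10.10.0/24
    # OSD replication and heartbeat traffic on the second, dedicated
    # 40Gbps port -- example subnet only
    cluster_network = 10.10.20.0/24
```

In PVE this corresponds to the "Public Network" and "Cluster Network" fields shown when you initialize Ceph on a node.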

Would this be ideal? What are your thoughts?
 
Ideally you would use a dedicated network just for storage (a SAN), so you would use a dedicated network interface only for Ceph.
In general, high bandwidth is a requirement for Ceph - I do not think that 1 Gbit is enough.
 
The wiki recommends 1Gbps for corosync. Why do you think it's not enough?