Is this an ideal setup for Ceph?

aradabir007

New Member
Nov 8, 2023
Hi,

I'm building a cluster with Proxmox mainly for running LXC CTs. I have 5 identical servers all connected through a 40Gbps switch. I also want to use the same servers for Ceph so they can share their local disks with each other.

The PVE Ceph wiki page recommends three NICs for a high-performance setup.

However, my servers have two NICs:
  • 4-port 1Gbps NIC on the mainboard
  • 2-port 40Gbps PCIe network card
Currently one of the 40Gbps ports is in use, and all servers have a 10Gbps public internet connection from the switch through this port.

Does this mean I meet the requirements for a high-performance setup? Can I use one 1Gbps port for corosync, one 40Gbps port for Ceph public traffic + internet, and the other 40Gbps port for internal Ceph traffic?

Would this be ideal? What are your thoughts?
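For reference, the split I have in mind would look roughly like this in ceph.conf. This is only a sketch: the subnets are placeholders for whatever addresses I end up assigning to each port.

```ini
# Hypothetical ceph.conf excerpt -- subnet values are placeholders
[global]
    # Client/monitor traffic over the shared 40Gbps port (also carries internet)
    public_network  = 10.10.10.0/24
    # OSD replication/heartbeat traffic over the dedicated 40Gbps port
    cluster_network = 10.10.20.0/24
```

Corosync would then run on its own 1Gbps subnet, configured as a separate link in the Proxmox cluster setup.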
 
Ideally you would use a dedicated network just for storage (a SAN), so you would use a dedicated network interface only for Ceph.
In general, higher bandwidth is a requirement for Ceph; I do not think that 1 Gbit is enough.
 
Ideally you would use a dedicated network just for storage (a SAN), so you would use a dedicated network interface only for Ceph.
In general, higher bandwidth is a requirement for Ceph; I do not think that 1 Gbit is enough.
The wiki recommends 1Gbps for corosync. Why do you think that isn't enough?
 
