Ceph pve hyperconverged networking

davids01

New Member
May 13, 2025
Hi,

The Ceph docs suggest a single network is sufficient, with a separate cluster network as an optional optimisation.

https://docs.ceph.com/en/squid/rados/configuration/network-config-ref/

Whilst PVE suggests separating the two, which makes sense if in doubt. I'm assuming that's recommended for situations where hardware/NIC capability varies, e.g. where the public and cluster networks can't both go on equally capable interfaces.

https://pve.proxmox.com/pve-docs/chapter-pveceph.html


I just wanted to double check this makes sense for my setup:

3x Dell servers, each with 4x 1.92TB SSDs for Ceph, a 4-port 25GbE NIC and a 2-port 1GbE NIC:

- separate dedicated corosync ring0 on the 1GbE copper OOB switch
- 2x 25GbE ports dedicated to Ceph* private - a dedicated network on a 100GbE switch with breakouts
- ports 3 and 4 on the 4-port server NICs will be used for other networks required for VM access (not Ceph)
- corosync ring1 on either port 3 or 4, TBC (sketched further down)


*these two will be LACP bonded and carry a single Ceph network
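
As a rough sketch, the bond could look something like this in /etc/network/interfaces on each node - the NIC names, the 10.10.10.0/24 subnet and the MTU are just placeholders for my setup, not confirmed values:

  auto enp1s0f0
  iface enp1s0f0 inet manual

  auto enp1s0f1
  iface enp1s0f1 inet manual

  # 2x 25GbE LACP bond carrying the single Ceph network
  auto bond0
  iface bond0 inet static
          bond-slaves enp1s0f0 enp1s0f1
          bond-miimon 100
          bond-mode 802.3ad
          bond-xmit-hash-policy layer3+4
          address 10.10.10.11/24
          # jumbo frames only if the 100GbE switch side is configured for them
          mtu 9000

The switch side would need a matching 802.3ad port-channel across the two breakout ports.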

Just trying to establish whether this is the right move: going with a single Ceph network on the 2x 25GbE ports. Because the ports are identical in capability, I was thinking there's no need to separate the Ceph public/cluster networks. Since corosync is separated from Ceph, and I believe LACP has minimal latency impact, I was leaning towards this approach for simplicity.
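
On the corosync side, the idea would be roughly the following when creating the cluster - the addresses are made-up placeholders (192.168.100.0/24 for the 1GbE OOB switch as link 0, 10.20.30.0/24 for the port 3/4 network as link 1):

  # on the first node
  pvecm create my-cluster --link0 192.168.100.11 --link1 10.20.30.11

  # on each of the other two nodes, joining with their own link addresses
  pvecm add 192.168.100.11 --link0 192.168.100.12 --link1 10.20.30.12

With the default passive link mode, corosync uses one link at a time and fails over to the other if it goes down.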

thanks
 
You explicitly named the private Ceph network, but where will your Ceph public network be?

FYI: depending on the SSDs, 25 Gb could be the bottleneck. A single PCIe 3.0 NVMe will outperform your 25 Gb link.
 
Ah, apologies ... I meant a single Ceph public network (no separate Ceph cluster network), using private IPs on an isolated layer 2 VLAN.

I think the PVE install will default to something like 10.10.10.0 with a single Ceph network.
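
i.e. on the node side it would be roughly the following - the subnet is a placeholder for whatever the bond ends up on:

  # initialise Ceph against the bonded 25GbE subnet only
  pveceph init --network 10.10.10.0/24

  # which should leave /etc/pve/ceph.conf with something like
  #   [global]
  #   public_network = 10.10.10.0/24
  # and with no separate cluster_network configured, OSD replication
  # traffic just uses that same public network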

The SSDs are SATA.