Full Mesh Network for Ceph best practice

Templerschaf90

New Member
Mar 27, 2026
Hello everyone

I have a 3-node cluster with two 100 Gb NICs per node. I set up the mesh network and it works fine. I then set up Ceph with the public and private network on the same subnet.
My question: as far as I know, it's best practice to separate the public and private (cluster) networks for Ceph. Is this also true for a mesh network? And if so, how do you configure that in a mesh setup?

Thank you.
Your setup is fine as-is — using the same network for both Ceph public and cluster traffic is the standard configuration for a full mesh.

The Full Mesh wiki page does exactly this: Ceph is initialized with a single mesh network (`pveceph init --network 10.15.15.x/24`). The separate 10.14.14.0/24 network in those examples is for PVE cluster/corosync, not a second Ceph network.
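For reference, the resulting `/etc/pve/ceph.conf` ends up with both networks pointing at the same mesh subnet, roughly like this (the 10.15.15.0/24 subnet is the wiki example value, substitute your own mesh network):

Code:
[global]
    public_network = 10.15.15.0/24
    cluster_network = 10.15.15.0/24

That both keys hold the same subnet is exactly the single-network setup you already have, so there is nothing to change.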

The "separate public and cluster networks" advice comes from switched environments where:
  • Links are 10–25 Gb and can be saturated when recovery/rebalancing competes with client I/O
  • You have switches with spare VLANs, so adding a second subnet is easy

In a 3-node full mesh at 100 Gb, neither applies — you have 100 Gb of dedicated bandwidth per node-pair, and all your mesh ports are already consumed (1 link per peer). To physically separate the networks, you'd need 4 mesh ports per node to build two independent meshes.

If you ever find that OSD recovery is impacting client I/O (unlikely at 100 Gb with 3 nodes, but worth knowing), you can throttle recovery instead:

Bash:
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 3

This limits how aggressively OSDs recover data after a failure or rebalance, keeping more bandwidth available for client traffic.
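If you do tune these, the matching `get` commands let you check what values are currently in effect before and after the change:

Bash:
ceph config get osd osd_max_backfills
ceph config get osd osd_recovery_max_active

Note that the effective defaults vary between Ceph releases (newer releases with the mClock scheduler handle recovery throttling somewhat differently), so it is worth checking the current values rather than assuming them.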