Full mesh network for Ceph configuration

dbrega

New Member
Sep 6, 2025
I am following this guide:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_setup
Following the last paragraph, which describes the broadcast setup, I configured two network cards on each host to be used with Ceph.
When I install and configure Ceph, there are two fields to fill in the Setup -> Configuration window:
- Public Network
- Cluster Network
Should I put the same bond in both, or would it be better to create two bonds, one for each setting?
 
Assuming it's a 3-node Ceph cluster with no switch, as in https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Example, I put the Ceph public, Ceph cluster (private), and Corosync traffic on this network.

I also set the datacenter migration network to use this network (either via GUI or CLI) and set it to insecure (via CLI).
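
For reference, this is roughly what that ends up looking like in /etc/pve/datacenter.cfg (a minimal sketch; the subnet shown is the link-local network from the P.S. below, adjust it to whatever your mesh bond actually uses):

```
# /etc/pve/datacenter.cfg -- migration over the full-mesh network, unencrypted
migration: type=insecure,network=169.254.1.0/24
```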

So, to your question, set Public & Cluster network to the full-mesh broadcast network.
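
In ceph.conf terms that simply means both entries point at the same subnet. A minimal sketch, assuming the mesh bond carries the link-local network mentioned in the P.S. below:

```
# /etc/pve/ceph.conf -- both Ceph networks on the single full-mesh bond
[global]
    public_network = 169.254.1.0/24
    cluster_network = 169.254.1.0/24
```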

P.S. To make really sure this traffic is never routed, I use 169.254.1.0/24 (an IPv4 link-local range) for this network.
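
A minimal /etc/network/interfaces sketch of that broadcast bond on one node (the slave NIC names are assumptions, substitute your own; the other two nodes get .2 and .3):

```
auto bond0
iface bond0 inet static
        address 169.254.1.1/24
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode broadcast
# Full-mesh broadcast bond: Ceph public + cluster and Corosync
```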
 
Yes, my configuration is 3 nodes in a full-mesh configuration with Ceph.
Each node has 8 Ethernet ports at 2.5 Gbit/s.
This is node1's network configuration: (screenshots attached)

This is node2's network configuration: (screenshots attached)

This is node3's network configuration: (screenshots attached)

For the Ceph public network I used the subnet 10.10.10.0/29 with two 2.5 Gbit/s Ethernet ports configured in a bond (bond0) with the bond mode set to broadcast.
For the Ceph cluster network I used the subnet 10.10.11.0/29 with another two 2.5 Gbit/s Ethernet ports configured in a bond (bond1) with the bond mode set to broadcast.
For the Proxmox VE cluster I used the subnet 192.168.0.0/24 with one 2.5 Gbit/s Ethernet port.
The other three Ethernet ports are available for virtual machines and the business network. (A text sketch of the bond configuration is shown below.)
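
Since the screenshots may be hard to read, this is roughly what the two bonds look like on node1 in /etc/network/interfaces (the slave NIC names are placeholders; node2 and node3 use .2 and .3 respectively):

```
auto bond0
iface bond0 inet static
        address 10.10.10.1/29
        bond-slaves enp1s0 enp2s0
        bond-miimon 100
        bond-mode broadcast
# Ceph public network (full-mesh broadcast)

auto bond1
iface bond1 inet static
        address 10.10.11.1/29
        bond-slaves enp3s0 enp4s0
        bond-miimon 100
        bond-mode broadcast
# Ceph cluster network (full-mesh broadcast)
```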

Is this configuration correct?
Can I improve anything in this full-mesh configuration?