Ceph getting wrong public network number?

Gilberto Ferreira

Hello everyone.
I don't know if this is right or not, but every time I deploy a Ceph cluster, I notice that the public network (and the cluster network as well) in /etc/pve/ceph.conf looks odd.
As you can see below, public_network is set to 172.17.0.10/24, which is the address of the first Proxmox VE server.
Isn't it supposed to be the network number, like 172.17.0.0/24 in this case?
What if I just change it? Could that cause any trouble?
Thanks!

P.S.: Here's the configuration:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 172.18.0.10/24 ---------------------------------> Notice it's the same here: the address of the first Proxmox server
fsid = 7692aeeb-f551-4b49-a516-318a122f7204
mon_allow_pool_delete = true
mon_host = 172.17.0.10 172.17.0.20 172.17.0.30
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 172.17.0.10/24 -------------------------------> Oh look! It's here again!

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mon.pve01]
public_addr = 172.17.0.10

[mon.pve02]
public_addr = 172.17.0.20

[mon.pve03]
public_addr = 172.17.0.30
 
cluster_network = 172.18.0.10/24
public_network = 172.17.0.10/24

Why would you want to change them? Is something not working?

The short explanation: to identify which network it is, you need the IP address and the netmask. In this case, with CIDR /24, Ceph knows that the first 24 bits are the relevant ones (aka the network prefix), so the host bits don't matter. I would keep it the way it is.
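For what it's worth, you can verify this with any CIDR-aware tool. Here is a quick sketch using Python's standard ipaddress module (nothing Ceph-specific, just illustrating how the host bits get masked off):

import ipaddress

# Parse a host address with a /24 prefix; strict=False tells the
# module to mask off the host bits instead of raising an error.
net_from_host = ipaddress.ip_network("172.17.0.10/24", strict=False)
net_canonical = ipaddress.ip_network("172.17.0.0/24")

print(net_from_host)                   # 172.17.0.0/24
print(net_from_host == net_canonical)  # True

# Any address on the subnet belongs to the same network, so the
# other monitors still match the public_network setting.
print(ipaddress.ip_address("172.17.0.20") in net_from_host)  # True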
 
Yes, I realize that.
But my question is: why is there a host address instead of a network address?
We need to keep in mind that this configuration appears on all nodes across the cluster, right? It's common to the whole cluster.
I had never checked whether, if the first server goes down, that address changes to the next available one.
I will check this.
Thanks for the answer, anyway!
 