Change Public Network and/or Cluster Network in Ceph

Oct 21, 2020

Just to be sure I've got this right.

To change the Public Network and/or the Cluster Network in Ceph, you can modify the Ceph configuration file:
Code:
/etc/pve/ceph.conf
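
This file lives on the Proxmox cluster filesystem (pmxcfs), so any edit replicates to every node automatically. It may be worth keeping a copy outside /etc/pve before editing, for example:
Code:
cp /etc/pve/ceph.conf /root/ceph.conf.bak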

For example, here is the current configuration on my test machine:
Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 10.1.0.245/22
         fsid = dc0e8f62-4ab8-441b-9891-0cf905b52e87
         mon_allow_pool_delete = true
         mon_host =  10.1.0.245
         osd_pool_default_min_size = 2
         osd_pool_default_size = 2
         public_network = 10.1.0.245/22

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.pveZZ]
         public_addr = 10.1.0.245

If I want to change the networks, I will have to change cluster_network and public_network:
Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 10.10.10.1/25
         fsid = dc0e8f62-4ab8-441b-9891-0cf905b52e87
         mon_allow_pool_delete = true
         mon_host =  10.1.0.245
         osd_pool_default_min_size = 2
         osd_pool_default_size = 2
         public_network = 10.10.20.1/25

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.pveZZ]
         public_addr = 10.1.0.245

It will probably need a reboot.
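
Instead of a full reboot, restarting the Ceph services may be enough. A minimal sketch, assuming systemd-managed Ceph as on Proxmox VE, run on one node at a time:
Code:
# restart all Ceph daemons on this node
systemctl restart ceph.target

# wait for the cluster to report healthy before moving to the next node
ceph -s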

Am I missing something?
 
Read the Ceph documentation about changing public_network, because the monitors depend on it.

On the other hand, changing cluster_network is mostly just a config change and a restart.
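
For example, something like this per node should be all it takes (a sketch, assuming systemd-managed OSDs):
Code:
# make the OSDs on this node pick up the new cluster_network
systemctl restart ceph-osd.target

# each osd line shows its public address and its cluster address
ceph osd dump | grep '^osd\.'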
 
Read the Ceph documentation about changing public_network, because the monitors depend on it.

So you also need to change "mon_host" and "public_addr":

Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 10.10.10.1/25
         fsid = dc0e8f62-4ab8-441b-9891-0cf905b52e87
         mon_allow_pool_delete = true
         mon_host =  10.10.20.1
         osd_pool_default_min_size = 2
         osd_pool_default_size = 2
         public_network = 10.10.20.1/25

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.pveZZ]
         public_addr = 10.10.20.1
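
Note that editing ceph.conf alone does not move a running monitor: its address is stored in the cluster's monmap. With several monitors you can destroy and recreate them one at a time; with a single monitor, follow the monmap-editing procedure in the Ceph documentation instead. As a sketch, if pveZZ were one of several monitors on Proxmox VE:
Code:
# on the node whose monitor should move (one monitor at a time, to keep quorum)
pveceph mon destroy pveZZ
pveceph mon create --mon-address 10.10.20.1

# verify the new address in the monmap
ceph mon dump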
 

Hi @vaschthestampede,

Sorry to reopen this topic, but did you manage to do it? Have you changed the "cluster_network" without issues?

I'm asking because I'm about to change my hosts' configuration to add the cluster network now. We only configured the "public_network" (though it is on a private network); so, do I just need to add "cluster_network" to the "global" section and "cluster_addr" to the "mon.xxxxxx" sections? Is that all?

EDIT: So I just need to change the required parameters in "/etc/pve/ceph.conf" and reboot?

Thank you very much.
 
Yes, the test was successful.
 
Hi @vaschthestampede,

I've changed the configuration and saved it; it replicated to the other hosts in the cluster with no issues, but I still see no traffic at all on the cluster network. Do I need to change something else?

Here is my lab config (pretty much default), just for reference (already with the cluster network in place):
Code:
[global]
     auth_client_required = cephx
     auth_cluster_required = cephx
     auth_service_required = cephx
     fsid = 5dfa841b-xxxx-xxxx-xxxx-xxxxxxxxxxxx
     mon_allow_pool_delete = true
     mon_host = 10.0.1.221 10.0.1.222 10.0.1.223 10.0.1.224 10.0.1.225 10.0.1.226
     ms_bind_ipv4 = true
     ms_bind_ipv6 = false
     osd_pool_default_min_size = 2
     osd_pool_default_size = 3
     public_network = 10.0.1.221/24
     cluster_network = 172.16.172.1/24

[client]
     keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
     keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.pve1]
     host = pve1
     mds_standby_for_name = pve

[mds.pve2]
     host = pve2
     mds_standby_for_name = pve

[mds.pve3]
     host = pve3
     mds_standby_for_name = pve

[mds.pve4]
     host = pve4
     mds_standby_for_name = pve

[mds.pve5]
     host = pve5
     mds_standby_for_name = pve

[mds.pve6]
     host = pve6
     mds_standby_for_name = pve

[mon.pve1]
     public_addr = 10.0.1.221
     cluster_addr = 172.16.172.1

[mon.pve2]
     public_addr = 10.0.1.222
     cluster_addr = 172.16.172.2

[mon.pve3]
     public_addr = 10.0.1.223
     cluster_addr = 172.16.172.3

[mon.pve4]
     public_addr = 10.0.1.224
     cluster_addr = 172.16.172.4

[mon.pve5]
     public_addr = 10.0.1.225
     cluster_addr = 172.16.172.5

[mon.pve6]
     public_addr = 10.0.1.226
     cluster_addr = 172.16.172.6

Thank you very much in advance. Best regards.
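 
A likely missing piece, going by the Ceph documentation: monitors always operate on the public network, so the cluster network only carries OSD replication and heartbeat traffic (the cluster_addr lines in the [mon.*] sections should therefore not be needed). The OSDs also have to be restarted before they bind their new cluster address, and traffic only shows up during writes, recovery, or rebalancing. A quick check, assuming the addresses from the config above:
Code:
# restart OSDs node by node, waiting for HEALTH_OK in between
systemctl restart ceph-osd.target
ceph -s

# every osd line should now list a 172.16.172.x cluster address
ceph osd dump | grep '^osd\.'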
 
