When I installed Ceph I had not yet created a separate network for the cluster. I have since done this and want to change the cluster network.
I have three nodes, all are configured as monitors, managers, and metadata servers.
I can't seem to find a documented, recommended procedure. The cluster is not in production yet, so there are no running VMs, but I really, really don't want to tear it down and start from scratch. In a perfect world, I would just need to change the config and restart each node. I have also read that all the OSDs need to be restarted.
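To show what I mean, this is roughly what I'm hoping would work (the 192.168.200.0/24 subnet below is made up for the example, and I'm assuming a plain service restart is all that's needed):
Code:
# Edit the shared config on any node (Proxmox keeps it at /etc/pve/ceph.conf)
# and point cluster_network at the new subnet, e.g.:
#   cluster_network = 192.168.200.0/24

# Then restart the OSDs on each node in turn so they rebind their
# cluster-facing (replication/heartbeat) traffic:
systemctl restart ceph-osd.target

# My understanding is that the monitors and managers only use the
# public_network, so they shouldn't care about this change.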
I greatly appreciate the assistance.
Here is my config (changed slightly from the actual config for privacy):
Code:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 192.168.100.5/24
mon_allow_pool_delete = true
mon_host = 192.168.100.5 192.168.100.7 192.168.100.8
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 192.168.100.5/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring
[mds.district-pve1]
host = district-pve1
mds_standby_for_name = pve
[mds.district-pve3]
host = district-pve3
mds_standby_for_name = pve
[mds.district-pve4]
host = district-pve4
mds_standby_for_name = pve
[mon.district-pve1]
public_addr = 192.168.100.5
[mon.district-pve3]
public_addr = 192.168.100.7
[mon.district-pve4]
public_addr = 192.168.100.8
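For what it's worth, my plan was to verify the change afterwards with something like this:
Code:
# Each osd.N line in the dump lists the OSD's public and cluster
# addresses; the cluster addresses should land in the new subnet:
ceph osd dump | grep '^osd\.'

# Per-OSD detail if needed (front_addr = public, back_addr = cluster):
ceph osd metadata 0 | grep _addr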