Hi everyone,
I need to change the management IP of my cluster nodes and move them to another subnet.
The cluster is already formed, and Ceph is running on its own subnets.
There is a physical network interface attached to the vmbr0 bridge on one subnet (192.168.0.0/24), which has carried the virtual machines and the cluster management traffic. There is another interface on a second subnet (10.10.10.0/24), which carries the cluster (corosync) communication and the Ceph public network. Finally, there is a third subnet (10.0.0.0/24) that Ceph uses for replication traffic between the OSDs.
Now I need to change only the node management address, and I need it to be on yet another subnet (172.16.1.0/24).
Does the procedure remain the same?
I tried to follow the procedure suggested here, which involves modifying three files. However, I did not need to touch corosync.conf, since the cluster communication is already on its own, correct subnet; that subnet will not change.
So I changed only the vmbr0 address in /etc/network/interfaces and the corresponding entry in /etc/hosts, then rebooted all the nodes (a sketch of the change is below). But something doesn't work right.
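For reference, the change on each node looked roughly like this. This is only a sketch for a single node: the gateway, bridge port, and hostname are placeholders, and each node of course gets its own 172.16.1.x address.
Code:
# /etc/network/interfaces -- only the vmbr0 stanza was changed
# (gateway and bridge port below are placeholders)
auto vmbr0
iface vmbr0 inet static
        address 172.16.1.12/24
        gateway 172.16.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# /etc/hosts -- the old 192.168.0.x entry was replaced
# (hostname is a placeholder)
172.16.1.12 node2.mydomain.local node2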
I can reach the web administration interface, and I can log in via SSH to each node individually. But when I open the Summary of a remote node in the cluster web UI, a spinner waiting for information appears and then an error message: "no route to host (595)". The same happens when I try to open a remote node's console through the Shell in the web UI: it only opens for the local node. For a remote node, the black console screen appears with the following message:
Code:
ssh: connect to host 192.168.0.37 port 22: No route to host
That address (192.168.0.37) is the node's old one, so something is still trying to connect to the old address. The node's new management address is 172.16.1.12 (the 172.16.1.0/24 subnet was created for cluster administration only).
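In case it helps, these are the kinds of checks I can run on each node to track down where the old address might still be referenced (a sketch; node2 and the old IP are example values):
Code:
# How does the node name resolve locally? Should return 172.16.1.x now
getent hosts node2

# Any leftover reference to the old address under /etc?
grep -r '192.168.0.37' /etc/

# Restart the web/API daemons so they pick up the new address
systemctl restart pveproxy pvedaemon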
Apparently, something somewhere has not been updated. The file /etc/corosync/corosync.conf still holds the original, still-correct addresses on the corosync subnet, since nothing changed there:
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.11
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.12
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.13
  }
  node {
    name: node5
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.10.10.15
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster2
  config_version: 4
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
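Cluster communication itself seems fine. To double-check that corosync is unaffected by the management change, something like this can be run on any node:
Code:
# Quorum / membership overview (addresses should stay on 10.10.10.0/24)
pvecm status

# Per-link corosync status for the local node
corosync-cfgtool -s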
Does anyone know how to fix this?