Changing the IP of nodes does not work after cluster creation

Sep 14, 2020
Hello.

I modified the IP of a node before joining it to the cluster, and everything was working normally.

I modified it only in the web administration screen and in the '/etc/hosts' file.

After creating the cluster, I realized that I wanted to modify the IPs of the nodes again, to organize the network: I want to separate cluster administration onto a new subnet.

I then researched and found instructions saying to change the addresses in three files: "/etc/network/interfaces", "/etc/hosts" and "/etc/pve/corosync.conf".
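For reference, the corosync part of those instructions amounts to roughly the edit below. The addresses and version number are only examples, and this step only applies when the corosync/cluster network itself moves, which, as I explain further down, is not my case:

Code:
# Example only: the relevant pieces of /etc/pve/corosync.conf when the
# corosync/cluster network itself is being changed.
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.11    # the new cluster address would go here
  }
  # ... one entry per node ...
}

totem {
  # ... other totem settings stay as they are ...
  config_version: 5            # must be increased on every edit
}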

I made changes to all nodes and rebooted them all. But it didn't work as expected.

I'll explain the context:

The Cluster is already formed and Ceph is working on another subnet.

There is a physical network interface connected to the vmbr0 bridge on one subnet (192.168.0.0/24), where the virtual machines and cluster management have been running.

There is also another interface on another subnet (10.10.10.0/24) where the cluster communication and the Ceph public network run. Finally, there is a third subnet (10.0.0.0/24) where Ceph transfers data between OSDs.

Now, I need to change only the management address of the nodes. I need it to be on another subnet (172.16.1.0/24).

Does the procedure remain the same?

I tried to do as suggested here, modifying only those files. However, I did not need to modify "corosync.conf", since the cluster communication is already on the separate, correct subnet; that is, this subnet will not be changed.

So I applied the change only to the vmbr0 interface in /etc/network/interfaces and in the /etc/hosts file, and rebooted all nodes. But something doesn't work right.
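Concretely, the change I applied on each node was roughly the following. The bridge port, gateway and host names below are placeholders; only the subnets and the address 172.16.1.12 come from my setup:

Code:
# Sketch of the edit - bridge port, gateway and hostnames are examples.
# /etc/network/interfaces: vmbr0 moved from 192.168.0.0/24 to 172.16.1.0/24
auto vmbr0
iface vmbr0 inet static
        address 172.16.1.12/24
        gateway 172.16.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# /etc/hosts: the node name now points to the new management address
172.16.1.12 nodeX.mydomain.local nodeX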

I can access the web administration interface and I can log in via SSH normally on all nodes, individually. But when I try to open the Summary of a remote node in the cluster web administration screen, an icon waiting for information appears and then it gives the error message "no route to host (595)". The same happens when I try to access the console of a remote node through the Shell in the cluster web administration screen: it no longer opens, only the local node's shell does. When I try to open a remote node's shell, it shows a black screen with the following message:

Code:
ssh: connect to host 192.168.0.37 port 22: No route to host

This (192.168.0.37) is the old address of the node. That is, it is still trying to connect to the old address. The new administration address for the node is 172.16.1.12 (the subnet 172.16.1.0/24 was created for cluster administration only).
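In case it helps, this is roughly how I have been looking for leftovers of the old address; the paths are just the places I thought of checking:

Code:
# Example only: search for references to the old management address.
grep -r '192.168.0.37' /etc/hosts /etc/network/interfaces /etc/pve 2>/dev/null
grep -rl '192.168.0\.' /etc/ssh /root/.ssh 2>/dev/null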

Apparently, there is some place that has not been updated. The file /etc/corosync/corosync.conf still has the same (and correct) addresses from the other subnet, since nothing was changed there:

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.11
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.12
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.13
  }
  node {
    name: node5
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.10.10.15
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster2
  config_version: 4
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

Another strange but important behavior is that I can ping the nodes by IP address, but not by name, as shown below:

Code:
root@node1:~# ping node3
ping: node3: Temporary failure in name resolution
root@node1:~#

But the DNS configuration seems to be working, at least partially, as it is possible to ping hosts on the internet by name.
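If, as in a default installation, each node's /etc/hosts only contains its own entry, that would explain the failure above. This is a sketch of what I understand the file would need on node1 for the names to resolve locally; the node-to-address mapping here is purely illustrative:

Code:
# Example /etc/hosts on node1 - the addresses below are placeholders.
127.0.0.1       localhost
172.16.1.11     node1.mydomain.local node1
172.16.1.12     node2.mydomain.local node2
172.16.1.13     node3.mydomain.local node3
172.16.1.15     node5.mydomain.local node5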

Searching with the find command, I realized that some certificates in the database were not updated, as some of the new IP addresses are not present in the file /var/lib/pve-cluster/config.db.
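For anyone wanting to check the same thing, something like the query below should show which entries still carry the old address. This assumes the usual pmxcfs layout with a "tree" table; I only read from the file and did not edit it directly, since its contents are what appears under /etc/pve while pve-cluster is running:

Code:
# Example only: read-only search of the pmxcfs database for the old address.
# Assumes the usual "tree" table layout; do not modify this file directly.
sqlite3 /var/lib/pve-cluster/config.db \
  "SELECT name FROM tree WHERE CAST(data AS TEXT) LIKE '%192.168.0.37%';"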

Does anyone know how to fix this?
 
