Change Cluster Nodes IP Addresses

Due to corporate IP changes, we may need to change the subnet of our Proxmox cluster (which also runs Ceph). The comments I've seen in this thread pertain to changing the IP addresses of a single node (or a few nodes).

Is it any different if we need to change the subnet and IP addresses of the whole cluster in one sitting?

Similarly, is it the same if we change the subnet mask but not the IP addresses? For example, switching from a /24 to a /23 with no change of IP addresses (but a possible change of gateway)?
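To make the second case concrete, a mask-only change would be a one-line edit in /etc/network/interfaces on each node; a minimal sketch (the interface address, gateway, and bridge port here are assumptions, not values from this thread):

Code:
auto vmbr0
iface vmbr0 inet static
    # before: address 192.168.0.11/24
    address 192.168.0.11/23
    gateway 192.168.0.1      # update too if the router moves
    bridge-ports eno1        # assumed physical port
    bridge-stp off
    bridge-fd 0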

Thanks!
 
Hello,

I need to modify the management IP of my Cluster nodes. I want to put them on another subnet.

The Cluster is already formed and Ceph is working on another subnet.

There is a physical network interface connected to the vmbr0 bridge on one subnet (192.168.0.0/24), where the virtual machines and cluster management have been running.

There is also another interface on a second subnet (10.10.10.0/24), where the cluster communication and the Ceph public network live. Finally, there is a third subnet (10.0.0.0/24) that Ceph uses for data traffic between OSDs.

Now, I need to change only the nodes' management address. I need it to move to another subnet (172.16.1.0/24).

Does the procedure remain the same?

I tried to do as suggested here. However, I did not need to modify corosync.conf, since the cluster communication is already on its own, correct subnet; that subnet will not be changed.

So I applied the change only to the vmbr0 interface in /etc/network/interfaces and in /etc/hosts, and rebooted all nodes. But something doesn't work right.

I can access the web administration interface and log in via SSH normally on each node individually. But when I try to open the Summary of a remote node in the cluster web interface, a waiting icon appears and it fails with "no route to host (595)". The same happens when I try to open a remote node's console via the Shell in the web interface: it only opens for the local node. For a remote node, it shows a black screen with the following message:

Code:
ssh: connect to host 192.168.0.37 port 22: No route to host

This (192.168.0.37) is the old address of the node; that is, it is still trying to connect to the old address. The new administration address for the node is 172.16.1.12 (the subnet 172.16.1.0/24 was created for cluster administration only).
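For clarity, the vmbr0 change on this node was essentially the following (the gateway and bridge port lines are illustrative, not the exact values):

Code:
auto vmbr0
iface vmbr0 inet static
    # before: address 192.168.0.37/24
    address 172.16.1.12/24
    gateway 172.16.1.1       # illustrative; whatever routes the new subnet
    bridge-ports eno1        # illustrative physical port
    bridge-stp off
    bridge-fd 0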

Apparently, something has not been updated somewhere. The file /etc/corosync/corosync.conf still contains the addresses from the other subnet, which are correct; nothing there needed to change.

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.11
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.12
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.13
  }
  node {
    name: node5
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.10.10.15
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster2
  config_version: 4
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

Does anyone know how to fix this?
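A quick way to check which address the other nodes are resolving for a given node (standard tools; "node2" is a placeholder name):

Code:
# what this node resolves for a peer, per /etc/hosts
getent hosts node2
# pmxcfs's current view of the cluster members and their IPs
cat /etc/pve/.members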
 
Did this myself. Here is the answer to:

> So should I add corosync.conf to /etc/pve myself? Maybe following the example of MRosu's post on Mar 6, 2017?

This is the case if the node is not part of a cluster in Proxmox. Check the UI under Datacenter > Cluster. If it is empty, you only need to edit:

Code:
/etc/hosts
/etc/network/interfaces

...and reboot.
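For that standalone case, the /etc/hosts change is just the node's own entry; a minimal sketch (the hostname and addresses are placeholders borrowed from the example above):

Code:
# /etc/hosts -- point the node's own name at the new address
# before: 192.168.0.37 node1.example.local node1
172.16.1.12 node1.example.local node1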
 
Hi,

no, you have to change the IP in up to three files, depending on your setup:
/etc/network/interfaces
/etc/hosts
/etc/pve/corosync.conf (only necessary on one node)

After you change them on both nodes, reboot both nodes.
Is it possible to apply IP changes to a node without restarting it?
Maybe restarting a service would help?
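A commonly cited sequence for applying the change without a full reboot (untested here; whether it fully replaces a reboot depends on the setup):

Code:
ifreload -a                              # re-apply /etc/network/interfaces (ifupdown2)
systemctl restart corosync pve-cluster   # pick up the new cluster addresses
systemctl restart pvedaemon pveproxy     # restart the API and web GUI daemons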
 
I had trouble following the recipe provided, even with a reboot. I always ended up with a "split brain" scenario where I could connect to the host whose IP address I had changed, but corosync kept that node out of quorum.

I've ended up re-installing the node from scratch (and moving VMs to another node).
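When that happens, it can help to compare what corosync sees on each side before resorting to a reinstall; standard checks:

Code:
pvecm status               # quorum state and membership as this node sees it
corosync-cfgtool -s        # per-link status for each configured ring/link
journalctl -u corosync -b  # corosync log since boot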
 
Hi,

yes, these are all the files you must change.

/etc/network/interfaces and /etc/hosts: on each node.

/etc/pve/corosync.conf: on one node in the cluster, if the quorum is OK.

And config_version should be increased.

I updated this on all servers:

/etc/network/interfaces
/etc/hosts

I updated /etc/pve/corosync.conf on the main cluster server (server 1) only.

But what does "config_version should be increased" mean? I don't understand this.

Code:
totem {
  cluster_name: UP-NET-TR
  config_version: 23

Right now it is 23, so do I need to set it to 24?
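Yes: every manual edit of corosync.conf should bump config_version by one, so corosync treats the file as newer than what is already running. After this change the block would start like this:

Code:
totem {
  cluster_name: UP-NET-TR
  config_version: 24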
 
I edited the IP in the following files:
/etc/hosts
/etc/pve/corosync.conf
/etc/network/interfaces

Then I ran `ifreload -a` on the server, then `systemctl restart corosync`.

Update: you probably also need to remove the old SSH host keys from the other nodes.
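Roughly, on each of the other nodes (the address and name here are from the example earlier in the thread; substitute your own):

Code:
# drop stale known_hosts entries for the renumbered node
ssh-keygen -R 192.168.0.37
ssh-keygen -R node1
# refresh the cluster-wide certificates/known_hosts that Proxmox manages
pvecm updatecerts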
 
Wait, I read the FAQ; isn't changing the IPs of the cluster impossible (or at least only possible the hard way)?

When I try to make any changes to the file "/etc/pve/corosync.conf", it tells me that I only have read permission. What should I do?
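That usually means the node has lost quorum: /etc/pve is the pmxcfs cluster filesystem and goes read-only without quorum. A common pattern (a sketch along the lines of the Proxmox reference documentation) is to edit a copy and move it back:

Code:
# work on copies instead of editing the live file in place
cp /etc/pve/corosync.conf /root/corosync.conf.new
nano /root/corosync.conf.new        # change addresses, bump config_version
cp /etc/pve/corosync.conf /root/corosync.conf.bak
# if a single node must regain write access without quorum (use with care):
pvecm expected 1
# activate the new config
mv /root/corosync.conf.new /etc/pve/corosync.conf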
 
