Change Cluster Nodes' IP Addresses

Due to corporate IP changes, we may need to change the subnet of our Proxmox cluster (which also runs Ceph). The comments I've seen in this thread pertain to changing the IP addresses of a single node (or a few nodes).

Is it any different if we need to change the subnet and IP addresses of the whole cluster in one sitting?

Similarly, is the procedure the same if we change only the subnet mask and keep the IP addresses? For example, switching from a /24 subnet to a /23 subnet with no change of IP addresses (but a possible change of gateway)?
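
For concreteness, I assume this would only mean editing the prefix length (and perhaps the gateway) in /etc/network/interfaces on each node, something like the following (addresses and bridge port made up):

Code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.11/23   # was 192.168.0.11/24; host address unchanged
    gateway 192.168.0.1       # adjust if the gateway moves
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0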

Thanks!
 
Hi,

I need to modify the management IP of my cluster nodes and put them on another subnet.

The cluster is already formed, and Ceph is working on another subnet.

One physical network interface is connected to the vmbr0 bridge on a subnet (192.168.0.0/24), where the virtual machines and cluster management traffic have been running.

There is also another interface on another subnet (10.10.10.0/24), where the cluster communication and the Ceph public network live. Finally, there is a third subnet (10.0.0.0/24), which Ceph uses to move data between OSDs.

Now, I need to change only the nodes' management address. I need it to be on another subnet (172.16.1.0/24).

Does the procedure remain the same?

I tried to do as suggested here, but I only needed to modify two of the three files: I did not have to touch corosync.conf, since the cluster communication is already on a separate, correct subnet that will not change.

So I applied the change only to the vmbr0 interface in /etc/network/interfaces and in /etc/hosts, and rebooted all nodes. But something doesn't work right.

I can access the web administration interface, and I can log in via SSH normally on all nodes, separately. But when I try to open the Summary of a remote node on the cluster web administration screen, a waiting icon appears and then an error message: "no route to host (595)". The same happens when I try to access the console of a remote node through the Shell in the cluster web administration screen: it no longer opens. It opens only from the local node. When I try to open a remote node, a black screen appears with the following message:

Code:
ssh: connect to host 192.168.0.37 port 22: No route to host

This (192.168.0.37) is the old address of the node; that is, it is still trying to connect to the old address. The new administration address for the node is 172.16.1.12 (the subnet 172.16.1.0/24 was created for cluster administration only).

Apparently, there is some place that has not been updated. The file /etc/corosync/corosync.conf still contains the old addresses on the other subnet, which are correct, because nothing changed there.

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.11
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.12
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.13
  }
  node {
    name: node5
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.10.10.15
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster2
  config_version: 4
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
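
For reference, a quick way to hunt for leftover references to the old subnet (a sketch; 192.168.0. is the old prefix in my case):

Code:
# search the usual config locations for the old addresses
grep -rn '192\.168\.0\.' /etc/hosts /etc/network/interfaces /etc/pve/ 2>/dev/null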

Does anyone know how to fix this?
 
I did this myself. Here is the answer to:

> So should I add corosync.conf to /etc/pve myself? Maybe following the example of MRosu's post on Mar 6, 2017?

This is the case if there is no cluster configured in Proxmox. Check in the UI under Datacenter > Cluster. If it is empty, you only need to edit:

Code:
/etc/hosts
/etc/network/interfaces

...and reboot.
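
A minimal sketch of those edits, assuming a node named pve1 moving from 192.168.0.11 to 172.16.1.11:

Code:
# /etc/network/interfaces (only the changed stanza shown)
iface vmbr0 inet static
    address 172.16.1.11/24
    gateway 172.16.1.1

# /etc/hosts (replace the old entry for this node)
172.16.1.11 pve1.example.local pve1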
 
Hi,

no, you have to change the IP in up to three files, depending on your setup:
/etc/network/interfaces
/etc/hosts
/etc/pve/corosync.conf (only necessary on one node)

After you change them on both nodes, reboot both nodes.
Is it possible to apply IP changes to a node without restarting it?
Maybe restarting a service would help?
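
For what it's worth, here is a sketch of applying the change without a full reboot (assuming ifupdown2 is installed, as on recent Proxmox VE; when in doubt, a reboot is safer):

Code:
ifreload -a                    # re-apply /etc/network/interfaces
systemctl restart pve-cluster  # restart the cluster filesystem (pmxcfs)
systemctl restart corosync     # restart cluster communication
pvecm status                   # verify quorum afterwards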
 
I had trouble following the recipe provided, even with a reboot. I always ended up with a "split brain" scenario where I could connect to the host whose IP address I changed, but corosync kept that node out of quorum.

I ended up re-installing the node from scratch (and moving its VMs to another node).
 
Hi,

yes, these are all the files you must change.

/etc/network/interfaces and /etc/hosts: on each node.

/etc/pve/corosync.conf: on one node in the cluster, if the quorum is OK.

config_version should be increased.

I updated these on all servers:

/etc/network/interfaces
/etc/hosts

I updated this on the main cluster server (server 1) only:

/etc/pve/corosync.conf

But I don't understand the part about increasing config_version.

Code:
totem {
  cluster_name: UP-NET-TR
  config_version: 23

So right now I need to change this to 24?
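
If I understand the advice above correctly, yes: increase it by one so the other nodes accept the new config. A sketch of the edit:

Code:
totem {
  cluster_name: UP-NET-TR
  config_version: 24   # was 23; everything else in the block stays the same
}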
 
I edited the IP in the following files:
/etc/hosts
/etc/pve/corosync.conf
/etc/network/interfaces

Then I ran `ifreload -a` on the server, then `systemctl restart corosync`.

Update: You probably also need to remove the stale SSH host keys on the other nodes.
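
A sketch of that cleanup (192.168.0.37 standing in for an old address; run on each node):

Code:
# remove the old host key from root's known_hosts
ssh-keygen -R 192.168.0.37
# the cluster-wide known_hosts lives in /etc/pve/priv/known_hosts
ssh-keygen -f /etc/pve/priv/known_hosts -R 192.168.0.37
# redistribute the cluster SSH keys and certificates
pvecm updatecerts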
 
Wait, I read the FAQ; isn't changing the cluster's IPs impossible (or only possible the hard way)?

When I try to make any changes to the file "/etc/pve/corosync.conf", it tells me that I only have read permissions. What should I do?
 
How did you guys change /etc/pve/corosync.conf? It won't let me do anything.

Code:
-r--r----- 1 root www-data 448 Jan 8 16:24 corosync.conf

How do I manipulate this so I can change my IP addresses? It keeps telling me I can't write to the file: permission denied. I tried chmod -Rf 777, and I tried chown -R nobody:nogroup so I could append to the file.

Can someone point me in the right direction? Please do not give the dictionary response; this is a very simple request, and I just need the steps on how to do this. Trust me, I understand this stuff.

Thanks guys,
Michael
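
Update: from what I can tell, /etc/pve is the pmxcfs cluster filesystem, so chmod/chown have no effect there, but root can still write to it while the cluster has quorum. A sketch of the usual copy-edit-move pattern:

Code:
cp /etc/pve/corosync.conf /root/corosync.conf.new
nano /root/corosync.conf.new                 # change the IPs, bump config_version
cp /root/corosync.conf.new /etc/pve/corosync.conf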
 
But the file `/etc/pve/corosync.conf` is read-only; how could you modify it?
If your cluster is a personal home setup, or if it does not use ZFS, can be taken out of quorum, and its services can be put offline, then you can use my procedure.
I migrated my LAN, for example, from the 192.168.1.0 subnet to 192.168.2.0. So I configured all the nodes with new IP addresses from the target subnet, but left one line with the old IP addresses in /etc/network/interfaces.
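
Roughly like this (a sketch with made-up addresses; the `|| true` keeps a re-run from failing if the old address is already present):

Code:
iface vmbr0 inet static
    address 192.168.2.11/24    # new IP on the target subnet
    gateway 192.168.2.1
    # the one extra line: keep the old IP alive during the migration
    post-up ip addr add 192.168.1.11/24 dev vmbr0 || true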

I have also modified:
/etc/hosts
/etc/resolv.conf

Then, to apply the new IPs (while still keeping the old ones), I ran:
systemctl restart networking.service

Now all the nodes have two IP addresses, and corosync/quorum still works on the old IPs. So the quorum is OK, and you can modify the Proxmox cluster files.
Then I edited the following files on one of the nodes, replacing the old subnet IPs with the new ones in each of them. After the modification, check that the change has been propagated to all nodes:

/etc/pve/corosync.conf (increment the config_version line here, e.g. config_version: 23 to 24!)
/etc/pve/priv/known_hosts
/etc/pve/storage.cfg

Then, on each node, run:
systemctl restart pve-cluster
systemctl restart corosync
pvecm status

The cluster will progressively break apart and then progressively reassemble on the new subnet. When you restart corosync on the last node, the quorum should be OK. Then check it on all the other nodes with pvecm status. Now the cluster works on the new subnet IPs.
Then, when you are comfortable, you can remove the old IP addresses for good from /etc/network/interfaces and apply the final change with systemctl restart networking.service. Up to this point, no reboot was necessary.
But to test the new config, you can now reboot the nodes and check that it is stable and working well.
 