Change Cluster Nodes IP Addresses


Oct 28, 2019
Austin, TX
Due to corporate IP changes, we may need to change the subnet of our Proxmox cluster (which also runs Ceph). The comments I've seen in this thread pertain to changing the IP addresses of a single node (or a few nodes).

Is it any different if there is a need to change subnet and IP addresses of the whole cluster in one sitting?

Similarly, is it the same if we change only the subnet mask, but not the IP addresses? For example, switching from a /24 subnet to a /23 subnet with no change of IP addresses (but a possible change of gateway)?
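On the second question: widening the mask from /24 to /23 leaves every existing host address valid, as long as all nodes (and the gateway) agree on the new mask; what changes is which neighbors count as on-link. A quick sketch with Python's ipaddress module, using documentation-range placeholder addresses rather than any real subnet from this thread:

```python
import ipaddress

# Placeholder ranges (documentation addresses), not the real corporate subnets:
old_net = ipaddress.ip_network("192.0.2.0/24")
new_net = ipaddress.ip_network("192.0.2.0/23")   # same base address, wider mask

node = ipaddress.ip_address("192.0.2.10")     # an existing node address
neighbor = ipaddress.ip_address("192.0.3.7")  # only on-link inside the widened /23

print(node in old_net, node in new_net)           # True True: existing IPs stay valid
print(neighbor in old_net, neighbor in new_net)   # False True: /23 adds on-link hosts
```

The practical risk is a transition period where some hosts still use /24: they will send traffic for the new half of the /23 to the gateway instead of delivering it on-link.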



New Member
Sep 14, 2020

I need to modify the management IP of my Cluster nodes. I want to put them on another subnet.

The Cluster is already formed and Ceph is working on another subnet.

There is a physical network interface attached to the vmbr0 bridge on one subnet (, where the virtual machines and cluster management ran.

There is also another interface on a second subnet (, where the cluster communication and the Ceph public network live. Finally, there is a third subnet ( that carries Ceph traffic between OSDs.

Now, I need to change only the nodes' management address. It needs to move to another subnet (

Does the procedure remain the same?

I tried to do as suggested here, modifying only the three files. However, I did not need to modify corosync.conf, since the cluster communication already runs on the separate, correct subnet; that subnet will not be changed.

So I applied the change only to the vmbr0 interface in /etc/network/interfaces and in /etc/hosts, and rebooted all nodes. But something isn't working right.
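For reference, the two stanzas that change in this kind of edit look roughly like the following; the addresses below are documentation-range placeholders, not the actual subnets from this setup:

```
# /etc/network/interfaces -- the management IP lives on the vmbr0 bridge
auto vmbr0
iface vmbr0 inet static
    address 198.51.100.11/24     # new management address (placeholder)
    gateway 198.51.100.1         # new gateway, if it changes too
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# /etc/hosts -- must list the NEW address, on every node in the cluster
198.51.100.11 node1.example.com node1
```

Note that /etc/hosts has to be updated on all nodes, not just the renumbered one, since each node resolves its peers' names through it.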

I can access the web administration interface and can log in via SSH normally on every node individually. But when I try to open the Summary of a remote node in the cluster web administration screen, a waiting icon appears and then an error message: "no route to host (595)". The same happens when I try to open a remote node's console through the Shell in the cluster web administration screen; it only opens for the local node. For a remote node, a black screen opens with the following message:

ssh: connect to host port 22: No route to host

This ( is the old address of the node; that is, it is still trying to connect to the old address. The new administration address for the node is ( (the subnet was created for cluster administration only).

Apparently, something somewhere has not been updated. The file /etc/corosync/corosync.conf still has its previous address, which is correct, since the cluster subnet was not changed:
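A "no route to host" against the old address usually means some node is still resolving the peer's name to the old IP, typically through a stale /etc/hosts entry missed on one of the nodes. A small sketch (with placeholder addresses, not the real ones from this thread) of how a leftover entry on the old subnet can be spotted in hosts-file content:

```python
import ipaddress

# Placeholder data: what a stale /etc/hosts on one node might look like,
# with node2 still listed on the old management subnet.
hosts_file = """\
127.0.0.1 localhost
192.0.2.11 node1.example.com node1
10.0.0.12 node2.example.com node2
"""

old_subnet = ipaddress.ip_network("10.0.0.0/24")  # assumed old management range

stale = []
for line in hosts_file.splitlines():
    fields = line.split()
    if not fields or fields[0].startswith("#"):
        continue
    ip = ipaddress.ip_address(fields[0])
    if ip in old_subnet:
        stale.append((fields[0], fields[1:]))

print(stale)  # the node2 entry is still on the old subnet and must be corrected
```

Running `getent hosts node2` on each node shows what that node actually resolves, which makes it quick to find the machine that still has the old entry.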

logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
  }
  node {
    name: node5
    nodeid: 4
    quorum_votes: 1
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster2
  config_version: 4
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
Does anyone know how to fix this?

Dec 2, 2020
I worked this out myself. Here is the answer to:

> So should I add corosync.conf to /etc/pve myself? Maybe following the example of MRosu's post on Mar 6, 2017?

This is the case if the node is not part of a Proxmox cluster. Check in the UI under Datacenter > Cluster. If this is empty, you only need to edit:

/etc/network/interfaces
/etc/hosts

..and reboot.
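When a cluster does exist, the corosync configuration is managed through /etc/pve/corosync.conf instead, and any edit there must increment config_version so the other nodes accept and propagate the new file. A minimal sketch of the parts that would change, using a placeholder address that is not from this thread:

```
totem {
  config_version: 5        # was 4; must be bumped on every edit
  ...
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 198.51.100.11   # placeholder: the node's new corosync address
  }
  ...
}
```

In the situation described above, the corosync subnet stays as it is, so this file is deliberately left untouched.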

