Change Cluster Nodes IP Addresses

MRosu

Renowned Member
Mar 27, 2016
I'd like to change the IP addresses of our two nodes (a 2-node cluster).

I want to make sure I'm not missing anything critical.

Is it as simple as changing each node's network settings via the GUI and rebooting?
 
Hi,

No, you have to change the IP in up to three files, depending on your setup:
/etc/network/interfaces
/etc/hosts
/etc/pve/corosync.conf (only necessary on one node)

After you change them on both nodes, reboot both nodes.
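For illustration, the edits above can be scripted. The sketch below works on a scratch copy under /tmp with made-up addresses (OLD/NEW are placeholders, not values from this thread); a real run would edit the live /etc/network/interfaces, /etc/hosts, and /etc/pve/corosync.conf instead:

```shell
#!/bin/sh
# Sketch only: swap a node IP in a copy of /etc/network/interfaces.
# OLD/NEW are hypothetical addresses.
OLD="192.168.1.11"
NEW="192.168.2.11"

# Scratch copy standing in for /etc/network/interfaces
cat > /tmp/interfaces <<EOF
auto vmbr0
iface vmbr0 inet static
    address ${OLD}
    netmask 255.255.255.0
    gateway 192.168.1.1
EOF

# Replace the address (if the subnet changes, the gateway and
# netmask lines would need updating as well)
sed -i "s/address ${OLD}/address ${NEW}/" /tmp/interfaces

grep "address" /tmp/interfaces
```

The same substitution would then be repeated for /etc/hosts and /etc/pve/corosync.conf on the node where you edit it.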
 

Wait, I read in the FAQ that changing the cluster's IPs isn't possible (or at least is the hard way)?
 
Hey wolfgang, thank you so much for the assistance.

I do have one question.

When I check the corosync.conf file, I notice an IP address which I have not assigned myself, from what I recall:

totem {
  cluster_name: Cluster1
  config_version: 6
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.1.0.18
    ringnumber: 0
  }
}


I did not know this 10.1.0.18 address was being used (it's not in my documentation), so I could potentially have assigned this address to another device.

I assume this was done automatically. If that is the case, I would just manually set this address myself, document it, and make sure nothing else takes it?

Am I correct?
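For context: that address was most likely written by pvecm when the cluster was created, with bindnetaddr taken from the node's own ring interface, which is why it shows up without you having set it. Corosync accepts either a host address or the network address there; if you ever want the network form, it can be derived from the host address. The /24 below is an assumed netmask, not something from this post:

```shell
#!/bin/sh
# Hypothetical: derive the network form of the ring address above,
# assuming a /24 netmask (which may not match the real setup).
IP="10.1.0.18"
NET="${IP%.*}.0"
echo "$NET"   # -> 10.1.0.0
```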
 
Wait, I read in the FAQ that changing the cluster's IPs isn't possible (or at least is the hard way)?
Where did you read that?
You can change it, but this is an advanced topic.
 

https://pve.proxmox.com/wiki/Cluster_Manager
Preparing Nodes
First, install Proxmox VE on all nodes. Make sure that each node is installed with the final hostname and IP configuration. Changing the hostname and IP is not possible after cluster creation.

If it's only a change to 3 files (plus SSH fingerprint confirmation), it's almost easy, not "not possible".
 
Thanks for all the help so far.

I'm planning on changing the iSCSI host's IP as well.

Is the recommended action to change the IP address in /etc/pve/storage.cfg?

I also saw a thread where someone successfully removed and re-added the iSCSI target and LVM group via the GUI.
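In case a fragment helps: an iSCSI entry in /etc/pve/storage.cfg looks roughly like the sketch below (storage ID, target IQN, and portal address are invented for illustration). The portal line is the part that would change along with the host's IP:

```
iscsi: mystorage
        portal 192.168.2.50
        target iqn.2003-01.org.example:storage.lun1
        content none
```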
 
Hello all,

I need to "hijack" this thread, as I need to change my current 3-node cluster's IP addresses, too.

Is it really sufficient to update the IPs in these files only:

/etc/network/interfaces
/etc/hosts
/etc/pve/corosync.conf

Do I have to "update" the version in the totem section?
Is corosync.conf synced to the other nodes or do I have to edit this individually on each host?
Anything else to consider here?

Best,
Sebastian
 
Hi All. I just discovered proxmox this week, and it looks wonderful! Thanks to all who made and maintain it!

May I follow up in this thread with my own dilemma as Sebastian did? Or is that considered poor form in this community?

I have the exact same question as MRosu except that I have only one node: how to effectively change my single node's IP address?

I changed the IP address in /etc/hosts and /etc/network/interfaces as specified above and rebooted, but after the reboot, the node still thought that its IP address was the originally configured one. I also thought I could get away with changing the hostname in /etc/hosts and did so, but that change does not show up in the command prompt (as indicated below). Then I did a

root@pve:~# service networking restart

and immediately after the service restart, the node was indeed reachable on the new network from a remote host via SSH. However, there is now no service running on port 8006 (even after rebooting), and when the node boots up, the usual console messages about connecting to https://IPADDR:8006/ are no longer present.

It looks like, by changing the node's IP address as indicated above, I somehow broke the PVE services running on port 8006?

As for the third file mentioned above (/etc/pve/corosync.conf), my node has no files in /etc/pve/ at all:

root@pve:~# ls -la /etc/pve
drwxr-xr-x 2 root root 2 Jan 4 14:09 .
drwxr-xr-x 2 root root 180 Jan 4 13:56 ..

So should I add corosync.conf to /etc/pve myself? Maybe following the example of MRosu's post on Mar 6, 2017?

When I restored the /etc/hosts and /etc/network/interfaces files to their original states and rebooted, my node again behaved entirely as expected (and now there are files in /etc/pve like authkey.pub, datacenter.cfg, ..., user.cfg, vzdump.cron), but I'd still like to change its IP address. I wonder what I'm doing wrong...

Thanks for any suggestions.

Best,
Kevin
 
Kevin, welcome!

Did you edit all three files:

/etc/network/interfaces
/etc/hosts
/etc/pve/corosync.conf

and update the IP in there AND (important!) increment the version (config_version) in /etc/pve/corosync.conf?

Then do a reboot.
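As a concrete illustration of the "update the version" step, the totem section might change like this (values taken from the earlier example in this thread, not from any particular node):

```
totem {
  cluster_name: Cluster1
  # config_version must be incremented on every edit (e.g. 6 -> 7)
  # so corosync picks up and propagates the new file
  config_version: 7
  ...
}
```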
 
Thanks for the warm welcome, Sebastian, and for your quick reply.

But I have no cluster.cfg file in /etc/pve (or anywhere under /etc, for that matter).

And I have no /etc/pve/corosync.conf file.

I have /etc/default/corosync and /etc/init/corosync.conf and an empty directory /etc/corosync/uidgid.d, but nothing like corosync in /etc/pve.

My /etc/pve looks like this:

root@pve:~# ls -la /etc/pve
total 17
drwxr-xr-x 2 root www-data 0 Dec 31 1969 .
drwxr-xr-x 92 root root 180 Jan 4 14:55 ..
-rw-r----- 1 root www-data 451 Dec 31 08:01 authkey.pub
-r--r----- 1 root www-data 8026 Dec 31 1969 .clusterlog
-rw-r----- 1 root www-data 16 Dec 31 07:58 datacenter.cfg
-rw-r----- 1 root www-data 2 Dec 31 1969 .debug
lrwxr-xr-x 1 root www-data 0 Dec 31 1969 local -> nodes/pve
lrwxr-xr-x 1 root www-data 0 Dec 31 1969 lxc -> nodes/pve/lxc
-r--r----- 1 root www-data 36 Dec 31 1969 .members
drwxr-xr-x 2 root www-data 0 Dec 31 08:01 nodes
lrwxr-xr-x 1 root www-data 0 Dec 31 1969 openvz -> nodes/pve/openvz
drwx------ 2 root www-data 0 Dec 31 08:01 priv
-rw-r----- 1 root www-data 2057 Dec 31 08:01 pve-root-ca.pem
-rw-r----- 1 root www-data 1675 Dec 31 08:01 pve-www.key
lrwxr-xr-x 1 root www-data 0 Dec 31 1969 qemu-server -> nodes/pve/qemu-server
-r--r----- 1 root www-data 383 Dec 31 1969 .rrd
-rw-r----- 1 root www-data 125 Dec 31 07:58 storage.cfg
-rw-r----- 1 root www-data 41 Dec 31 07:58 user.cfg
-r--r----- 1 root www-data 401 Dec 31 1969 .version
-r--r----- 1 root www-data 84 Dec 31 1969 .vmlist
-rw-r----- 1 root www-data 119 Dec 31 08:01 vzdump.cron
root@pve:~#

So I'm still not sure how to change my node's IP address. :(
 
Sorry for bumping an old thread.

I have a cluster of 2 nodes.
The first/master node's IP address has changed, so I am trying to update the 3 files per @wolfgang's guidance:
1. /etc/network/interfaces
2. /etc/hosts
3. /etc/pve/corosync.conf (only necessary on one node)

Unfortunately, I cannot edit any file under the /etc/pve folder, because everything under /etc/pve is locked (read-only).
Can anyone guide me on how I can update /etc/pve/corosync.conf?
 
Hello,

I had the same problem as you.

You must upgrade your Proxmox version.

If you don't have the enterprise version, you must change your sources list (/etc/apt/sources.list.d/pve-enterprise.list)
and change the line to http://download.proxmox.com/debian/pve stretch pve-no-subscription
(more information: https://pve.proxmox.com/wiki/Package_Repositories)
After that is done, upgrade your system.

I now have version 5.2-6, and I can now edit the files and change the IP in /etc/pve/corosync.conf. (After the change your VMs will work.)
 
Hello Community,

I also need some help on this topic. I have a 4-node Proxmox 5.3 cluster running.

Initially all 4 nodes had only one NIC each (default installation, with bridge vmbr0 on 192.168.1.0/24), so I defined the IPs 192.168.1.11-14 as h01-04.company.tld in /etc/hosts and created the cluster with those hostnames.
Later I added a dual-port 10GbE NIC and some HDDs to each node and created a Ceph cluster over 10GbE (port 1 of each NIC) with IPv6 fd00::0/64 (no vmbr, directly on the ens3f0 interface; Ceph is running perfectly).
Now I got more SFP+ cables and would like to move corosync from 192.168.1.0/24 to the second 10GbE port of each node (ideally also without a bridge), if possible also on IPv6 fd01::0/64.
But here is the problem: the hostnames h01-04.company.tld must stay on the original NIC with the 192.168.1.0/24 network (so I can't change them in /etc/hosts), because it remains the management network and some VMs/CTs will need to stay there too. Can I simply change the IPs in /etc/pve/corosync.conf (ring0_addr and bindnetaddr) and reboot? Or do I need to make more changes? /etc/network/interfaces is already fine.
Thanks in advance!

here is my corosync.conf

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: h01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.11
  }
  node {
    name: h02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.12
  }
  node {
    name: h03
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.1.13
  }
  node {
    name: h04
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 192.168.1.14
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: aitcluster01
  config_version: 4
  interface {
    bindnetaddr: 192.168.1.11
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
 
Ok, after rereading the "Separate Cluster Network" wiki page, I understood that ring0_addr in corosync.conf is the important key for my situation. If ring0_addr is defined as an IP address, then no changes to /etc/hosts are needed, right?

If so, the remaining open question is: can I use IPv6 for ring0_addr / bindnetaddr?
Thanks again.
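I can't speak for every corosync version, but corosync does support IPv6 rings. A sketch of what such a configuration might look like follows; the fd01:: addresses are invented, and note that ip_version would need to switch to ipv6 (and config_version be incremented):

```
nodelist {
  node {
    name: h01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: fd01::11
  }
  # ... h02-h04 analogous ...
}

totem {
  cluster_name: aitcluster01
  config_version: 5
  ip_version: ipv6
  interface {
    ringnumber: 0
  }
  secauth: on
  version: 2
}
```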
 
Tested:
On each node, one by one:
1. Change the IP assigned to vmbr0, and add network settings for the new interface so the cluster can communicate.
2. Edit /etc/pve/corosync.conf with the new IP, and increment the config_version setting.
3. Edit /etc/hosts with the new IP value.
4. Reboot the node.
5. Wait for the node to come back up, then proceed to the next node, until all nodes are done.

And it's working.
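The config_version bump in step 2 can be scripted too. This sketch works on a scratch copy under /tmp (a real run would target the live /etc/pve/corosync.conf, and only once the cluster filesystem is writable):

```shell
#!/bin/sh
# Sketch: increment config_version in a scratch copy of corosync.conf.
CONF=/tmp/corosync.conf
cat > "$CONF" <<'EOF'
totem {
  cluster_name: demo
  config_version: 4
}
EOF

# Read the current version and rewrite the line with version+1
v=$(awk '/config_version:/ {print $2}' "$CONF")
sed -i "s/config_version: ${v}/config_version: $((v + 1))/" "$CONF"

grep "config_version" "$CONF"
```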
 
