Problem adding a new node to a cluster

Topoldo

New Member
Aug 31, 2018
Hi all.
I successfully updated 2 of my 3 nodes from Proxmox 5.4 to 6.
The third one had problems with its network adapter, so, since I use it just for quorum, I decided to remove it from the cluster and install Proxmox 6 directly.
As the cluster has a link0 (the cluster network), as described in the guide, I ran the command:

Code:
pvecm add <IP-ADDRESS-OF-A-NODE-OF-THE-CLUSTER> -link0 <LOCAL-IP-ADDRESS-LINK0>

The LOCAL-IP-ADDRESS-LINK0 here is 10.162.3.126.

As soon as I run this command I get this error:

detected the following error(s):
* ring0: cannot use IP '10.162.3.126', it must be configured exactly once on local node!
TASK ERROR: Check if node may join a cluster failed!

This is the same both from GUI and CLI.
Please note that I put both that IP and the equivalents for the two other servers (10.162.3.127 and 10.162.3.128) in the hosts file.
Those 3 IPs appear in /etc/network/interfaces too.
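For reference, the check behind this error is simple: the address passed as link0 must appear exactly once among the node's configured IPv4 addresses. Below is a rough sketch of the same test (not the actual pvecm code), run against simulated output; on a real node you would feed it the output of 'ip -o -4 addr show':

```shell
# Rough sketch (not the actual pvecm code): count how many times the
# link0 address appears in the v4 address list; pvecm wants exactly 1.
# Simulated output for illustration; on a real node use: ip -o -4 addr show
ip_out='1: lo    inet 127.0.0.1/8 scope host lo
4: vmbr0    inet 147.162.3.126/24 brd 147.162.3.255 scope global vmbr0'
count=$(printf '%s\n' "$ip_out" | grep -c '10\.162\.3\.126/')
echo "10.162.3.126 is configured $count time(s)"   # 0 here -> pvecm refuses
```

In this simulated output, as in the 'ip a' posted below, there is no inet line carrying 10.162.3.126 at all, which is exactly the situation the error message complains about.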
Any hint or clue?
Thanks in advance

Topoldo
 

Stefan_R

Proxmox Staff Member
Jun 4, 2019
Can you post the output of 'ip a' and the contents of /etc/network/interfaces from your new node (or from all of them)?
 

Topoldo

New Member
Aug 31, 2018
This is the 'ip a' output:

Code:
root@tilt:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 00:9c:02:a4:6f:7e brd ff:ff:ff:ff:ff:ff
3: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:9c:02:a4:6f:7f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::29c:2ff:fea4:6f7f/64 scope link
       valid_lft forever preferred_lft forever
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:9c:02:a4:6f:7e brd ff:ff:ff:ff:ff:ff
    inet 147.162.3.126/24 brd 147.162.3.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::29c:2ff:fea4:6f7e/64 scope link
       valid_lft forever preferred_lft forever

And this is the content of /etc/network/interfaces of the "new" server:
Code:
root@tilt:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eno0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 147.162.3.126
        netmask 255.255.255.0
        gateway 147.162.3.254
        bridge_ports eno0
        bridge_stp off
        bridge_fd 0

# Cluster network (corosync)
# --------------------------
auto eno1
iface eno1 inet manual
        address 10.162.3.126
        netmask 255.255.255.0
And this is the content of /etc/hosts, which is replicated on the 2 servers in the cluster and on the new server:

Code:
root@tilt:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
147.162.3.126 tilt.bio.unipd.it tilt
147.162.3.21 bkpsrv.bio.unipd.it bkpsrv

# corosync network hosts
10.162.3.126 coro-tilt.proxmox.com coro-tilt
10.162.3.127 coro-prox.proxmox.com coro-prox
10.162.3.128 coro-mox.proxmox.com coro-mox



# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts


Thanks,
Topoldo
 

Topoldo

New Member
Aug 31, 2018
SOLVED!

In /etc/network/interfaces I had written:

Code:
...

# Cluster network (corosync)
# --------------------------
auto eno1
iface eno1 inet manual
        address 10.162.3.126
        netmask 255.255.255.0
The correct syntax, instead, is:

Code:
...

# Cluster network (corosync)
# --------------------------
auto eno1
iface eno1 inet static
        address 10.162.3.126
        netmask 255.255.255.0
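For anyone hitting the same thing: with 'iface eno1 inet manual', ifupdown ignores the address/netmask options below it, so 10.162.3.126 was never actually assigned and the join check could not find it. After switching the stanza to 'inet static', apply it with 'ifdown eno1; ifup eno1' (or a reboot) and re-check 'ip a'. Here is a tiny sketch of a lint for this mistake (a hypothetical helper, not part of any Proxmox tool):

```shell
# Hypothetical helper (sketch): flag stanzas that declare "inet manual"
# but also carry an address line -- ifupdown silently ignores that
# address, which is exactly what bit this node.
check_stanza() {
    if printf '%s\n' "$1" | grep -q 'inet manual' &&
       printf '%s\n' "$1" | grep -q 'address'; then
        echo 'WARNING: address under "inet manual" is ignored'
    else
        echo 'stanza looks consistent'
    fi
}

bad='iface eno1 inet manual
        address 10.162.3.126'
check_stanza "$bad"   # prints the warning
```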
Thanks for pointing me to the correct solution!
Topoldo
 

zacaro

New Member
Aug 26, 2019
Any advice on how to work around the same issue, but with an adapter that is not configurable via /etc/network/interfaces? I'm setting up this cluster over OpenVPN.

Code:
sandyhook:/etc/pve/priv# pvecm add wacco
detected the following error(s):
* local node address: cannot use IP '10.9.0.115', it must be configured exactly once on local node!

Check if node may join a cluster failed!


The weird thing is that my existing 3-node cluster over OpenVPN didn't fail; it's just this new node.


Code:
sandyhook:/etc# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether a4:bf:01:22:8c:06 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a4:bf:01:22:8c:07 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a4:bf:01:22:8c:06 brd ff:ff:ff:ff:ff:ff
    inet x.x.x.x/24 brd x.x.x.x scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::a6bf:1ff:fe22:8c06/64 scope link
       valid_lft forever preferred_lft forever
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 4e:4a:4c:fc:a9:fd brd ff:ff:ff:ff:ff:ff
    inet 10.0.4.5/24 brd 10.0.4.255 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::4c4a:4cff:fefc:a9fd/64 scope link
       valid_lft forever preferred_lft forever
6: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UNKNOWN group default qlen 100
    link/none
    inet 10.9.0.115 peer 255.255.255.0/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::5d78:c58c:5c3:2374/64 scope link stable-privacy
       valid_lft forever preferred_lft forever


It doesn't matter whether I set up /etc/hosts or resolve the hostname via the local DNS server; the result is the same.
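One detail in the 'ip a' output above stands out: tun0 shows 'inet 10.9.0.115 peer 255.255.255.0/32', i.e. the netmask appears to have been taken as a point-to-point peer address, so there is no regular subnet address on tun0 for the join check to match. A sketch of a possible fix on the OpenVPN server side, assuming that is indeed the cause (file name and subnet are assumptions):

```
# /etc/openvpn/server.conf (sketch; path and subnet assumed)
topology subnet
server 10.9.0.0 255.255.255.0
```

With the older net30/point-to-point topology, the tunnel address carries a peer instead of a prefix, which may not be parsed as a normally configured local address by the join check.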
 

zacaro

New Member
Aug 26, 2019
Well, just dropping this here in order to help others.

My workaround was to edit /usr/share/perl5/PVE/Cluster.pm and comment out lines 1811 to 1838. Then add the node to the cluster and restore the file from the backup copy you should make before editing. Problem solved.
 
