Join Information is greyed out when adding a node to the cluster

rhagemann

New Member
Aug 11, 2023
Hi,

I removed a node from a cluster with two nodes. Now there is only one node in my cluster.
When removing the node I got this message:
Code:
cluster not ready - no quorum? (500)

Now /etc/pve/nodes/ still contains two nodes, including the node I removed.
Is this normal?

I want to add a new node to my cluster, but the "Join Information" button is greyed out and I cannot click it.
On the CLI I get this:
Code:
pvesh get /cluster/config/join
unable to read certificate from '/etc/pve/nodes/node02/pve-ssl.pem'

Can I remove the already deleted node's directory from /etc/pve/nodes/?
Or what is the best way to add a new node to my cluster?

Is it possible to re-add a node with the same hostname and IP as the one I removed before?

Greetings,
Robert
 
To add a node that was already part of a cluster before, you can use the --force flag:
Code:
pvecm add <hostname> --force
 
Do I have to add each node to the cluster like this?
 
You only need --force when re-adding a node that was previously removed.
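
For clarity, a rough sketch of both cases (the IP/hostname is a placeholder for one of your existing cluster members):

Code:
# First-time join of a brand-new node (run on the node that should join):
pvecm add <IP-of-existing-cluster-node>

# Re-adding a node that was part of the cluster before (run on that node):
pvecm add <IP-of-existing-cluster-node> --force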
 
Thank you.

But what I want is to add another node.
My "Join Information" button is still greyed out.
What can I do to change this?
 
Is your cluster quorate again? Please provide the output of pvecm status. If your cluster is not quorate you will have to fix that first before joining new nodes to the cluster.
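
As a hedged sketch of what that could look like on a single remaining node (only lower the expected votes if you are sure the removed node is gone for good):

Code:
# Check the quorum state:
pvecm status

# If the lone node reports no quorum, temporarily lower the expected votes to 1:
pvecm expected 1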
 
That is my status.

Code:
Cluster information
-------------------
Name:             dmz01
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Mon Aug 21 08:56:28 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.2c
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.212.65 (local)

Is
Code:
Flags:            Quorate
what you described?
 
Are you connected to the host that is already part of the cluster, or to the node you want to join? Only the node that is already part of the cluster can provide the Join Information, which you then use on the node you want to join to the cluster.
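
The CLI equivalent, roughly, would be (the IP is a placeholder for the existing cluster member):

Code:
# On the node that is already in the cluster (provides the join information):
pvesh get /cluster/config/join

# On the new node that should join:
pvecm add <IP-of-existing-cluster-node>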
 
I am connected to the node which is part of the cluster.
Curiously, "Create Cluster" and "Join Cluster" are not greyed out.
In the cluster node list there are two nodes.
But it says "Standalone node - no cluster defined".
 
If that is the case, then the cluster state got messed up somewhere. From which host is the pvecm status output you provided?

Please post from both nodes:
Code:
systemctl status pve-cluster.service corosync.service
cat /etc/pve/corosync.conf
cat /etc/corosync/corosync.conf
cat /etc/network/interfaces
cat /etc/hosts
ls /etc/pve/qemu-server
 
Code:
root@node01:~# systemctl status pve-cluster.service corosync.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2023-08-11 11:42:35 CEST; 1 weeks 3 days ago
Process: 22511 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Main PID: 22519 (pmxcfs)
Tasks: 13 (limit: 4915)
Memory: 50.4M
CGroup: /system.slice/pve-cluster.service
└─22519 /usr/bin/pmxcfs

Aug 21 23:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful
Aug 22 00:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful
Aug 22 01:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful
Aug 22 02:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful
Aug 22 03:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful
Aug 22 04:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful
Aug 22 05:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful
Aug 22 06:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful
Aug 22 07:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful
Aug 22 08:42:34 node01 pmxcfs[22519]: [dcdb] notice: data verification successful

● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2023-08-11 11:42:36 CEST; 1 weeks 3 days ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 22524 (corosync)
Tasks: 9 (limit: 4915)
Memory: 147.6M
CGroup: /system.slice/corosync.service
└─22524 /usr/sbin/corosync -f

Aug 15 15:37:35 node01 corosync[22524]:   [QUORUM] Members[1]: 1
Aug 15 15:37:35 node01 corosync[22524]:   [MAIN  ] Completed service synchronization, ready to provide serv
Aug 15 15:37:36 node01 corosync[22524]:   [KNET  ] link: host: 2 link: 0 is down
Aug 15 15:37:36 node01 corosync[22524]:   [KNET  ] link: host: 2 link: 1 is down
Aug 15 15:37:36 node01 corosync[22524]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Aug 15 15:37:36 node01 corosync[22524]:   [KNET  ] host: host: 2 has no active links
Aug 15 15:37:36 node01 corosync[22524]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Aug 15 15:37:36 node01 corosync[22524]:   [KNET  ] host: host: 2 has no active links
Aug 15 16:20:45 node01 corosync[22524]:   [QUORUM] This node is within the primary component and will provi
Aug 15 16:20:45 node01 corosync[22524]:   [QUORUM] Members[1]: 1





Code:
root@node01:~# cat /etc/pve/corosync.conf
logging {
debug: off
to_syslog: yes
}

nodelist {
node {
name: node02
nodeid: 2
quorum_votes: 1
ring0_addr: 192.168.212.79
ring1_addr: 194.8.212.79
}
node {
name: node01
nodeid: 1
quorum_votes: 1
ring0_addr: 192.168.212.65
ring1_addr: 194.8.212.65
}
}

quorum {
provider: corosync_votequorum
}

totem {
cluster_name: dmz01
config_version: 2
interface {
linknumber: 0
}
interface {
linknumber: 1
}
ip_version: ipv4-6
link_mode: passive
secauth: on
version: 2
}





Code:
root@node01:~# cat /etc/corosync/corosync.conf
logging {
debug: off
to_syslog: yes
}

nodelist {
node {
name: node02
nodeid: 2
quorum_votes: 1
ring0_addr: 192.168.212.79
ring1_addr: 194.8.212.79
}
node {
name: node01
nodeid: 1
quorum_votes: 1
ring0_addr: 192.168.212.65
ring1_addr: 194.8.212.65
}
}

quorum {
provider: corosync_votequorum
}

totem {
cluster_name: dmz01
config_version: 2
interface {
linknumber: 0
}
interface {
linknumber: 1
}
ip_version: ipv4-6
link_mode: passive
secauth: on
version: 2
}





Code:
root@node01:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
address 194.8.212.65
netmask 255.255.255.0
gateway 194.8.212.8
bridge_ports eth0
bridge_stp off
bridge_fd 0


allow-hotplug eth1
iface eth1 inet static
address 192.168.212.65/24
gateway 0.0.0.0




Code:
root@node01:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
194.8.212.65 node01.dimedis.de node01 pvelocalhost
194.8.212.79 node02.dimedis.de node02

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts



Code:
root@node01:~# ls /etc/pve/qemu-server
100.conf  102.conf  104.conf  106.conf  108.conf  110.conf  114.conf  127.conf
101.conf  103.conf  105.conf  107.conf  109.conf  113.conf  115.conf






Code:
root@node02:~# systemctl status pve-cluster.service corosync.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; preset: enabled)
Active: active (running) since Tue 2023-08-15 16:08:17 CEST; 6 days ago
Process: 1352 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Main PID: 1362 (pmxcfs)
Tasks: 7 (limit: 309282)
Memory: 63.0M
CPU: 4min 37.568s
CGroup: /system.slice/pve-cluster.service
└─1362 /usr/bin/pmxcfs

Aug 15 16:08:16 node02 systemd[1]: Starting pve-cluster.service - The Proxmox VE cluster filesystem...
Aug 15 16:08:16 node02 pmxcfs[1352]: [main] notice: resolved node name 'node02' to '194.8.212.79' for default>
Aug 15 16:08:16 node02 pmxcfs[1352]: [main] notice: resolved node name 'node02' to '194.8.212.79' for default>
Aug 15 16:08:17 node02 systemd[1]: Started pve-cluster.service - The Proxmox VE cluster filesystem.

○ corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; preset: enabled)
Active: inactive (dead)
Condition: start condition failed at Tue 2023-08-15 16:08:17 CEST; 6 days ago
└─ ConditionPathExists=/etc/corosync/corosync.conf was not met
Docs: man:corosync
man:corosync.conf
man:corosync_overview

Aug 15 16:08:17 node02 systemd[1]: corosync.service - Corosync Cluster Engine was skipped because of an unmet


Code:
root@node02:~# cat /etc/pve/corosync.conf
cat: /etc/pve/corosync.conf: No such file or directory

Code:
root@node02:~# cat /etc/corosync/corosync.conf
cat: /etc/corosync/corosync.conf: No such file or directory



Code:
root@node02:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp1s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
address 194.8.212.79/24
gateway 194.8.212.8
bridge-ports eth0
bridge-stp off
bridge-fd 0

iface enp1s0f1 inet manual

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual




Code:
root@node02:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
194.8.212.79 node02.dimedis.de node02
194.8.212.65 node01.dimedis.de node01

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Code:
root@node02:~# ls /etc/pve/qemu-server
 
Okay, so apart from node02 still showing up in the corosync config, the rest looks good. Make sure to stop running guests and create a backup.
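For the backup step, something along these lines should work with vzdump (VM ID and storage name are placeholders):
Code:
# Stop-mode backup of a single guest to a given storage:
vzdump <vmid> --storage <storage-name> --mode stop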
Then, please try to run
Code:
pvecm delnode node02
pvecm status
on node01 and post the output.

If that does not remove the lingering node in the corosync config, you will have to follow the procedure as described here [0] before trying to rejoin the node to the cluster.

[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_separate_node_without_reinstall
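
In rough outline (please follow the linked guide [0] for the exact, current steps; this is only a sketch), that procedure boils down to stopping the cluster services on the node to be separated and removing its corosync configuration:

Code:
# On the node that should be separated from the cluster:
systemctl stop pve-cluster corosync

# Start pmxcfs in local mode so /etc/pve is writable without quorum:
pmxcfs -l

# Remove the corosync configuration:
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*

# Stop the local pmxcfs instance and start the regular service again:
killall pmxcfs
systemctl start pve-cluster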
 
Code:
root@node01:~# pvecm status
Cluster information
-------------------
Name:             dmz01
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Aug 22 16:26:45 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.2c
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.212.65 (local)

Nice. Now I can view the Join Information.
Thank you!
 
