[SOLVED] Node not showing on "pvecm nodes"

subjectx
Nov 4, 2020
Greetings,

So I decided to put two nodes into a cluster.

I created the cluster on node1 and joined the cluster from node2.

Datacenter on node1 shows two nodes under Cluster Nodes, although above it still says "Standalone node - no cluster defined".
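For reference, the create/join steps boil down to roughly this (cluster name and IP taken from the output later in this thread; adjust for your setup):

```shell
# On node1: create the cluster
pvecm create CDiUL

# On node2: join the cluster, pointing at node1's address
pvecm add 192.168.4.5
```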
(screenshot attached: Screenshot_1.png)

Command pvecm nodes on node1 only lists one node:
root@ark:~# pvecm nodes

Membership information
----------------------
Nodeid Votes Name
1 1 ark (local)


I tried restarting the pve-cluster service, but nothing changed.
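(For anyone following along, restarting the relevant services and checking their state looks something like this; run on the affected node:)

```shell
# Restart the cluster filesystem service and corosync, then inspect their status
systemctl restart pve-cluster corosync
systemctl status pve-cluster corosync --no-pager
```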

I wanted to re-add the node, so on node2 I did:

# stop the cluster services
systemctl stop pve-cluster
systemctl stop corosync

# start the cluster filesystem in local mode
pmxcfs -l

# remove the corosync configuration
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*

# stop the local-mode filesystem and restart the service normally
killall pmxcfs
systemctl start pve-cluster

expecting node2 to disappear from this list, but it's still there.

The only command I haven't run is pvecm delnode 2, since I cannot list that node with "pvecm nodes".

"pvecm status" returns

root@ark:~# pvecm status
Cluster information
-------------------
Name: CDiUL
Config Version: 2
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Tue Jan 26 16:11:05 2021
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1.5
Quorate: No

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 2 Activity blocked
Flags:

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.4.5 (local)

Please advise me how to remove node2 from the cluster on node1, and then how to remove the whole cluster from node1.
node2 is empty (freshly installed Proxmox), so there is no data there, while node1 runs some production VMs.

Thank you.
 
Hi,
Only one command is not done, pvecm delnode 2, since I cannot list it with "pvecm nodes".
On node 1, you should be able to execute this command after setting the number of expected votes to 1 with pvecm expected 1. If that doesn't work, what does /etc/pve/corosync.conf look like?

Be aware that you need to reinstall node 2 if you ever want it to join this cluster again.
I'm not sure there's a good way to completely get rid of the cluster once it's been initialized. I'd suggest keeping node 1 around as a single-node "cluster".
 
Note: I don't know how to use the syntax for code highlighting..

It is interesting since pvecm nodes shows this:

root@ark:~# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 ark (local)

While root@ark:~# nano /etc/pve/corosync.conf shows this:

nodelist {
  node {
    name: ark
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.4.5
    ring1_addr: 192.168.4.6
    ring2_addr: 212.235.181.28
  }
  node {
    name: backup
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.4.3
    ring1_addr: 192.168.4.4
    ring2_addr: 212.235.181.29
  }
}

I did pvecm expected 1.

pvecm delnode 2 returns:
error during cfs-locked 'file-corosync_conf' operation: Node/IP: 2 is not a known host of the cluster.
 
For inputting a block of code, you can use [CODE]your text here[/CODE].

pvecm nodes/status only show the active nodes IIRC.

For the deletion, it has to be pvecm delnode <name of the node>, not just the number, see man pvecm.
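In this thread's case, with the stale entry named backup in the corosync.conf posted above, that would be something like (run on node1):

```shell
# Drop expected votes so the remaining single node is quorate,
# then remove the stale node by the name listed in /etc/pve/corosync.conf
pvecm expected 1
pvecm delnode backup
```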
 
Ahh, I see, sorry for my mistake.

I have managed to delete it now.

I will probably reinstall node2 from scratch and try to re-add it, hoping it doesn't error again..
 
