Separate a Node Without Reinstalling

Talha

Member
Jan 13, 2020
Hi, I tried to separate a node from the cluster without reinstalling it, but when I open the Proxmox administration GUI, the name of the node is still listed; it has no green icon and shows as offline.

Output from the node that needs to be separated:
Code:
root@prx-7:~# pvecm nodes
Error: Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster?
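For context, the usual procedure to separate a node without reinstalling it (as described in the Proxmox VE cluster documentation) is to stop the cluster services on that node and remove the corosync configuration locally. The following is a sketch of those steps; verify them against the documentation for your Proxmox VE version before running anything:

```shell
# Run ON THE NODE BEING SEPARATED only (sketch, not a definitive recipe).
systemctl stop pve-cluster corosync

# Restart the cluster filesystem in local mode so /etc/pve becomes writable
pmxcfs -l

# Remove the corosync configuration on this node
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*

# Stop the local-mode filesystem and start the normal service again
killall pmxcfs
systemctl start pve-cluster
```

If this node already shows the "Corosync config does not exist" error above, these local steps have effectively been done and only the cleanup on the remaining cluster nodes is left.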
 
Did you run pvecm delnode oldnode on another node in the cluster or on the node that should be separated? It needs to be run on a node that should still stay in the cluster afterwards.
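As an example, assuming the separated node was named prx-7 (taken from the prompt above), the removal would look like this on a node that stays in the cluster:

```shell
# Run on a node that REMAINS in the cluster; 'prx-7' is the example node name.
pvecm delnode prx-7

# Verify the membership afterwards
pvecm nodes
```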

What is the output of pvecm status on a node that is still in the cluster?
 
Do you still have configurations under /etc/pve/nodes/<removed-node>/ on your other nodes?
If so, remove those after making sure you no longer need them. Once removed, you should no longer see the node in the GUI.
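Assuming the removed node is called prx-7, the leftover configuration could be inspected and removed on a remaining node with something like:

```shell
# On a node that is still in the cluster; 'prx-7' is an example name.
# Check for leftover guest configs you might still need:
ls /etc/pve/nodes/prx-7/qemu-server /etc/pve/nodes/prx-7/lxc 2>/dev/null

# Once you are sure nothing in there is needed, remove the directory:
rm -r /etc/pve/nodes/prx-7
```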
 
root@prx-7:~# pvecm status
Error: Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster?
As far as I understand, this is the node that got separated; please run it on a node that is still part of the cluster.

Please also try the advice posted by mira above first, since this might already resolve your problem.
 
Do you still have configurations under /etc/pve/nodes/<removed-node>/ on your other nodes?
If so, remove those after making sure you no longer need them. Once removed, you should no longer see the node in the GUI.
When I remove /etc/pve/nodes/oldnode, the problem is solved on the cluster. However, the problem persists on the server that was removed from the cluster.
 
When I remove /etc/pve/nodes/oldnode, the problem is solved on the cluster. However, the problem persists on the server that was removed from the cluster.
You also need to remove the folders of the other nodes under /etc/pve/nodes/ on the node that you removed from the cluster.
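A sketch of that cleanup on the separated node itself (the node name prx-7 is an example; list the directory first and double-check before deleting anything):

```shell
# Run ON THE SEPARATED NODE. First see which node directories exist:
ls /etc/pve/nodes/

# Then remove the directories of the nodes that stayed in the cluster,
# keeping only this node's own directory ('prx-7' is an example name):
cd /etc/pve/nodes
for d in */; do
    [ "$d" != "prx-7/" ] && rm -r "$d"
done
```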
 
As far as I understand, this is the node that got separated; please run it on a node that is still part of the cluster.

Please also try the advice posted by mira above first, since this might already resolve your problem.
Code:
Cluster information
-------------------
Name:             PRX
Config Version:   16
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Dec  6 15:44:15 2022
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1.180d
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.***** (This is another running node. The deleted node does not appear here.)
0x00000002          1 10.***** (local)
 
