[SOLVED] Can't remove nodes, cfs-locked 'file-replication_cfg' operation: no quorum!

jayg30

Member
Nov 8, 2017
I've got a PVE 6.2-10 cluster with 3 nodes. It's using local ZFS with storage replication between nodes.
A number of things are changing in the environment, so I need to break the cluster apart and run everything on a single node.

I moved everything to node 1 and then turned off nodes 2 and 3. At this point I can't edit anything in the configuration from node 1 (disable certain backups, replication jobs, etc.). If I attempt to, I get the cfs-locked error mentioned in the title.
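To give an idea of what that looks like: any write to /etc/pve is rejected the same way while quorum is lost. The command below is only an illustration (the replication job ID is made up), it's the kind of change that fails with the cfs-lock 'file-replication_cfg' ... no quorum error from the title.

Code:
root@pve01:~# pvesr disable 100-0    # made-up job ID; any config write is blocked without quorum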

I attempted to follow the instructions HERE to remove nodes 2 and 3 from the cluster.
However, the nodes aren't even showing up, and I can't remove them:

Code:
root@pve01:~# pvecm nodes
Membership information
----------------------
    Nodeid      Votes Name
         3          1 pve01 (local)

root@pve01:~# pvecm delnode pve02
cluster not ready - no quorum?

root@pve01:~# pvecm status
Cluster information
-------------------
Name:             pve-cluster01
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Jan 19 02:35:13 2021
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000003
Ring ID:          3.105422
Quorate:          No

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 192.168.0.244 (local)

My guess is that since the cluster had 3 nodes and I took two of them offline at the same time, it dropped into read-only mode because it can no longer reach quorum. If I power one of the other nodes back up, quorum should be restored and the cluster should come out of read-only mode; then I could remove the node that is still offline, shut the powered-on node down again, and remove it as well. Basically removing the nodes one at a time. Would that work?
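For reference, the votequorum numbers in the status output above line up with that. Corosync uses a simple majority:

Code:
# Majority quorum with 3 expected votes:
#   quorum = floor(expected / 2) + 1 = floor(3 / 2) + 1 = 2
# Only 1 vote is present (this node), and 1 < 2, hence "Activity blocked".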

Is there perhaps another approach where I don't have to rack the server again and boot it?

Thank You.
 
Hi,
you can use pvecm expected 1 to set the number of expected votes to 1.
 
Thanks. I had to run these commands.
Code:
root@pve01:~# pvecm expected 1
root@pve01:~# pvecm delnode pve02
root@pve01:~# pvecm expected 1
root@pve01:~# pvecm delnode pve03

Setting expected votes to 1 on its own doesn't remove the old nodes, so I still had to run delnode for each of them. The first delnode changed the cluster config and apparently reset the expected-votes override, so quorum was lost again and I was blocked; I had to run pvecm expected 1 a second time before removing the other node.
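For completeness, this is what I'd run to double-check the end state (just the commands; what to look for is in the comments):

Code:
root@pve01:~# pvecm status                 # Expected/Total votes should both be 1 now, Quorate: Yes
root@pve01:~# pvecm nodes                  # only pve01 should be listed
root@pve01:~# cat /etc/pve/corosync.conf   # the pve02/pve03 node entries should be gone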

Everything looks correct in the summary (1 node).
However, the side pane in the GUI is still showing the old nodes.

Screenshot 2021-01-21 000156.jpg
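In case someone else ends up here with the same leftover entries: the side pane is built from the per-node directories under /etc/pve/nodes/, so the stale nodes should disappear once those directories are deleted. Treat this as a suggestion rather than something verified in this thread, and only do it if the removed nodes are never coming back.

Code:
# Only if pve02/pve03 are gone for good - this deletes their leftover configs
# (all guests were already moved to pve01 in this case).
root@pve01:~# rm -r /etc/pve/nodes/pve02 /etc/pve/nodes/pve03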
 
