I've got a PVE 6.2-10 cluster with 3 nodes. It's using local ZFS with storage replication between nodes.
A number of things are changing in the environment, so I need to break the cluster apart and run everything on just one node.
I moved everything to node 1 and then shut down nodes 2 and 3. At this point I can't edit anything in the configuration from node 1 (disable certain backups, replication jobs, etc.). If I attempt to, I get the cfs-lock error mentioned.
I attempted to follow the instructions HERE to remove nodes 2 and 3 from the cluster.
However, the nodes aren't even showing up, and I can't remove them:
Code:
root@pve01:~# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         3          1 pve01 (local)

root@pve01:~# pvecm delnode pve02
cluster not ready - no quorum?

root@pve01:~# pvecm status
Cluster information
-------------------
Name:             pve-cluster01
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Tue Jan 19 02:35:13 2021
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000003
Ring ID:          3.105422
Quorate:          No

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 192.168.0.244 (local)
My guess is that since my cluster had 3 nodes and I took two of them offline at the same time, the cluster lost quorum and went into read-only mode. If I boot one of the other nodes back up, the cluster should become quorate and writable again. Then I could remove the node that is still offline, shut the booted node down again, and remove it as well, i.e. remove each node one at a time. Would that work?
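Roughly this sequence, where pve02 is the node from the output above and pve03 is just my guess at the third node's name:

Code:
# Boot node 2 so the cluster has 2 of 3 votes and is quorate again.
# Then, from node 1, remove the node that is still powered off:
pvecm delnode pve03
# Shut node 2 down again, then remove it too
# (not sure this last step works, since quorum would be lost again):
pvecm delnode pve02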
Is there perhaps another approach where I don't have to rack the server again and boot it?
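For example, if I understand the pvecm docs correctly, something like this might let the single remaining node become quorate again without powering anything back on (pve03 again being my assumption for the third node's name):

Code:
# Tell corosync to expect only 1 vote so this node is quorate on its own
pvecm expected 1
# Then remove the two offline nodes from the cluster config
pvecm delnode pve02
pvecm delnode pve03

Is that a safe thing to do, or would it cause problems later?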
Thank You.