Hi,
I installed a second PVE on different hardware to test the cluster options (a two-node cluster, just to test and probably to migrate to new hardware later).
Now I want to remove the whole cluster again so that I end up with a single running node.
I have joined the second system and have a running cluster:
Node1 (PVE):
root@pve:~# pvecm status
Cluster information
-------------------
Name: pmxclusterpe
Config Version: 2
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Sun Dec 10 12:59:13 2023
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.40
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.99.17 (local)
0x00000002 1 192.168.99.18
Node2 PVEHP:
root@pvehp:~# pvecm status
Cluster information
-------------------
Name: pmxclusterpe
Config Version: 2
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Sun Dec 10 13:00:31 2023
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000002
Ring ID: 1.49
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.99.17
0x00000002 1 192.168.99.18 (local)
So when I try to remove the second node "PVEHP" again with "pvecm delnode pvehp", I get the following error:
trying to acquire cfs lock 'file-corosync_conf' ...
Killing node 2
unable to open file '/etc/pve/corosync.conf.new.tmp.7202' - Permission denied
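As far as I understand the documented procedure, the node to be removed is supposed to be powered off first, and delnode run from the node that stays. Roughly (my reading of the wiki, not something I have gotten to work yet):

# on the remaining node (pve), after shutting pvehp down:
pvecm expected 1      # a two-node cluster loses quorum once pvehp is off
pvecm delnode pvehp   # remove the (now dead) node from the cluster config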
If I restart the pve-cluster and corosync services on the node I am trying to delete, the cluster is running again and everything works.
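(The restart being something like the following, on pvehp:)

systemctl restart corosync pve-cluster   # node rejoins and the cluster is quorate again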
So my question is: how do I remove/delete the whole cluster to get back to the state before the cluster was created, i.e. a running single-node system?
The trick of deleting corosync.conf as described in https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node does not work under 8.1.3, because the /etc/pve directory is not available when pve-cluster/corosync are not running
(and while they are running, corosync.conf is write-protected).
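For reference, the sequence from that wiki section, as I understand it, is the following. The pmxcfs -l local-mode step is supposed to make /etc/pve writable while the cluster services are stopped, but maybe I am missing something under 8.1.3:

systemctl stop pve-cluster corosync
pmxcfs -l                      # start the cluster filesystem in local mode; /etc/pve becomes writable
rm /etc/pve/corosync.conf      # remove the cluster config from pmxcfs
rm -r /etc/corosync/*          # remove the local corosync config
killall pmxcfs                 # stop the local-mode instance again
systemctl start pve-cluster    # the node should now come up standalone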
I don't think adding a third node to the cluster would solve my problem, because once I remove one node I am back to a two-node cluster again...
I would really appreciate help with this problem.