Thanks Matthias!

Yes, you need quorum to log into the GUI. If you want to make modifications, one option is an external quorum device: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
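Setting up a qdevice is more of a planned-ahead solution; if you are already locked out of the one remaining node, a common stop-gap (assuming the other node really is permanently gone, since this undermines split-brain protection) is to lower the expected vote count so the node becomes quorate again:

```shell
# On the remaining node: tell votequorum to expect only 1 vote.
# Caution: only do this if the other node is permanently gone,
# otherwise you risk a split-brain situation.
pvecm expected 1
```

After that, /etc/pve becomes writable again and GUI login should work.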
pvecm delnode <NAME>
with running VMs on the machine. Obviously, I did not shut down the machine either. Strangely, on another node my backups do not work now either (even though I am only backing up to the local drive in the same physical machine). What can I do now to somehow fix this best?

It seems that is what you have already done: removing the second node from your cluster and resetting all corosync configurations there (and perhaps adding it again after that).

What can I do now to somehow fix this best?
It sounds like there is still a leftover configuration folder for the other node on each node. However, to make sure, please first post the output of

pvecm status

and

cat /etc/pve/corosync.conf

I read I have to remove a folder?

Thinking in terms of slave and master is not useful when working with a Proxmox cluster. Each node should be able to do any management task.

OK, I could now restore my slave using this post:
pvecm status

Cluster information
-------------------
Name:             node1
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Mon Jul  4 18:02:49 2022
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.6a
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.3.2 (local)
cat /etc/pve/corosync.conf

logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.3.2
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.3.3
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: node1
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
That worked perfectly. All solved. Thanks a ton. Great forum!

Are both outputs from node1? The output of both commands from node2 could also be useful here.
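If it helps, the same checks on the other node would look like this (run on node2; the extra `ls` is just a quick way to see which per-node configuration folders still exist):

```shell
# On node2: show cluster membership and the corosync config as seen there
pvecm status
cat /etc/pve/corosync.conf

# List the per-node configuration folders under /etc/pve
ls /etc/pve/nodes/
```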
It is a bit weird though, now that I read it again.
1. In the very beginning, when you issued pvecm delnode for the first time, the node should have been removed from the corosync config already.
2. As far as I am aware, the GUI uses the corosync config to determine what to display. The commands from the post you linked above should have also removed the corosync config on node2, though. It seems like they accidentally synced again (?)
I'd advise you to try the steps in the post above again and separate the node without reinstalling.
When the nodes no longer see each other and are no longer visible in the config, it should be safe to remove the configuration folder in /etc/pve/nodes/<nodename>. After that you should be able to add the node to the cluster again, if you want to do so.
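For reference, the "separate a node without reinstalling" procedure from the Cluster_Manager wiki boils down to something like the following sketch (run on the node you want to detach, here assumed to be node2; double-check against the wiki before running anything):

```shell
# On the node to detach (assumed: node2) -- stop the cluster services
systemctl stop pve-cluster corosync

# Start pmxcfs in local mode so /etc/pve is writable without quorum
pmxcfs -l

# Remove the corosync configuration
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*

# Stop the local-mode pmxcfs and restart the normal service
killall pmxcfs
systemctl start pve-cluster
```

Afterwards, running pvecm delnode node2 on a remaining cluster member and then removing /etc/pve/nodes/node2 (once nothing references it anymore) completes the separation.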