[SOLVED] No quorum error

Proxmox is designed by dilettantes: putting node availability and management into one protocol is sometimes not acceptable even in home systems, and there are dozens of structural design faults that make the system totally unusable for enterprise scenarios.
This works for me: two nodes, one of them dead.
 
Oh, you only have 2 nodes and 1 of them has died?

You can follow this:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_separate_node_without_reinstal

Code:
systemctl stop pve-cluster corosync   # stop the cluster services
pmxcfs -l                             # remount /etc/pve in local (read-write) mode
rm /etc/corosync/*                    # remove the corosync configuration
rm /etc/pve/corosync.conf
killall pmxcfs
systemctl start pve-cluster           # restart without any cluster config
Brilliant. I had a node stuck in a cluster by itself for YEARS

Could not clear anything inside of /etc/pve/* because of the “no quorum” error. The pmxcfs command forced a r/w mount on the directory and allowed me to completely clear the rest of the node settings.

Couldn’t thank you enough!
 
Brilliant. I had a node stuck in a cluster by itself for YEARS

Could not clear anything inside of /etc/pve/* because of the “no quorum” error. The pmxcfs command forced a r/w mount on the directory and allowed me to completely clear the rest of the node settings.

Couldn’t thank you enough!
Phew, after hours of looking for a solution... thank you!
 
It was a crazy ending to the thread!

I am running a home environment. I don't have shared storage.

Lots of ideas here.


I guess I am wondering: since I don't have shared storage, what is the best way to keep the cluster from locking up? Basically I'm just trying to have one management area for all nodes: add shared storage once, one place for backup jobs, etc.


Code:
pvecm expected 1

On startup via cron or something?

I'd like to have 5 nodes, but only 2 of them actually count; the other 3 are just in random states of on and off.
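
If you want that command applied automatically at startup, one sketch is a root cron entry. Hedged: the 60-second delay and the path-free invocation are assumptions (you may need the full path to pvecm in cron's restricted PATH), and this permanently masks quorum loss, so it is a troubleshooting hack rather than a fix:

Code:
# crontab -e as root; wait for the cluster stack to come up,
# then force the expected vote count down to 1
@reboot sleep 60 && pvecm expected 1

A systemd unit ordered after pve-cluster.service would be cleaner, but the cron line matches the "on startup cron" idea above.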


Or this seemed reasonable too.

Bash:
nano /etc/pve/corosync.conf


Locate the "quorum" section and change the values so that a single vote is enough for quorum. The section should look like this:

Code:
quorum {
  provider: corosync_votequorum
  expected_votes: 1
  two_node: 0
  wait_for_all: 0
}
 
Hello,

In my home lab environment, I added a low-end PC to a cluster, just to check how clustering works.
But this PC is not powerful enough to run even a single VM.
Removing it and resetting the cluster is not that easy, and as I'm running some useful services, I don't want to restart from scratch.

Thanks to @ColeTrain, here is a synthesis of the easiest way (for me) to fix this issue:

Code:
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
nano /etc/pve/corosync.conf.new

# set the quorum section of the new file to:
quorum {
  provider: corosync_votequorum
  expected_votes: 1
  two_node: 0
  wait_for_all: 0
}

# back up the original, then put the new file in place
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
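
One caveat worth adding to the synthesis above: corosync (and pmxcfs) only accept an edited corosync.conf as a newer version when the config_version value in the totem section has been incremented, so bump it while editing the .new file. Fragment of the assumed file layout; the number is illustrative:

Code:
totem {
  # ...existing totem settings stay unchanged...
  config_version: 3   # increment the previous value by one
}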

To check that it's working fine:
Code:
systemctl status corosync
journalctl -b -u corosync
pvecm status

and if needed:
Code:
systemctl restart corosync
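
To make that quorum check scriptable (for monitoring or cron), here is a minimal sketch that greps the votequorum output for the blocked state. The pvecm output excerpt is an assumption of the usual format; on a real node you would use the commented pvecm line instead of the sample here-doc:

```shell
#!/bin/sh
# On a real Proxmox node, use: status_output=$(pvecm status)
# Sample (assumed) excerpt used here so the sketch is self-contained:
status_output=$(cat <<'EOF'
Votequorum information
----------------------
Expected votes:   2
Total votes:      1
Quorum:           2 Activity blocked
EOF
)

# corosync marks an inquorate node with "Activity blocked" on the Quorum line
if printf '%s\n' "$status_output" | grep -q 'Activity blocked'; then
    echo "no quorum"   # prints: no quorum (for this sample)
else
    echo "quorate"
fi
```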
 
quorum {
  provider: corosync_votequorum
  expected_votes: 1
  two_node: 0
  wait_for_all: 0
}
Changing this setting is only for troubleshooting, not for running in production, even in a homelab. Seriously, if you don't need a cluster or don't have the hardware for it: don't cluster. You will only make your life harder, without any benefit.
 
Changing this setting is only for troubleshooting, not for running in production, even in a homelab. Seriously, if you don't need a cluster or don't have the hardware for it: don't cluster. You will only make your life harder, without any benefit.
Ok, thanks for the advice.
I'll need a cluster, as I'm running my e-mail server and other stuff on it.
But I still have to build another server.

So, to fix that temporary issue, it's not that bad, is it? Can that configuration cause trouble while upgrading or running other operations?
In any case, if my server “decides” to die right now, I can restore from backup manually.