Quorum: 2 Activity blocked

bo.oss.ano

Apr 18, 2022
Hi,

I have a setup with 2 nodes, and recently I've observed the status below when running "pvecm status". In the GUI, one of the nodes has a red status.

root@carney:~# pvecm status
Quorum information
------------------
Date: Mon Apr 18 22:25:37 2022
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1/21972580
Quorate: No

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 2 Activity blocked
Flags:

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 172.31.240.153 (local)

Also, I've observed that the node with the red status can't be accessed via SSH using its external IP, only via its private/internal IP.

Can you please help me troubleshoot this?

Thanks!
 
In Proxmox you need a majority to get quorum. If you have a 2-node cluster and only one node is available, that's 50% of the nodes, which is *NOT* a majority. The same happens with any even number of nodes.
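
To put numbers on it: votequorum's majority is floor(N/2) + 1 votes, so with N = 2 you need floor(2/2) + 1 = 2 votes to be quorate. A lone node holding a single vote can therefore never reach quorum by itself.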

In your case, the node you ran pvecm status on told you that it was expecting 2 votes but the current total is just 1, meaning that node is out of quorum and its activity is blocked.

Did you run pvecm status on the other node? You should get similar output, but with different "Membership information".

Take a look at your /etc/pve/corosync.conf file. There you will find the address(es) your setup uses for cluster communication (in nodelist, under each node, ringX_address).

Check that you can reach each node from the other one. Also check whether you have the firewall enabled on the nodes.
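
For example, something along these lines from each node, with the peer's name swapped in (corosync 2.x normally uses UDP ports 5404-5405 for cluster traffic):

root@carney:~# ping -c 3 charcot
root@carney:~# iptables -L INPUT -n | grep -E '5404|5405'

The first checks basic reachability via the name corosync will resolve; the second shows whether any firewall rules touch corosync's ports.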
 
Thanks for your feedback!

Please find the status of both nodes below.

(Working node)
root@charcot:~# pvecm status
Quorum information
------------------
Date: Tue Apr 19 12:00:32 2022
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000002
Ring ID: 2/21972504
Quorate: No

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 2 Activity blocked
Flags:

Membership information
----------------------
Nodeid Votes Name
0x00000002 1 172.31.240.154 (local)


(Not working node)
root@carney:~# pvecm status
Quorum information
------------------
Date: Tue Apr 19 12:00:58 2022
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1/21972580
Quorate: No

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 2 Activity blocked
Flags:

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 172.31.240.153 (local)


Both nodes have the firewall enabled.

The nodes can reach each other only via their private/internal IPs (192.168.100.x), not via the IPs shown in "pvecm status", i.e. 172.31.240.153 and 172.31.240.154.
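
For example, from carney (192.168.100.x below stands for charcot's private IP):

root@carney:~# ping -c 3 192.168.100.x    # replies
root@carney:~# ping -c 3 172.31.240.154   # times out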

And below is the content of /etc/pve/corosync.conf:

root@carney:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: charcot
    nodeid: 2
    quorum_votes: 1
    ring0_addr: charcot
  }

  node {
    name: carney
    nodeid: 1
    quorum_votes: 1
    ring0_addr: carney
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: raindrop
  config_version: 2
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 172.31.240.153
    ringnumber: 0
  }
}
 
root@charcot:~# pveversion -v
proxmox-ve: 4.3-70 (running kernel: 4.4.21-1-pve)
pve-manager: 4.3-7 (running version: 4.3-7/db02a4de)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.21-1-pve: 4.4.21-70
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.10-1-pve: 4.4.10-54
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-92
pve-firmware: 1.1-10
libpve-common-perl: 4.0-76
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-67
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-12
pve-qemu-kvm: 2.7.0-4
pve-container: 1.0-78
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve12~bpo80
drbdmanage: 0.97.3-1

This is the version Proxmox was installed with; it has never been updated.
 
Did you read the link I gave you? You need to understand how this works, or you'll never sort it out properly.

How are the names "charcot" and "carney" resolved on each node? Using /etc/hosts? DNS?

Which IP do they resolve to on each host?

Corosync will use the names in its config and resolve them to an IP. If your firewall does not allow cluster traffic using those IPs... well, you have no cluster :)
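
A quick way to see what corosync will actually get, e.g.:

root@carney:~# getent hosts charcot carney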

Either allow cluster traffic on the IPs your corosync-configured hostnames resolve to, OR change the IPs the hostnames resolve to (this may have other implications).
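
For the first option, just as an illustration (the exact rule depends on which firewall you run; the PVE firewall has its own configuration), a raw iptables rule on carney allowing corosync traffic from its peer might look like:

root@carney:~# iptables -A INPUT -p udp -s 172.31.240.154 --dport 5404:5405 -j ACCEPT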
 
The names "charcot" and "carney" are resolved by /etc/hosts
And they are resolved to private/internal IPs (192.168.100.X)

I will try configuration to use those private IPs instead of hostnames
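
Something along these lines in /etc/pve/corosync.conf, if I understood correctly (the 192.168.100.x values are placeholders for each node's actual private IP, and config_version has to be bumped when editing the file; presumably totem's bindnetaddr would need to point at the 192.168.100.0 network as well):

nodelist {
  node {
    name: charcot
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.100.x  # charcot's private IP
  }

  node {
    name: carney
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.100.x  # carney's private IP
  }
}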
 
