[SOLVED] WARN: non-default quorum_votes distribution detected!

ptrj

Member
Dec 16, 2020
Hi,

I manage two different pve clusters on different sites.
When running 'pve7to8' on one of the sites I got the following warning:

Analzying quorum settings and state..
WARN: non-default quorum_votes distribution detected! <==== The warning I don't know how to solve.
INFO: configured votes - nodes: 3
INFO: configured votes - qdevice: 0
INFO: current expected votes: 3
INFO: current total votes: 3

pvecm status
Cluster information
-------------------
Name: srv-vmm-cl-01
Config Version: 40
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Thu Dec 14 15:17:13 2023
Quorum provider: corosync_votequorum
Nodes: 4
Node ID: 0x00000001
Ring ID: 1.11b66
Quorate: Yes

Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 172.20.0.21 (local)
0x00000002 1 172.20.0.22
0x00000003 1 172.20.0.23
0x00000004 0 172.20.0.24

systemctl status corosync.service
● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2023-12-13 16:16:20 CET; 23h ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 5659 (corosync)
Tasks: 9 (limit: 618365)
Memory: 140.4M
CPU: 14min 20.942s
CGroup: /system.slice/corosync.service
└─5659 /usr/sbin/corosync -f

Dec 13 16:16:24 pve-hst-01 corosync[5659]: [TOTEM ] A new membership (1.11b66) was formed. Members joined: 2 3 4
Dec 13 16:16:24 pve-hst-01 corosync[5659]: [QUORUM] This node is within the primary component and will provide service.
Dec 13 16:16:24 pve-hst-01 corosync[5659]: [QUORUM] Members[4]: 1 2 3 4
Dec 13 16:16:24 pve-hst-01 corosync[5659]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 14 13:25:42 pve-hst-01 corosync[5659]: [CFG ] Config reload requested by node 1
Dec 14 13:25:42 pve-hst-01 corosync[5659]: [TOTEM ] Configuring link 0
Dec 14 13:25:42 pve-hst-01 corosync[5659]: [TOTEM ] Configured link number 0: local addr: 172.20.0.21, port=5405
Dec 14 13:25:42 pve-hst-01 corosync[5659]: [TOTEM ] Configuring link 1
Dec 14 13:25:42 pve-hst-01 corosync[5659]: [TOTEM ] Configured link number 1: local addr: 172.20.70.21, port=5406
Dec 14 13:25:42 pve-hst-01 corosync[5659]: [KNET ] pmtud: MTU manually set to: 0

journalctl -b -u pve-cluster
Dec 14 13:25:42 pve-hst-01 pmxcfs[4769]: [status] notice: update cluster info (cluster name srv-vmm-cl-01, version = 40)
Dec 14 13:30:20 pve-hst-01 pmxcfs[4769]: [status] notice: received log
Dec 14 13:46:20 pve-hst-01 pmxcfs[4769]: [status] notice: received log
Dec 14 14:02:20 pve-hst-01 pmxcfs[4769]: [status] notice: received log
Dec 14 14:16:19 pve-hst-01 pmxcfs[4769]: [dcdb] notice: data verification successful
Dec 14 14:18:20 pve-hst-01 pmxcfs[4769]: [status] notice: received log
Dec 14 14:34:20 pve-hst-01 pmxcfs[4769]: [status] notice: received log
Dec 14 14:50:20 pve-hst-01 pmxcfs[4769]: [status] notice: received log
Dec 14 15:16:19 pve-hst-01 pmxcfs[4769]: [dcdb] notice: data verification successful

corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve-hst-01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.20.0.21
    ring1_addr: 172.20.70.21
  }
  node {
    name: pve-hst-02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 172.20.0.22
    ring1_addr: 172.20.70.22
  }
  node {
    name: pve-hst-03
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 172.20.0.23
    ring1_addr: 172.20.70.23
  }
  node {
    name: pve-hst-04
    nodeid: 4
    quorum_votes: 0
    ring0_addr: 172.20.0.24
    ring1_addr: 172.20.70.24
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: srv-vmm-cl-01
  config_version: 40
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4
  link_mode: passive
  secauth: on
  version: 2
}

corosync-cfgtool -n
Local node ID 1, transport knet
nodeid: 2 reachable
LINK: 0 udp (172.20.0.21->172.20.0.22) enabled connected mtu: 1397
LINK: 1 udp (172.20.70.21->172.20.70.22) enabled connected mtu: 1397

nodeid: 3 reachable
LINK: 0 udp (172.20.0.21->172.20.0.23) enabled connected mtu: 1397
LINK: 1 udp (172.20.70.21->172.20.70.23) enabled connected mtu: 1397

nodeid: 4 reachable
LINK: 0 udp (172.20.0.21->172.20.0.24) enabled connected mtu: 1397
LINK: 1 udp (172.20.70.21->172.20.70.24) enabled connected mtu: 1397

corosync-cfgtool -s
Local node ID 1, transport knet
LINK ID 0 udp
addr = 172.20.0.21
status:
nodeid: 1: localhost
nodeid: 2: connected
nodeid: 3: connected
nodeid: 4: connected
LINK ID 1 udp
addr = 172.20.70.21
status:
nodeid: 1: localhost
nodeid: 2: connected
nodeid: 3: connected
nodeid: 4: connected

The other site, which only has 3 nodes, doesn't show this warning. Other than that, everything is the same.

Edit:
Added pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.131-2-pve)
pve-manager: 7.4-17 (running version: 7.4-17/513c62be)
pve-kernel-5.15: 7.4-9
pve-kernel-5.15.131-2-pve: 5.15.131-3
pve-kernel-5.15.116-1-pve: 5.15.116-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
openvswitch-switch: 2.15.0+ds1-2+deb11u4
proxmox-backup-client: 2.4.4-1
proxmox-backup-file-restore: 2.4.4-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-5
pve-firmware: 3.6-6
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.14-pve1
 
172.20.0.24 has ZERO votes?
 
Yes, that is correct!
I do all my tests on this particular machine, and also some passthrough.
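
In case anyone wants the warning gone rather than explained: as noted in the reply further down, the check trips on any node whose quorum_votes is not 1, so restoring the default for pve-hst-04 silences it. A minimal sketch of the edited entry in /etc/pve/corosync.conf (remembering to bump config_version under totem, e.g. 40 -> 41, so the change is picked up and propagated):

node {
  name: pve-hst-04
  nodeid: 4
  quorum_votes: 1
  ring0_addr: 172.20.0.24
  ring1_addr: 172.20.70.24
}

That puts the cluster back on the standard 1+1+1+1 distribution; leaving the node at 0 votes is equally valid if it is only used for tests.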
 
Darn, that was a quick answer. Thanks a lot! That's an answer I could not manage to find.
Would it be better, with a 4-node setup, to have a 2+2+2+1 vote distribution?
Or should I just put on a blindfold and ignore it?

Thanks for your quick support.
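
For reference, under the standard votequorum majority rule (quorum = floor(total_votes / 2) + 1, which matches the "Quorum: 2" shown in the pvecm output above), the distributions being discussed work out roughly as follows:

1+1+1+0 (current): total 3, quorum 2 - any single voting node can fail, and pve-hst-04 never affects quorum.
1+1+1+1 (default): total 4, quorum 3 - any single node can fail, but an even 2/2 split loses quorum on both sides, which is why a QDevice is usually suggested for even node counts.
2+2+2+1: total 7, quorum 4 - any single node can fail, but losing two of the 2-vote nodes leaves only 3 votes and quorum is lost; it buys little over 1+1+1+1 and still triggers the warning.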
 
Well, any time there is anything other than 1 vote per node, you get that warning. It is just a warning, and at face value it literally says that something other than the default vote distribution is present in the cluster. Other than it being printed/logged, there are no consequences to it at that point in time.

If you change it to anything other than 1+1+1+1 you WILL keep getting the warning. I think the real question is why you are worried about having 4x1 votes. On a separate note, for the OCD satisfaction, the cleanest setup would be to add a +1 QDevice. A hack would be to patch the script. Most people would just ignore it (it's enough that you acknowledged it by going to check what is going on, basically). But I have to say, 0 votes is a first for me. I artificially give 2 votes to a selected node, e.g. when I have only two nodes. With 3+, I would not bother with any of this at all.
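
For completeness, the QDevice route mentioned above is just two packages and one command. A rough sketch, assuming an external Debian host reachable at 172.20.0.30 (hypothetical address), root SSH access to it from the cluster, and that the nodes have been put back to 1 vote each first (the ffsplit algorithm used by the QDevice expects exactly one vote per node):

# on the external host
apt install corosync-qnetd

# on every cluster node
apt install corosync-qdevice

# on one cluster node, register the external vote
pvecm qdevice setup 172.20.0.30

Afterwards 'pvecm status' should show a Qdevice entry, and the 'configured votes - qdevice' line in pve7to8 should read 1 instead of 0.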
 