Waiting for quorum

Numerix

Hi,

I have a problem I don't understand; thanks in advance for any help.

I have a 2-node cluster (without HA) spread across 2 sites linked by a VPN.
Each site has 1 PVE and 1 PBS.
Each PVE has 3 network interfaces:
  • one for PVE
  • one for backups
  • one for the VM network
Between the main site and the remote site, all interfaces ping each other.

Everything was working normally, but for some time now neither node has been able to see its partner.
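
To see what corosync itself thinks of the link, the knet link state and recent logs can be checked on each node; a quick sketch using the standard corosync/systemd tools (the output will of course differ):

Code:
# knet link status as corosync sees it
corosync-cfgtool -s
# recent corosync log entries for the current boot
journalctl -b -u corosync --no-pager | tail -n 50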
 
Site A


pveversion -v

Code:
proxmox-ve: 8.3.0 (running kernel: 6.8.12-2-pve)
pve-manager: 8.3.3 (running version: 8.3.3/f157a38b211595d6)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-2-pve-signed: 6.8.12-2
proxmox-kernel-6.8.8-3-pve-signed: 6.8.8-3
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.2.0
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.4
pve-cluster: 8.0.10
pve-container: 5.2.4
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.3.3
pve-qemu-kvm: 9.0.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve1
 
pvecm status

Code:
Cluster information
-------------------
Name:             CLUSTER
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Feb 26 09:33:40 2025
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.10bf
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          2 192.168.254.112 (local)
 
Site B

pveversion -v

Code:
proxmox-ve: 8.3.0 (running kernel: 6.8.12-8-pve)
pve-manager: 8.3.3 (running version: 8.3.3/f157a38b211595d6)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-2-pve-signed: 6.8.12-2
proxmox-kernel-6.8.8-3-pve-signed: 6.8.8-3
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.2.0
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.4
pve-cluster: 8.0.10
pve-container: 5.2.4
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.3.3
pve-qemu-kvm: 9.0.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve1
 
pvecm status

Code:
Cluster information
-------------------
Name:             CLUSTER
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Feb 26 09:31:21 2025
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          2.10ce
Quorate:          No

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      1
Quorum:           2 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 192.168.253.112 (local)
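
One thing I notice in the outputs above: each node only sees itself as a member, and node 1 carries 2 votes with 3 expected. Those vote counts come from the corosync configuration, which can be compared on both nodes (these are the standard PVE/corosync paths):

Code:
# cluster-wide copy managed by pmxcfs
cat /etc/pve/corosync.conf
# locally active copy actually used by corosync
cat /etc/corosync/corosync.conf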
 
No one has any ideas, or did I explain it badly?
I did not look at the details and did not have time earlier today. Two-node clusters are problematic because neither node has quorum when the other is down (or when the network between them is disconnected). Nodes that are far apart are also problematic, since corosync requires (very) low latency; otherwise both nodes may reboot. I fear that your setup is one that this forum and the documentation recommend against.
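
If a third full node is not an option, the usual recommendation for two-node clusters is an external QDevice, so that a single surviving node can still reach quorum. A rough sketch (the IP is a placeholder for a third machine, e.g. one of your PBS hosts):

Code:
# on the external host (Debian-based)
apt install corosync-qnetd
# on both PVE nodes
apt install corosync-qdevice
# then, from one PVE node
pvecm qdevice setup 192.0.2.10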
 
Thanks for the answer.
I know, but the choice of 2 PVE nodes was imposed by the IT department.
It worked fine for several weeks, and since it broke I haven't been able to find the cause.
The latency is 2-3 ms; we have a dedicated 1 Gbps fiber link.
It's very frustrating.
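
If it helps, corosync's own view of the link latency can also be pulled from its stats map, which may say more than ping does about what the cluster link actually sees (a sketch; exact key names can vary between corosync versions):

Code:
# average knet link latency as measured by corosync
corosync-cmapctl -m stats | grep -i latency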