Cluster not quorate - extending auth key lifetime!

HKRH

Hello everyone,

we have a 2-node cluster without HA, named pve1 and pve2. pve1 replicates VMs to pve2. Both systems ran well until last week.
Both systems run Proxmox VE 6.0-2, pve-manager 6.0-4, pve-kernel 6.0-5.

Now we get the following error messages:

on pve1:
Nov 29 15:31:06 pve1 corosync[2244]: [TOTEM ] A new membership (1:4014312) was formed. Members
Nov 29 15:31:06 pve1 corosync[2244]: [CPG ] downlist left_list: 0 received
Nov 29 15:31:06 pve1 corosync[2244]: [QUORUM] Members[1]: 1
Nov 29 15:31:06 pve1 corosync[2244]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pvesr[5238]: trying to acquire cfs lock 'file-replication_cfg' ...
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!

on pve2:
Nov 29 15:31:24 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:25 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:25 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:26 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:26 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:27 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:27 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:28 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:29 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:29 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:29 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!

pve1 is marked green in the tree, pve2 is marked with a red cross.

When running pvecm status we get this reply on both machines:

root@pve2:~# pvecm status
Quorum information
------------------
Date: Fri Nov 29 15:35:07 2019
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000002
Ring ID: 2/332
Quorate: No

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 2 Activity blocked
Flags:

Membership information
----------------------
Nodeid Votes Name
0x00000002 1 192.168.9.2 (local)


Any idea what happened?

Thank you very much for any help.

Greetings
Rudi
 
I'm having the same issue.


Code:
Nov 29 22:28:09 pve-vm09 systemd[1]: pvesr.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit pvesr.service has entered the 'failed' state with result 'exit-code'.
Nov 29 22:28:09 pve-vm09 systemd[1]: Failed to start Proxmox VE replication runner.
-- Subject: A start job for unit pvesr.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has finished with a failure.
--
-- The job identifier is 7625264 and the job result is failed.
Nov 29 22:28:14 pve-vm09 pveproxy[24592]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:14 pve-vm09 pvedaemon[63575]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:14 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:14 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pvedaemon[61568]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pvedaemon[62769]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pvedaemon[63575]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pveproxy[24592]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:41 pve-vm09 pveproxy[24592]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:42 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:42 pve-vm09 pvedaemon[62769]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:42 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
lines 1603-1643/1643 (END)


And because of this (I think), backups are failing.


Code:
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2019-11-29 22:00:02
INFO: status = running
INFO: unable to open file '/etc/pve/nodes/pve-vm08/qemu-server/100.conf.tmp.1837' - Permission denied
INFO: update VM 100: -lock backup
ERROR: Backup of VM 100 failed - command 'qm set 100 --lock backup' failed: exit code 2
INFO: Failed at 2019-11-29 22:00:03
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2019-11-29 22:00:04
INFO: status = running
INFO: unable to open file '/etc/pve/nodes/pve-vm08/qemu-server/101.conf.tmp.1841' - Permission denied
INFO: update VM 101: -lock backup
ERROR: Backup of VM 101 failed - command 'qm set 101 --lock backup' failed: exit code 2
INFO: Failed at 2019-11-29 22:00:05

Please let me know if you need more information.
 
Please provide the output of systemctl status pve-cluster.service corosync.service on both nodes, as well as the output of pveversion -v.
 
Here are the outputs:

pve1-versions:
root@pve1:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

pve2-versions (identical):
root@pve2:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

pve1-status:
root@pve1:~# systemctl status pve-cluster.service corosync.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2019-11-03 14:08:08 CET; 4 weeks 0 days ago
Process: 2066 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Process: 2239 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Main PID: 2095 (pmxcfs)
Tasks: 13 (limit: 4915)
Memory: 52.6M
CGroup: /system.slice/pve-cluster.service
└─2095 /usr/bin/pmxcfs

Dec 01 23:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 00:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 01:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 02:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 03:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 04:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 05:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 06:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 07:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 08:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful

● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2019-11-03 14:08:08 CET; 4 weeks 0 days ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 2244 (corosync)
Tasks: 9 (limit: 4915)
Memory: 191.4M
CGroup: /system.slice/corosync.service
└─2244 /usr/sbin/corosync -f

Dec 02 08:55:56 pve1 corosync[2244]: [QUORUM] Members[1]: 1
Dec 02 08:55:56 pve1 corosync[2244]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 02 08:55:58 pve1 corosync[2244]: [TOTEM ] A new membership (1:4670904) was formed. Members
Dec 02 08:55:58 pve1 corosync[2244]: [CPG ] downlist left_list: 0 received
Dec 02 08:55:58 pve1 corosync[2244]: [QUORUM] Members[1]: 1
Dec 02 08:55:58 pve1 corosync[2244]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 02 08:55:59 pve1 corosync[2244]: [TOTEM ] A new membership (1:4670908) was formed. Members
Dec 02 08:55:59 pve1 corosync[2244]: [CPG ] downlist left_list: 0 received
Dec 02 08:55:59 pve1 corosync[2244]: [QUORUM] Members[1]: 1
Dec 02 08:55:59 pve1 corosync[2244]: [MAIN ] Completed service synchronization, ready to provide service.

pve2-status:
root@pve2:~# systemctl status pve-cluster.service corosync.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2019-11-29 15:13:35 CET; 2 days ago
Process: 1824 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Process: 2006 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Main PID: 1857 (pmxcfs)
Tasks: 13 (limit: 4915)
Memory: 45.6M
CGroup: /system.slice/pve-cluster.service
└─1857 /usr/bin/pmxcfs

Dec 01 23:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 00:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 01:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 02:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 03:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 04:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 05:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 06:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 07:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 08:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful

● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2019-11-29 15:13:36 CET; 2 days ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 2008 (corosync)
Tasks: 9 (limit: 4915)
Memory: 185.6M
CGroup: /system.slice/corosync.service
└─2008 /usr/sbin/corosync -f

Nov 30 22:55:54 pve2 corosync[2008]: [KNET ] link: host: 1 link: 0 is down
Nov 30 22:55:54 pve2 corosync[2008]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Nov 30 22:55:54 pve2 corosync[2008]: [KNET ] host: host: 1 has no active links
Nov 30 22:55:55 pve2 corosync[2008]: [KNET ] rx: host: 1 link: 0 is up
Nov 30 22:55:55 pve2 corosync[2008]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Dec 01 12:14:14 pve2 corosync[2008]: [KNET ] link: host: 1 link: 0 is down
Dec 01 12:14:14 pve2 corosync[2008]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Dec 01 12:14:14 pve2 corosync[2008]: [KNET ] host: host: 1 has no active links
Dec 01 12:14:15 pve2 corosync[2008]: [KNET ] rx: host: 1 link: 0 is up
Dec 01 12:14:15 pve2 corosync[2008]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)

Additional information:
- Both systems have 2 NICs (one NIC as business link (vmbr0), one NIC for the cluster (vmbr1))
- The cluster link is a separate VLAN network connection (port VLAN over a managed switch, without VLAN IDs) in a different subnet
- Both connections are up and working fine (see the quick link check sketched below)
- The shell in the web UI will not open; there is an error in the cluster log (PuTTY works fine):
end task UPID: pve1:00005C80:0ED492B9:5DE4C868:vncshell::root@pam: command '/usr/bin/termproxy 5900 --path /nodes/pve1 --perm Sys.Console -- /bin/login -f root' failed: exit code 4
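
A quick way to double-check the dedicated cluster link, as a sketch (192.168.9.1 is only an assumption for the peer's cluster address; the membership output above shows 192.168.9.2 as the local node):

Code:
# show the state of the knet links as corosync sees them
corosync-cfgtool -s

# basic reachability/latency check on the dedicated cluster network
# (192.168.9.1 is an assumed peer address, not taken from the logs above)
ping -c 10 192.168.9.1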

Hope this information is OK. Feel free to ask for further information.

Thank you very much for your efforts.

Best regards,
Rudi
 
@HKRH please upgrade to the current version (the version of corosync/knet you are running had quite some bugs!), which will also restart corosync and pve-cluster and hopefully resync the cluster file system. If you still experience issues afterwards, a full log ("journalctl -u corosync -u pve-cluster --since XXX", where XXX is the time of the upgrade) might shed some light.
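
A minimal sketch of those steps, assuming a PVE 6.x package repository you can pull updates from is already configured (replace the --since placeholder with the actual upgrade time):

Code:
# refresh package lists and pull in the current PVE 6.x packages
# (includes the corosync/knet fixes mentioned above)
apt update
apt full-upgrade

# the upgrade restarts corosync and pve-cluster; verify they came back up
systemctl status corosync pve-cluster

# check membership and quorum afterwards
pvecm status

# if problems persist, collect the logs since the upgrade
journalctl -u corosync -u pve-cluster --since "YYYY-MM-DD HH:MM"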
 
Thank you,

I will do the update next weekend and will let you know whether it works or not.

Best regards,
Rudi
 
Apologies for the necropost, but how do I go about updating corosync?

I'm having the same issue and wish to apply the suggested solution. Running pveversion -v reveals that I am also running corosync: 3.0.2-pve2. However, when I apt update && apt upgrade, no new version of corosync is available (or any other updates).
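
For reference, I suspect only the enterprise repository (which needs a subscription key) is configured on my nodes, which would explain why apt finds nothing. A sketch of what enabling the no-subscription repository would look like, assuming PVE 6.x on Debian Buster (the target file name below is just a convention, not something shipped by PVE):

Code:
# the enterprise repo requires a valid subscription; comment it out if you do not have one
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# add the pve-no-subscription repository for PVE 6.x on Debian Buster
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

apt update
apt full-upgrade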
 
Thank you, fabian. I'm adding a subscription, then will upgrade to 6.1 from the supported repos.
 
Hello, good afternoon.
I tried to add a node to the corosync.conf file, and when saving, it left one of the nodes that was already set up correctly marked with an X. I don't know what to do; I have already tried several ways to fix it and it won't let me.

[attached image: 1606251455451.png]


This is the error it shows:
[attached image: 1606251601181.png]

I hope you can help me.
Regards
 
(quoted: HKRH's original post above)

Hello, good afternoon,

the same thing is happening to me today.

Did you find a solution?

Regards, and thank you very much in advance for reading us.
:D
 
Hi guys,

I have 3 nodes with Ceph storage and I have an issue with the shell, please help.

The shell gives this error: Undefined Code: 1006

pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
pvedaemon[62769]: Cluster not quorate - extending auth key lifetime!

root@pve2:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-8-pve)
pve-manager: 8.3.3 (running version: 8.3.3/f157a38b211595d6)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph: 18.2.4-pve3
ceph-fuse: 18.2.4-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.4
pve-cluster: 8.0.10
pve-container: 5.2.3
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.3.3
pve-qemu-kvm: 9.0.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve1


root@pve2:~# systemctl status pve-cluster.service corosync.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; preset: enabled)
Active: active (running) since Fri 2025-02-07 19:24:53 IST; 2 days ago
Main PID: 1946 (pmxcfs)
Tasks: 11 (limit: 308962)
Memory: 53.2M
CPU: 2min 59.680s
CGroup: /system.slice/pve-cluster.service
└─1946 /usr/bin/pmxcfs

Feb 10 04:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 05:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 06:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 07:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 08:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 09:20:44 pve2 pmxcfs[1946]: [status] notice: received log
Feb 10 09:20:49 pve2 pmxcfs[1946]: [status] notice: received log
Feb 10 09:20:59 pve2 pmxcfs[1946]: [status] notice: received log
Feb 10 09:27:07 pve2 pmxcfs[1946]: [dcdb] notice: data verification successful
Feb 10 09:35:38 pve2 pmxcfs[1946]: [status] notice: received log

● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; preset: enabled)
Active: active (running) since Fri 2025-02-07 19:24:54 IST; 2 days ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 2071 (corosync)
Tasks: 9 (limit: 308962)
Memory: 140.5M
CPU: 32min 22.989s
CGroup: /system.slice/corosync.service
└─2071 /usr/sbin/corosync -f

Feb 07 19:27:12 pve2 corosync[2071]: [KNET ] pmtud: Global data MTU changed to: 1397
Feb 07 19:28:50 pve2 corosync[2071]: [KNET ] link: Resetting MTU for link 0 because host 3 joined
Feb 07 19:28:50 pve2 corosync[2071]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Feb 07 19:28:50 pve2 corosync[2071]: [QUORUM] Sync members[3]: 1 2 3
Feb 07 19:28:50 pve2 corosync[2071]: [QUORUM] Sync joined[1]: 3
Feb 07 19:28:50 pve2 corosync[2071]: [TOTEM ] A new membership (1.94) was formed. Members joined: 3
Feb 07 19:28:50 pve2 corosync[2071]: [QUORUM] Members[3]: 1 2 3
Feb 07 19:28:50 pve2 corosync[2071]: [MAIN ] Completed service synchronization, ready to provide service.
Feb 07 19:28:50 pve2 corosync[2071]: [KNET ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 1397
Feb 07 19:28:50 pve2 corosync[2071]: [KNET ] pmtud: Global data MTU changed to: 1397
 
what does "pvecm status" say?
 
what does "pvecm status" say?
root@pve2:~# pvecm status
Cluster information
-------------------
Name: SRIBCLuster
Config Version: 6
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Mon Feb 10 13:51:37 2025
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000002
Ring ID: 1.94
Quorate: No

Votequorum information
----------------------
Expected votes: 6
Highest expected: 6
Total votes: 3
Quorum: 4 Activity blocked
Flags:

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 x.x.x.1
0x00000002 1 x.x.x.2 (local)
0x00000003 1 x.x.x.3
root@pve2:~#
 
and your corosync.conf ?
 
and your corosync.conf ?
root@pve2:~# cat /etc/corosync/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: x.x.x.1
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: x.x.x.2
  }
  node {
    name: pve3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: x.x.x.3
  }
  node {
    name: pve7601
    nodeid: 4
    quorum_votes: 1
    ring0_addr: x.x.x.11
  }
  node {
    name: pve7602
    nodeid: 5
    quorum_votes: 1
    ring0_addr: x.x.x.12
  }
  node {
    name: pve7603
    nodeid: 6
    quorum_votes: 1
    ring0_addr: x.x.x.13
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: proxCLuster
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

root@pve2:~#


Note: nodes pve7601 to pve7603 are not available; their OS was reinstalled and they need to be re-added to this cluster.
 
Well, if half of your cluster is down, then yes, you will lose quorum... either remove them properly or bring them back up ;)
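
A minimal sketch of the "remove them properly" path, assuming the three dead nodes should simply be dropped for now (pvecm delnode needs a quorate cluster, which is why the expected vote count is lowered first; run this on one of the three live nodes):

Code:
# lower the expected vote count so the three remaining nodes are quorate again
pvecm expected 3

# remove the dead nodes from the cluster configuration
pvecm delnode pve7601
pvecm delnode pve7602
pvecm delnode pve7603

# verify that quorum is restored
pvecm status

The reinstalled machines can then be joined again later with pvecm add (or via the GUI) once they are ready.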