Cluster not quorate - extending auth key lifetime!

HKRH
Hello everyone,

we have a 2-node cluster without HA, named pve1 and pve2. pve1 replicates VMs to pve2. Both systems ran well until last week.
Both systems run Proxmox VE 6.0-2, pve-manager 6.0-4, pve-kernel 6.0-5.

Now we are getting the following error messages:

on pve1:
Nov 29 15:31:06 pve1 corosync[2244]: [TOTEM ] A new membership (1:4014312) was formed. Members
Nov 29 15:31:06 pve1 corosync[2244]: [CPG ] downlist left_list: 0 received
Nov 29 15:31:06 pve1 corosync[2244]: [QUORUM] Members[1]: 1
Nov 29 15:31:06 pve1 corosync[2244]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pvesr[5238]: trying to acquire cfs lock 'file-replication_cfg' ...
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:07 pve1 pveproxy[2823]: Cluster not quorate - extending auth key lifetime!

on pve2:
Nov 29 15:31:24 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:25 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:25 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:26 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:26 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:27 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:27 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:28 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:29 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:29 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!
Nov 29 15:31:29 pve2 pveproxy[2058]: Cluster not quorate - extending auth key lifetime!

pve1 is marked green in the tree, pve2 is marked with a red cross.

When running pvecm status, we get this reply on both machines:

root@pve2:~# pvecm status
Quorum information
------------------
Date: Fri Nov 29 15:35:07 2019
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000002
Ring ID: 2/332
Quorate: No

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 2 Activity blocked
Flags:

Membership information
----------------------
Nodeid Votes Name
0x00000002 1 192.168.9.2 (local)


Any idea what happened?

Thank you very much for any help.

Greetings
Rudi
 
I'm having the same issue.


Code:
Nov 29 22:28:09 pve-vm09 systemd[1]: pvesr.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit pvesr.service has entered the 'failed' state with result 'exit-code'.
Nov 29 22:28:09 pve-vm09 systemd[1]: Failed to start Proxmox VE replication runner.
-- Subject: A start job for unit pvesr.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has finished with a failure.
--
-- The job identifier is 7625264 and the job result is failed.
Nov 29 22:28:14 pve-vm09 pveproxy[24592]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:14 pve-vm09 pvedaemon[63575]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:14 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:14 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pvedaemon[61568]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:26 pve-vm09 pvedaemon[62769]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pvedaemon[63575]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:34 pve-vm09 pveproxy[24592]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:41 pve-vm09 pveproxy[24592]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:42 pve-vm09 pveproxy[24593]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:42 pve-vm09 pvedaemon[62769]: Cluster not quorate - extending auth key lifetime!
Nov 29 22:28:42 pve-vm09 pveproxy[24594]: Cluster not quorate - extending auth key lifetime!


And because of this (I think), backups are failing.


Code:
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2019-11-29 22:00:02
INFO: status = running
INFO: unable to open file '/etc/pve/nodes/pve-vm08/qemu-server/100.conf.tmp.1837' - Permission denied
INFO: update VM 100: -lock backup
ERROR: Backup of VM 100 failed - command 'qm set 100 --lock backup' failed: exit code 2
INFO: Failed at 2019-11-29 22:00:03
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2019-11-29 22:00:04
INFO: status = running
INFO: unable to open file '/etc/pve/nodes/pve-vm08/qemu-server/101.conf.tmp.1841' - Permission denied
INFO: update VM 101: -lock backup
ERROR: Backup of VM 101 failed - command 'qm set 101 --lock backup' failed: exit code 2
INFO: Failed at 2019-11-29 22:00:05

Please let me know if you need more information.
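
For what it's worth, my understanding is that without quorum pmxcfs switches /etc/pve to read-only, which would explain the "Permission denied" in the backup log. A quick sanity check (just a sketch; the test file name is made up):

Code:
# is the cluster quorate?
pvecm status | grep -i quorate

# a write to the cluster file system should fail with "Permission denied"
# while the cluster is not quorate
touch /etc/pve/writetest && rm /etc/pve/writetest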
 
Please provide the output of systemctl status pve-cluster.service corosync.service on both nodes, as well as the output of pveversion -v.
 
Here are the outputs:

pve1-versions:
root@pve1:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

pve2-versions: identical to pve1 (the pveversion -v output matches line for line).

pve1-status:
root@pve1:~# systemctl status pve-cluster.service corosync.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2019-11-03 14:08:08 CET; 4 weeks 0 days ago
Process: 2066 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Process: 2239 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Main PID: 2095 (pmxcfs)
Tasks: 13 (limit: 4915)
Memory: 52.6M
CGroup: /system.slice/pve-cluster.service
└─2095 /usr/bin/pmxcfs

Dec 01 23:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 00:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 01:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 02:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 03:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 04:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 05:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 06:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 07:08:07 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful
Dec 02 08:08:08 pve1 pmxcfs[2095]: [dcdb] notice: data verification successful

● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2019-11-03 14:08:08 CET; 4 weeks 0 days ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 2244 (corosync)
Tasks: 9 (limit: 4915)
Memory: 191.4M
CGroup: /system.slice/corosync.service
└─2244 /usr/sbin/corosync -f

Dec 02 08:55:56 pve1 corosync[2244]: [QUORUM] Members[1]: 1
Dec 02 08:55:56 pve1 corosync[2244]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 02 08:55:58 pve1 corosync[2244]: [TOTEM ] A new membership (1:4670904) was formed. Members
Dec 02 08:55:58 pve1 corosync[2244]: [CPG ] downlist left_list: 0 received
Dec 02 08:55:58 pve1 corosync[2244]: [QUORUM] Members[1]: 1
Dec 02 08:55:58 pve1 corosync[2244]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 02 08:55:59 pve1 corosync[2244]: [TOTEM ] A new membership (1:4670908) was formed. Members
Dec 02 08:55:59 pve1 corosync[2244]: [CPG ] downlist left_list: 0 received
Dec 02 08:55:59 pve1 corosync[2244]: [QUORUM] Members[1]: 1
Dec 02 08:55:59 pve1 corosync[2244]: [MAIN ] Completed service synchronization, ready to provide service.

pve2-status:
root@pve2:~# systemctl status pve-cluster.service corosync.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2019-11-29 15:13:35 CET; 2 days ago
Process: 1824 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Process: 2006 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Main PID: 1857 (pmxcfs)
Tasks: 13 (limit: 4915)
Memory: 45.6M
CGroup: /system.slice/pve-cluster.service
└─1857 /usr/bin/pmxcfs

Dec 01 23:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 00:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 01:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 02:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 03:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 04:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 05:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 06:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 07:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful
Dec 02 08:13:34 pve2 pmxcfs[1857]: [dcdb] notice: data verification successful

● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2019-11-29 15:13:36 CET; 2 days ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 2008 (corosync)
Tasks: 9 (limit: 4915)
Memory: 185.6M
CGroup: /system.slice/corosync.service
└─2008 /usr/sbin/corosync -f

Nov 30 22:55:54 pve2 corosync[2008]: [KNET ] link: host: 1 link: 0 is down
Nov 30 22:55:54 pve2 corosync[2008]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Nov 30 22:55:54 pve2 corosync[2008]: [KNET ] host: host: 1 has no active links
Nov 30 22:55:55 pve2 corosync[2008]: [KNET ] rx: host: 1 link: 0 is up
Nov 30 22:55:55 pve2 corosync[2008]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Dec 01 12:14:14 pve2 corosync[2008]: [KNET ] link: host: 1 link: 0 is down
Dec 01 12:14:14 pve2 corosync[2008]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Dec 01 12:14:14 pve2 corosync[2008]: [KNET ] host: host: 1 has no active links
Dec 01 12:14:15 pve2 corosync[2008]: [KNET ] rx: host: 1 link: 0 is up
Dec 01 12:14:15 pve2 corosync[2008]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)

Additional information:
- Both systems have 2 NICs (1 NIC as business link (vmbr0), 1 NIC for the cluster (vmbr1))
- The cluster link is a separate VLAN network connection (port-based VLAN over a managed switch, without VLAN IDs) in a different subnet
- Both connections are up and working fine (a quick way to verify the corosync link is sketched below)
- The shell over the web UI will not open; error in the cluster log (PuTTY works fine):
end task UPID: pve1:00005C80:0ED492B9:5DE4C868:vncshell::root@pam: command '/usr/bin/termproxy 5900 --path /nodes/pve1 --perm Sys.Console -- /bin/login -f root' failed: exit code 4
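
To verify the corosync/knet link from each node, something like this should do (just a sketch; replace the peer address with the other node's cluster IP, e.g. in 192.168.9.x):

Code:
# link/ring status as corosync sees it on this node
corosync-cfgtool -s

# latency / packet loss on the dedicated cluster network
ping -c 100 <other-node-cluster-ip>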

Hope this information is OK. Feel free to ask for further information.

Thank you very much for your efforts.

Best regards,
Rudi
 
@HKRH please upgrade to the current version (the version of corosync/knet you are running had quite a few bugs!), which will also restart corosync and pve-cluster and hopefully resync the cluster file system. If you still experience issues afterwards, a full log ("journalctl -u corosync -u pve-cluster --since XXX", where XXX is the time of the upgrade) might shed some light.
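
Roughly like this on each node (a sketch; the --since timestamp is a placeholder to be replaced with the actual upgrade time):

Code:
apt update
apt dist-upgrade        # pulls in the fixed corosync/libknet packages and restarts the services

# afterwards, verify that corosync and the cluster file system resynced
systemctl status pve-cluster.service corosync.service
pvecm status

# full log since the upgrade
journalctl -u corosync -u pve-cluster --since "YYYY-MM-DD HH:MM"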
 
Thank you,

I will do the update next weekend and will let you know whether it works.

Best regards,
Rudi
 
Apologies for the necropost, but how do I go about updating corosync?

I'm having the same issue and wish to apply the suggested solution. Running pveversion -v reveals that I am also running corosync: 3.0.2-pve2. However, when I run apt update && apt upgrade, no new version of corosync is available (nor any other updates).
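
Presumably apt is still pointed only at the enterprise repository, which needs a valid subscription key. A sketch of switching a PVE 6.x / Debian buster node to the no-subscription repository, assuming the default file locations:

Code:
# disable the enterprise repo (it requires a subscription key)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# add the no-subscription repo for PVE 6.x (buster)
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

apt update
apt dist-upgrade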
 
Thank you, fabian. I'm adding a subscription, then will upgrade to 6.1 from the supported repos.
 
Hello, good afternoon.
I tried to add a node to the corosync.conf file, and when saving, one of the nodes that was already working correctly was left with an X. I don't know what to do; I have already tried several ways to fix it and nothing works.

[screenshot: 1606251455451.png]


This is the error it shows:
[screenshot: 1606251601181.png]

I hope you can help me.
Regards
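
In case it helps, the usual way to edit corosync.conf on a PVE cluster (a rough sketch, not verified against this setup; the cluster must be quorate for the change to propagate) is:

Code:
# work on a copy first
cp /etc/pve/corosync.conf /root/corosync.conf.new
nano /root/corosync.conf.new    # add the node entry and increment "config_version" in the totem section

# copying the file back under /etc/pve distributes it to all nodes
cp /root/corosync.conf.new /etc/pve/corosync.conf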
 
[quoting HKRH's original post above]

Hello, good afternoon,

the same thing is happening to me today.

Did you find a solution?

Regards, and many thanks in advance for reading us
:D
 
