Hello,
I have a 3-node cluster.
This morning the VPSes were up, but the cluster status was down: no quorum.
I have rebooted 2 of the 3 nodes (the ones without VPSes), but that did not solve it.
The LAN is OK, and the hostnames in /etc/hosts are OK.
I have also tried restarting the services on all 3 nodes:
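One thing I have not tested yet is multicast, which I understand corosync needs on this version. If it helps, I could run something like this on each node (the 10.10.10.2 / 10.10.10.3 addresses are just my guess here for the other two nodes):

```shell
# Test multicast between all cluster nodes; run on every node at the
# same time and check the packet loss it reports at the end.
omping -c 600 -i 1 -q 10.10.10.1 10.10.10.2 10.10.10.3
```

Would the result of that tell me whether the switch is dropping multicast?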
systemctl restart pve-cluster
systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
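I notice I did not restart corosync itself, only the PVE services. Should I also try this, one node at a time?

```shell
# Restart the cluster engine itself (not just the PVE services on top of it)
systemctl restart corosync
```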
But still no quorum:
pvecm status (same output on all 3 nodes):
Quorum information
------------------
Date: Thu Feb 14 14:09:46 2019
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1/4161200
Quorate: No
Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 1
Quorum: 2 Activity blocked
Flags:
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.10.10.1 (local)
-------------------------------------------------------------
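From reading the wiki, I understand that expected votes can be lowered temporarily so a single node becomes quorate again. Would this be safe to run on one node, just until the underlying network problem is found, and then reverted?

```shell
# Temporarily tell votequorum that 1 vote is enough to be quorate,
# so /etc/pve becomes writable again on this node only.
# Must be reverted (pvecm expected 3) once the cluster rejoins.
pvecm expected 1
```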
corosync and pve-cluster look OK on all 3 nodes:
systemctl status corosync pve-cluster
● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-02-14 13:31:04 CET; 33min ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 9977 (corosync)
Tasks: 2 (limit: 6144)
Memory: 46.3M
CPU: 35.491s
CGroup: /system.slice/corosync.service
└─9977 /usr/sbin/corosync -f
Feb 14 13:31:04 n1 corosync[9977]: [QUORUM] Members[1]: 1
Feb 14 13:31:04 n1 corosync[9977]: [MAIN ] Completed service synchronization, ready to provide service.
Feb 14 13:46:05 n1 corosync[9977]: notice [TOTEM ] A new membership (10.10.10.1:4158332) was formed. Members
Feb 14 13:46:05 n1 corosync[9977]: warning [CPG ] downlist left_list: 0 received
Feb 14 13:46:05 n1 corosync[9977]: notice [QUORUM] Members[1]: 1
Feb 14 13:46:05 n1 corosync[9977]: [TOTEM ] A new membership (10.10.10.1:4158332) was formed. Members
Feb 14 13:46:05 n1 corosync[9977]: notice [MAIN ] Completed service synchronization, ready to provide service.
Feb 14 13:46:05 n1 corosync[9977]: [CPG ] downlist left_list: 0 received
Feb 14 13:46:05 n1 corosync[9977]: [QUORUM] Members[1]: 1
Feb 14 13:46:05 n1 corosync[9977]: [MAIN ] Completed service synchronization, ready to provide service.
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-02-14 14:03:41 CET; 58s ago
Process: 1375 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Process: 1354 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Main PID: 1355 (pmxcfs)
Tasks: 5 (limit: 6144)
Memory: 38.2M
CPU: 522ms
CGroup: /system.slice/pve-cluster.service
└─1355 /usr/bin/pmxcfs
------------------------------------------------------------------------------------------------------------------
pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.18-7-pve)
pve-manager: 5.2-10 (running version: 5.2-10/6f892b40)
pve-kernel-4.15: 5.2-10
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-41
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-30
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-3
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-20
pve-cluster: 5.0-30
pve-container: 2.0-29
pve-docs: 5.2-9
pve-firewall: 3.0-14
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
pve-zsync: 1.7-1
qemu-server: 5.0-38
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.11-pve1~bpo1
-----------------------------------------------------------------------------------------------------------------
I can't stop a VM:
root@n1:~# qm shutdown 301
VM is locked (snapshot-delete)
and I can't unlock it:
root@n1:~# qm unlock 301
unable to open file '/etc/pve/nodes/n1/qemu-server/301.conf.tmp.956' - Permission denied
But I can read /etc/pve:
ls -la /etc/pve
total 13
drwxr-xr-x 2 root www-data 0 Jan 1 1970 .
drwxr-xr-x 92 root root 184 Nov 7 09:46 ..
-r--r----- 1 root www-data 451 Oct 24 21:06 authkey.pub
-r--r----- 1 root www-data 859 Jan 1 1970 .clusterlog
......
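I guess /etc/pve is read-only because pmxcfs has no quorum, which is why the unlock fails with "Permission denied". Would starting pmxcfs in local mode be an acceptable way to at least unlock the VM? Something like:

```shell
# Stop the cluster filesystem and restart it in local mode (-l),
# which does not require quorum; only to fix the lock, then restart normally.
systemctl stop pve-cluster
pmxcfs -l
qm unlock 301
```

Or is that dangerous while the other nodes are still running?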
Please help!
Thanks!!