I've restarted (stop/start) pve-cluster and corosync on the whole cluster.
Permissions on /var/lib/pve-cluster/config.db are 0600.
/etc/pve is accessible again and the cluster is operational.
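For reference, the recovery described above can be sketched as the following sequence (run on each node as root; service and file names are the standard Proxmox ones, but treat this as a sketch, not an official procedure):

```shell
# Stop the cluster filesystem first, then corosync, so pmxcfs shuts
# down cleanly before the membership layer goes away.
systemctl stop pve-cluster
systemctl stop corosync

# Bring corosync back first so pmxcfs can rejoin the cluster on start.
systemctl start corosync
systemctl start pve-cluster

# Sanity-check the cluster config database permissions (expected: 600 root:root)
stat -c '%a %U:%G' /var/lib/pve-cluster/config.db
```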
Since we're operational now, for consistency we've decided to do a fresh install of the entire cluster (one node at a time)...
Update: Cluster of 6 nodes; /etc/pve is not accessible on three of them (ls /etc/pve just hangs), which is why lots of commands just hang.
node1: /etc/pve hangs
node2: /etc/pve works
node3: /etc/pve hangs
node4: /etc/pve hangs
node5: /etc/pve works
node6: /etc/pve works
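A quick way to check which nodes hang without blocking your shell on a stuck FUSE mount is to wrap the ls in a timeout. This is just an illustrative helper (the probe function and node names are my own, not from the thread):

```shell
# probe: list a directory but give up after 5 seconds, so a hung
# /etc/pve (the pmxcfs FUSE mount) doesn't block the whole shell.
probe() {
    timeout 5 ls "$1" >/dev/null 2>&1 && echo OK || echo HANG
}

# Run locally on one node:
probe /etc/pve

# Or across the cluster via ssh (node names are placeholders):
# for n in node1 node2 node3 node4 node5 node6; do
#     printf '%s: ' "$n"
#     ssh "$n" "timeout 5 ls /etc/pve >/dev/null 2>&1 && echo OK || echo HANG"
# done
```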
All nodes say they have quorum...
# pvecm status
Quorum information
------------------
Date: Sat May 11 11:41:01 2019
Quorum provider: corosync_votequorum
Nodes: 6
Node ID: 0x00000001
Ring ID: 1/78560
Quorate: Yes
Votequorum information
----------------------
Expected...
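Worth noting: "Quorate: Yes" only means enough votes are present in the membership; it says nothing about whether pmxcfs is healthy on each node, which is why /etc/pve can hang on quorate nodes. With default votequorum settings (one vote per node), the threshold is a simple majority:

```shell
# Majority quorum for N single-vote nodes is floor(N/2) + 1
N=6
echo "quorum threshold: $(( N / 2 + 1 ))"   # for 6 nodes: 4
```

So this cluster stays quorate as long as any 4 of the 6 nodes are in the corosync membership, even if pmxcfs is wedged on some of them.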
Hi,
We have a cluster of 6 Proxmox nodes running various versions of Proxmox 5.2 and 5.3.
After upgrading one of the nodes to 5.4, we are seeing multiple problems across the entire cluster:
- VMs don't start correctly:
May 10 15:50:01 rwb070 pvedaemon[3101]: <root@pam> starting task...