cluster problem

procop

New Member
Mar 16, 2021
I was trying to move corosync to a dedicated 1G network, but now I have two or three clusters running in parallel, and /etc/pve is read-only on both of them.
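For reference, the documented way to move the corosync links is to edit a working copy of /etc/pve/corosync.conf on a quorate node, point each node's ring0_addr at the new network, bump config_version, and only then move the copy into place so pmxcfs hands the finished file to every node at once. A rough sketch, not verbatim from the docs, so double-check against the Proxmox VE admin guide:

cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
nano /etc/pve/corosync.conf.new    # change the ring0_addr entries, increase config_version
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
# pmxcfs distributes the file; if corosync does not pick it up on its own, restart it with: systemctl restart corosync

This only works while the cluster still has quorum; once /etc/pve goes read-only the config has to be fixed locally in /etc/corosync/corosync.conf on each node instead.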


Cluster A:

vms11:~# pvecm status
Cluster information
-------------------
Name:             cluster01
Config Version:   8
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Jul 22 14:40:43 2021
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000003
Ring ID:          3.ba39
Quorate:          No

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      2
Quorum:           3 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 172.16.254.111 (local)
0x00000005          1 172.16.1.20



vms11:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: vms07
    nodeid: 5
    quorum_votes: 1
    ring0_addr: 172.16.1.20
  }
  node {
    name: vms08
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 172.16.1.21
  }
  node {
    name: vms09
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.16.1.82
  }
  node {
    name: vms10
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 172.16.1.83
  }
  node {
    name: vms11
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 172.16.254.111
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster01
  config_version: 8
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

Cluster B:
vms09:~# pvecm status
Cluster information
-------------------
Name:             cluster01
Config Version:   8
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Jul 22 14:43:29 2021
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1.fd4d
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 172.16.1.82 (local)
0x00000002          1 172.16.1.83
0x00000004          1 172.16.1.21

vms09:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: vms07
    nodeid: 5
    quorum_votes: 1
    ring0_addr: 172.16.1.20
  }
  node {
    name: vms08
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 172.16.1.21
  }
  node {
    name: vms09
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.16.1.82
  }
  node {
    name: vms10
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 172.16.1.83
  }
  node {
    name: vms11
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 172.16.254.111
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster01
  config_version: 8
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
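Reading the two status outputs together: both partitions still expect 5 votes, so quorum needs 3 of them. The vms09/vms10/vms08 partition holds 3 votes and reports Quorate: Yes, while the vms11/vms07 partition only holds 2, which is why it shows "Activity blocked" and pmxcfs keeps /etc/pve read-only at least on that side. If write access is needed on one partition before the split is resolved, the expected-vote count can be lowered temporarily; this is a last-resort workaround and only safe if changes are made on one side only:

pvecm expected 2    # match the number of votes actually present in the vms11/vms07 partition
# quorum for that partition drops to 2, so it becomes quorate and /etc/pve writable again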
 
Check the syslog for corosync errors, and also check the service status with systemctl status corosync.service.
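For example, something along these lines (exact output will differ):

journalctl -u corosync -b       # corosync log messages since the last boot
journalctl -u pve-cluster -b    # pmxcfs / pve-cluster messages
systemctl status corosync.service pve-cluster.service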
 
I have tried many things without success. In the end I removed node vms11 from the cluster without reinstalling it, following the documentation, deleted /etc/pve/nodes/* on vms11, and added it back to the cluster. Luckily for me it went smoothly.
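For anyone who runs into the same situation: the "separate a node without reinstalling" procedure from the Proxmox VE admin guide looks roughly like the sketch below (node name vms11 as used in this thread; verify the exact steps against the current documentation before running them, since mistakes here are hard to undo):

# on a node that is still part of the quorate cluster
pvecm delnode vms11

# on vms11 itself
systemctl stop pve-cluster corosync
pmxcfs -l                        # start pmxcfs in local mode so /etc/pve is writable
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster

# once the node is clean, re-join it from vms11
pvecm add <IP-of-an-existing-cluster-node>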
 
