Adding node crashes cluster

bensode

Member
Jan 9, 2019
Harrisburg, PA
Proxmox 6.0-7, 5 nodes. When I added a 6th node, the entire cluster went down: the GUI became unresponsive on all nodes, and I couldn't log into the newly added node at all from the UI. I shelled into each of the nodes and ran pvecm status with the result below. I've restarted corosync on each node and waited a few moments, but it doesn't sync up, and I lose connectivity to most if not all of the VMs already on the cluster until I shut down the newly added node.

Code:
Quorum information
------------------
Date:             Mon Sep 30 08:57:11 2019
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1/11680
Quorate:          No

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      1
Quorum:           4 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.3.145 (local)
 
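For reference, the commands I ran on each node were along these lines (just a service restart and a status check; the exact invocations may have varied slightly):

Code:
# via SSH on each node
systemctl restart corosync
# wait a few moments, then check quorum/membership
pvecm status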
I'd like to add that the new node displays in the GUI but not in the node list. How can I remove the node properly now so I can rebuild it?

Code:
root@prdssdpve01:/etc/init.d# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 prdssdpve01 (local)
         2          1 prdssdpve02
         3          1 prdssdpve03
         4          1 prdssdpve04
         5          1 prdssdpve05
root@prdssdpve01:/etc/init.d# pvecm status
Quorum information
------------------
Date:             Mon Sep 30 09:46:22 2019
Quorum provider:  corosync_votequorum
Nodes:            5
Node ID:          0x00000001
Ring ID:          1/11772
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   6
Highest expected: 6
Total votes:      5
Quorum:           4
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.3.145 (local)
0x00000002          1 10.0.3.146
0x00000003          1 10.0.3.147
0x00000004          1 10.0.3.148
0x00000005          1 10.0.3.149
root@prdssdpve01:/etc/init.d#
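From the pvecm man page, I'm guessing the removal would look something like the sketch below, run from one of the remaining nodes with the new node powered off (prdssdpve06 is my assumed name for the 6th node). Is that the right approach here?

Code:
# run on a surviving cluster member while the new node stays offline
# (prdssdpve06 is assumed; substitute the actual node name)
pvecm delnode prdssdpve06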