While trying to add another node to a 4-node test cluster, I ran into an issue like this:
root@node6:~# pvecm add node1
The authenticity of host 'node1 (xx.xx.xx.xx)' can't be established.
ECDSA key fingerprint is da:a7:df:e7:ff:8f:0f:1a:82:82:1b:e1:e6:49:3d:30.
Are you sure you want to continue connecting (yes/no)? yes
copy corosync auth key
stopping pve-cluster service
Stopping pve cluster filesystem: pve-cluster.
backup old database
Starting pve cluster filesystem : pve-cluster.
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... Timed-out waiting for cluster
[FAILED]
It just keeps waiting for quorum, and the node address seems to be set to localhost:
root@node6:~# pvecm status
Version: 6.2.0
Config Version: 8
Cluster Name: sprawlcl
Cluster Id: 28778
Cluster Member: Yes
Cluster Generation: 12
Membership state: Cluster-Member
Nodes: 1
Expected votes: 5
Total votes: 1
Node votes: 1
Quorum: 3 Activity blocked
Active subsystems: 2
Flags:
Ports Bound: 0
Node name: node6
Node ID: 5
Multicast addresses: 239.192.112.218
Node addresses: 127.0.0.1
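My guess is that node6's own hostname resolves to 127.0.0.1 instead of its LAN address, so the cluster stack binds to loopback. A quick check along these lines should confirm it (just a sketch; xx.xx.xx.xx stands for the node's real address, as in the output above):

root@node6:~# hostname
root@node6:~# getent hosts "$(hostname)"        # I expect this to print 127.0.0.1 rather than the LAN IP
root@node6:~# grep -n "$(hostname)" /etc/hosts  # looking for a line like "127.0.0.1 node6" or "127.0.1.1 node6"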
After that, I had no issue adding another node, node7, which now reports:
root@node7:/# pvecm status
Version: 6.2.0
Config Version: 11
Cluster Name: sprawlcl
Cluster Id: 28778
Cluster Member: Yes
Cluster Generation: 236
Membership state: Cluster-Member
Nodes: 5
Expected votes: 5
Total votes: 5
Node votes: 1
Quorum: 3
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: node7
Node ID: 6
Multicast addresses: 239.192.112.218
Node addresses: xx.xx.xx.xx
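If the hosts entry really is the cause, my plan for node6 (untested, so I'd appreciate confirmation that this is safe on a node that is already half-joined) would be roughly:

root@node6:~# nano /etc/hosts            # change the node6 entry from 127.0.0.1 to its real LAN IP
root@node6:~# service pve-cluster restart
root@node6:~# service cman restart
root@node6:~# pvecm status               # hoping "Node addresses" then shows the LAN IP and quorum is reached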
Any hints on how to resolve this are appreciated (other than just reinstalling the whole server).
TIA!