Can't add node to cluster. [Solved]
I'm beta testing Proxmox VE 2.0.
I've built a two-node cluster (with VMFO2 and VMFO3).
I've upgraded both nodes to the latest release and everything works fine.
I then did a fresh install on a third server (VMFO1), and when I try to add it to the cluster I get this:
Code:
root@VMF01:~# pvecm add VMFO2
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
fd:f6:70:44:8b:5e:19:f4:c5:7b:07:8b:07:77:ff:69 root@VMF01
The key's randomart image is:
+--[ RSA 2048]----+
| ...|
| ..o.+|
| +o++|
| . .oo=+|
| S . ..= =|
| o o E.|
| = o |
| . + |
| . |
+-----------------+
The authenticity of host 'vmfo2 (X.X.X.102)' can't be established.
RSA key fingerprint is b5:1e:e0:89:e3:18:e9:60:2b:04:cd:1f:cd:f3:af:de.
Are you sure you want to continue connecting (yes/no)? yes
root@vmfo2's password:
copy corosync auth key
stopping pve-cluster service
Stopping pve cluster filesystem: pve-cluster.
backup old database
Starting pve cluster filesystem : pve-cluster.
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... Timed-out waiting for cluster
[FAILED]
cluster not ready - no quorum?
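For completeness, after the timeout I also went digging through the logs on the new node for corosync / cman messages, along these lines (standard Debian log paths on my install):
Code:
root@VMF01:~# grep -iE 'corosync|cman|totem' /var/log/syslog | tail -n 50
root@VMF01:~# grep -iE 'corosync|cman|totem' /var/log/daemon.log | tail -n 50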
Before you ask: I've checked the multicast connectivity through the switch, and everything is fine on that front. The strange thing is that the two working nodes report the cluster on a different multicast group!
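The multicast check itself was along the lines of an omping run started on all three nodes at roughly the same time (substitute your own node names or addresses):
Code:
root@VMF01:~# omping VMF01 VMFO2 VMFO3
root@VMFO2:~# omping VMF01 VMFO2 VMFO3
root@VMFO3:~# omping VMF01 VMFO2 VMFO3
Here is what each node currently reports: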
Code:
root@VMFO3:~# pvecm status
Version: 6.2.0
Config Version: 15
Cluster Name: ClusterFO
Cluster Id: 45483
Cluster Member: Yes
Cluster Generation: 2504
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: VMFO3
Node ID: 1
Multicast addresses: 239.192.177.93
Node addresses: X.X.X.103
root@VMFO3:~# pveversion -v
pve-manager: 2.0-30 (pve-manager/2.0/af79261b)
running kernel: 2.6.32-5-amd64
proxmox-ve-2.6.32: 2.0-60
pve-kernel-2.6.32-6-pve: 2.6.32-55
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-1
pve-cluster: 1.0-22
qemu-server: 2.0-18
pve-firmware: 1.0-15
libpve-common-perl: 1.0-14
libpve-access-control: 1.0-12
libpve-storage-perl: 2.0-11
vncterm: 1.0-2
vzctl: 3.0.30-2pve1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-3
ksm-control-daemon: 1.1-1
Code:
root@VMFO2:~# pvecm status
Version: 6.2.0
Config Version: 15
Cluster Name: ClusterFO
Cluster Id: 45483
Cluster Member: Yes
Cluster Generation: 2504
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: VMFO2
Node ID: 2
Multicast addresses: 239.192.177.93
Node addresses: X.X.X.102
root@VMFO2:~# pveversion -v
pve-manager: 2.0-30 (pve-manager/2.0/af79261b)
running kernel: 2.6.32-5-amd64
proxmox-ve-2.6.32: 2.0-60
pve-kernel-2.6.32-6-pve: 2.6.32-55
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-1
pve-cluster: 1.0-22
qemu-server: 2.0-18
pve-firmware: 1.0-15
libpve-common-perl: 1.0-14
libpve-access-control: 1.0-12
libpve-storage-perl: 2.0-11
vncterm: 1.0-2
vzctl: 3.0.30-2pve1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-3
ksm-control-daemon: 1.1-1
Code:
root@VMF01:~# pvecm status
Version: 6.2.0
Config Version: 15
Cluster Name: CLUSTERFO
Cluster Id: 37419
Cluster Member: Yes
Cluster Generation: 4
Membership state: Cluster-Member
Nodes: 1
Expected votes: 3
Total votes: 1
Node votes: 1
Quorum: 2 Activity blocked
Active subsystems: 1
Flags:
Ports Bound: 0
Node name: VMF01
Node ID: 3
Multicast addresses: 239.192.146.189
Node addresses: X.X.X.91
root@VMF01:~# pveversion -v
pve-manager: 2.0-30 (pve-manager/2.0/af79261b)
running kernel: 2.6.32-7-pve
proxmox-ve-2.6.32: 2.0-60
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-1
pve-cluster: 1.0-22
qemu-server: 2.0-18
pve-firmware: 1.0-15
libpve-common-perl: 1.0-14
libpve-access-control: 1.0-12
libpve-storage-perl: 2.0-11
vncterm: 1.0-2
vzctl: 3.0.30-2pve1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-3
ksm-control-daemon: 1.1-1
As you can see, the cluster ID also differs from the one reported by the working nodes...
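What I've mainly been comparing is the cman cluster configuration on each node, roughly like this (on my PVE 2.0 installs it lives under /etc/pve; adjust the path if yours differs):
Code:
root@VMFO2:~# grep -i '<cluster ' /etc/pve/cluster.conf
root@VMF01:~# grep -i '<cluster ' /etc/pve/cluster.conf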
All three nodes are on the latest version. What can I do to fix this? I've been fighting this issue for three days.
Thanks in advance