Problem with cluster configuration, please help

Lamarus

Well-Known Member
Sep 18, 2017
Good day! I installed three Proxmox nodes from an image and tried to create a cluster. The command pvecm create proxtest on node1 was successful; then I ran pvecm add on node2:

Code:
root@pve:~# pvecm add 10.22.16.50
The authenticity of host '10.22.16.50 (10.22.16.50)' can't be established.
ECDSA key fingerprint is 1c:e8:06:20:76:4d:a0:89:f1:22:92:81:1f:af:b2:1b.
Are you sure you want to continue connecting (yes/no)? yes
root@10.22.16.50's password:
node pve already defined
copy corosync auth key
stopping pve-cluster service
Stopping pve cluster filesystem: pve-cluster.
backup old database
Starting pve cluster filesystem : pve-cluster.
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Tuning DLM kernel config... [ OK ]
Unfencing self... [ OK ]
waiting for quorum...

At this moment I ran the pvecm add ... command on node3 and got "unable to copy ssh ID". What am I doing wrong?

pvecm status on node1:
Code:
root@pve:~# pvecm status
Version: 6.2.0
Config Version: 1
Cluster Name: proxtest
Cluster Id: 28782
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1 
Active subsystems: 5
Flags: 
Ports Bound: 0 
Node name: pve
Node ID: 1
Multicast addresses: 239.192.112.222 
Node addresses: 10.22.16.50

pvecm status on node2:
Code:
root@pve:~# pvecm status
Version: 6.2.0
Config Version: 1
Cluster Name: proxtest
Cluster Id: 28782
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1 
Active subsystems: 5
Flags: 
Ports Bound: 0 
Node name: pve
Node ID: 1
Multicast addresses: 239.192.112.222 
Node addresses: 10.22.16.50
 
Your nodes all seem to have the same hostname.
 
You can change the hostname however you like. Restarting the network should suffice, but rebooting will definitely work.
Edit: Don't forget to adapt /etc/hosts accordingly.
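A minimal sketch of such a rename, assuming the second node should become pve-node2 (a placeholder name, as is the 10.22.16.51 address); the /etc/hosts step is demonstrated on a scratch copy rather than the real file:

```shell
# Sketch only: "pve-node2" and the IP are placeholders, not from this thread.
# On the real node you would first set the hostname, e.g.:
#   hostnamectl set-hostname pve-node2
# then adapt /etc/hosts to match. Shown here on a scratch copy:
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1 localhost.localdomain localhost
10.22.16.51 pve.1d pve pvelocalhost
EOF
# Rename every alias of the old short name "pve" to "pve-node2":
sed -i 's/pve\.1d pve pvelocalhost/pve-node2.1d pve-node2 pvelocalhost/' /tmp/hosts.demo
cat /tmp/hosts.demo
```

After the reboot (or network restart), `hostname` should report a unique name on every node before you retry pvecm add.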
 
If you want people to be able to help you need to give us more information. Exact error message, exact order of commands on all involved nodes. And the exact commands, of course.
 
There is no quorum between node1 and node2, I think. That's the reason the SSH key from node3 is not added to node1. What log or status do I need to post here?
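For reference, corosync only grants quorum to a partition holding a strict majority of the votes, floor(n/2) + 1; with two single-vote nodes that means both must see each other, which is why a node that cannot reach its peer hangs at "waiting for quorum". A trivial sketch of the arithmetic (my own illustration, not a pvecm command):

```shell
# Majority quorum: floor(n/2) + 1 votes, assuming one vote per node.
for nodes in 1 2 3; do
  echo "nodes=$nodes quorum=$(( nodes / 2 + 1 ))"
done
```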
 
For starters, the contents of your /etc/hosts file.
Code:
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost.localdomain localhost
109.229.162.50 pve.1d pve pvelocalhost
109.229.162.51 pve.node2
109.229.162.52 pve.node3

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

similar on all nodes.
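One possible corrected layout, assuming placeholder names pve1/pve2/pve3 (pick your own, the point is only that each node gets a unique short name): the duplicated 127.0.0.1 line is dropped, and each node keeps the pvelocalhost alias on its own line (shown here for node1; on node2 and node3 the alias moves to their respective lines). The IPv6 lines can stay as they are.

Code:
127.0.0.1 localhost.localdomain localhost
109.229.162.50 pve1.1d pve1 pvelocalhost
109.229.162.51 pve2.1d pve2
109.229.162.52 pve3.1d pve3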
 
