unable to copy ssh ID

bd5hty

When adding a node to the cluster, I get the error: unable to copy ssh ID.

Code:
root@prox-0-160:~# pvecm add 192.168.0.161
The authenticity of host '192.168.0.161 (192.168.0.161)' can't be established.
RSA key fingerprint is a9:12:08:7e:01:9c:40:48:f3:9a:20:22:3e:49:0b:a7.
Are you sure you want to continue connecting (yes/no)? yes
root@192.168.0.161's password: 
unable to copy ssh ID
Sorry, I made a mistake above; the address should be 192.168.0.161.
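For what it's worth, the failing step is just the SSH key copy that pvecm add performs before joining, so you can test the same thing by hand. A minimal check, assuming the standard OpenSSH client tools:

Code:
# verify that password login to the target works at all
ssh root@192.168.0.161 'echo ok'
# copy the root key manually; this is roughly what pvecm does internally
ssh-copy-id root@192.168.0.161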
 
What is the output of

# ls -l /etc/pve

on node 192.168.0.160?
There are three nodes: 192.168.0.160, 192.168.0.161, and 192.168.0.162.
Code:
root@prox-0-160:~# ls -l /etc/pve
total 2
-rw-r----- 1 root www-data  451 Mar  4 06:14 authkey.pub
lrwxr-x--- 1 root www-data    0 Jan  1  1970 local -> nodes/prox-0-160
drwxr-x--- 2 root www-data    0 Mar  4 06:14 nodes
lrwxr-x--- 1 root www-data    0 Jan  1  1970 openvz -> nodes/prox-0-160/openvz
drwx------ 2 root www-data    0 Mar  4 06:14 priv
-rw-r----- 1 root www-data 1533 Mar  4 06:14 pve-root-ca.pem
-rw-r----- 1 root www-data 1679 Mar  4 06:14 pve-www.key
lrwxr-x--- 1 root www-data    0 Jan  1  1970 qemu-server -> nodes/prox-0-160/qemu-server
-rw-r----- 1 root www-data  119 Mar  4 06:14 vzdump.cron
root@prox-0-160:~# pvecm add 192.168.0.161
root@192.168.0.161's password: 
unable to copy ssh ID
Code:
root@prox-0-161:~# ls -l /etc/pve
total 4
-r--r----- 1 root www-data  451 Feb 13 16:18 authkey.pub
-r--r----- 1 root www-data  296 Mar  4 05:52 cluster.conf
-r--r----- 1 root www-data  351 Mar  4 05:52 cluster.conf.old
lr-xr-x--- 1 root www-data    0 Jan  1  1970 local -> nodes/prox-0-161
dr-xr-x--- 2 root www-data    0 Feb 13 16:18 nodes
lr-xr-x--- 1 root www-data    0 Jan  1  1970 openvz -> nodes/prox-0-161/openvz
dr-x------ 2 root www-data    0 Feb 13 16:18 priv
-r--r----- 1 root www-data 1533 Feb 13 16:18 pve-root-ca.pem
-r--r----- 1 root www-data 1679 Feb 13 16:18 pve-www.key
lr-xr-x--- 1 root www-data    0 Jan  1  1970 qemu-server -> nodes/prox-0-161/qemu-server
-r--r----- 1 root www-data   72 Feb 24 13:52 storage.cfg
-r--r----- 1 root www-data  146 Mar  2 05:00 user.cfg
-r--r----- 1 root www-data  119 Feb 13 16:18 vzdump.cron
Code:
root@prox-0-162:~# ls -l /etc/pve
total 4
-r--r----- 1 root www-data  451 Feb 13 16:18 authkey.pub
-r--r----- 1 root www-data  296 Mar  4 05:52 cluster.conf
-r--r----- 1 root www-data  351 Mar  4 05:52 cluster.conf.old
lr-xr-x--- 1 root www-data    0 Jan  1  1970 local -> nodes/prox-0-162
dr-xr-x--- 2 root www-data    0 Feb 13 16:18 nodes
lr-xr-x--- 1 root www-data    0 Jan  1  1970 openvz -> nodes/prox-0-162/openvz
dr-x------ 2 root www-data    0 Feb 13 16:18 priv
-r--r----- 1 root www-data 1533 Feb 13 16:18 pve-root-ca.pem
-r--r----- 1 root www-data 1679 Feb 13 16:18 pve-www.key
lr-xr-x--- 1 root www-data    0 Jan  1  1970 qemu-server -> nodes/prox-0-162/qemu-server
-r--r----- 1 root www-data   72 Feb 24 13:52 storage.cfg
-r--r----- 1 root www-data  146 Mar  2 05:00 user.cfg
-r--r----- 1 root www-data  119 Feb 13 16:18 vzdump.cron
 
The sequence of events was as follows:
1. 192.168.0.161 created the cluster.
2. On 192.168.0.162, pvecm add 192.168.0.161 succeeded.
3. On 192.168.0.160, pvecm add 192.168.0.161 succeeded.
4. 192.168.0.160 was upgraded and rebooted, after which Apache reported errors.
5. On 192.168.0.161, pvecm delnode 192.168.0.160 succeeded.
6. 192.168.0.160 was reinstalled; adding it back to the cluster now fails.
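For completeness, the usual way to cleanly remove a dead node before reinstalling it (a sketch based on the node names in this thread) is:

Code:
# on a surviving cluster node, e.g. 192.168.0.161:
pvecm delnode prox-0-160             # remove the node from the cluster config
rm -rf /etc/pve/nodes/prox-0-160     # drop its leftover node directory
ssh-keygen -R 192.168.0.160          # forget the old host key before the reinstall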
 
You need to make sure that your cluster is 'quorate' - otherwise the filesystem is switched to read-only mode.

# pvecm status

gives you details about votes/quorum.
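If quorum was lost because a node is gone for good, you can temporarily lower the expected vote count so the remaining nodes become quorate again. A sketch; only do this when you are sure the missing node is really down:

Code:
pvecm status       # compare "Expected votes" with "Total votes"
pvecm expected 1   # temporarily accept a single vote as quorum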
 
Starting the cman service on node 192.168.0.161 fails:
Code:
Starting cluster: 
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... Cannot find node name in cluster.conf
Unable to get the configuration
Cannot find node name in cluster.conf
cman_tool: corosync daemon didn't start Check cluster logs for details
[FAILED]
TASK ERROR: command '/etc/init.d/cman start' failed: exit code 1
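cman finds its own entry in cluster.conf by matching the machine's hostname against the clusternode names, so this error usually means the two disagree or the entry is missing. A quick check on the failing node:

Code:
hostname                                       # the name cman looks for
grep clusternode /etc/cluster/cluster.conf     # the names the config contains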
On node 192.168.0.162:
Code:
root@prox-0-162:~# pvecm status
Version: 6.2.0
Config Version: 4
Cluster Name: prox-1
Cluster Id: 28623
Cluster Member: Yes
Cluster Generation: 80
Membership state: Cluster-Member
Nodes: 1
Expected votes: 2
Total votes: 1
Node votes: 1
Quorum: 2 Activity blocked
Active subsystems: 1
Flags: 
Ports Bound: 0  
Node name: prox-0-162
Node ID: 2
Multicast addresses: 239.192.111.63 
Node addresses: 192.168.0.162
Node 192.168.0.160 is the freshly reinstalled system.
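Before retrying pvecm add on the freshly installed node, it may be worth confirming it is really standalone and can reach the existing cluster. A small sanity check (hypothetical, run on 192.168.0.160):

Code:
test -f /etc/pve/cluster.conf && echo "already clustered" || echo "standalone"
ssh root@192.168.0.161 true && echo "ssh to 192.168.0.161 works"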
 

What is the output of

# cat /etc/hostname

and

# cat /etc/pve/cluster.conf

on that node (192.168.0.161)?
 
Code:
root@prox-0-161:~# cat /etc/hostname
prox-0-161
root@prox-0-161:~# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster name="prox-1" config_version="4">


  <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
  </cman>


  <clusternodes>
  
  <clusternode name="prox-0-162" votes="1" nodeid="2"/><clusternode name="prox-0-160" votes="1" nodeid="3"/></clusternodes>


</cluster>
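Note that this file lists prox-0-162 and prox-0-160 but contains no entry for prox-0-161 itself, which matches the cman error above. Judging from the other entries (and nodeid 1 being free), the missing line would look like:

Code:
<clusternode name="prox-0-161" votes="1" nodeid="1"/>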
 
OK, so that was the node where the add command failed (so it is clearly not part of the cluster)?

Please fix quorum on your cluster first, then run the add command with the force flag:

# pvecm add 192.168.0.160 --force
 
I'm sorry, it still fails with the same error: unable to copy ssh ID. Could I PM you the root password so you can help me look into it?
 
Manually modifying the file doesn't stick - after restarting pve-cluster my change is gone:

Code:
root@prox-0-162:/# cat /etc/cluster/cluster.conf 
<?xml version="1.0"?>
<cluster name="prox-1" config_version="4">


  <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
  </cman>


  <clusternodes>
  <clusternode name="prox-0-161" votes="1" nodeid="1"/> 
  <clusternode name="prox-0-162" votes="1" nodeid="2"/><clusternode name="prox-0-160" votes="1" nodeid="3"/></clusternodes>


</cluster>
root@prox-0-162:/# /etc/init.d/pve-cluster start
Starting pve cluster filesystem : pve-cluster[dcdb] notice: wrote new cluster config '/etc/cluster/cluster.conf'
[dcdb] crit: cman_tool version failed with exit code 1#010
.
root@prox-0-162:/# cat /etc/cluster/cluster.conf 
<?xml version="1.0"?>
<cluster name="prox-1" config_version="4">


  <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
  </cman>


  <clusternodes>
  
  <clusternode name="prox-0-162" votes="1" nodeid="2"/><clusternode name="prox-0-160" votes="1" nodeid="3"/></clusternodes>


</cluster>
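/etc/cluster/cluster.conf is generated from the pmxcfs cluster database, so direct edits are overwritten as soon as pve-cluster restarts; changes have to be made through /etc/pve/cluster.conf with a bumped config_version. When the node cannot join the cluster at all, the usual escape hatch is to start pmxcfs in local mode. A sketch, to be used carefully:

Code:
/etc/init.d/pve-cluster stop   # stop the cluster filesystem
pmxcfs -l                      # restart it in local mode so /etc/pve is writable
# now edit /etc/pve/cluster.conf: add the missing clusternode entry
# and increase config_version, then return to normal operation:
killall pmxcfs
/etc/init.d/pve-cluster start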
 
Hi,

I had the same error. In my case it was an SSH problem: I had set sshd on both nodes to allow only key-based logins, so running

pvecm add node1

on node2 failed because node2 could not ssh to node1 (no keys yet). So first make sure that you are able to ssh as root from and to all nodes.
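To rule that out here, check the sshd configuration on the target node and test a root login before running pvecm add. A sketch assuming the stock Debian sshd:

Code:
# on the node being joined to (e.g. 192.168.0.161):
grep -E 'PermitRootLogin|PasswordAuthentication' /etc/ssh/sshd_config
# from the joining node:
ssh root@192.168.0.161 true && echo "root ssh to target OK"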
 
