Job for corosync.service failed when adding node

Amori

Hello,
I have 3 new servers with the latest Proxmox:

Code:
proxmox-ve: 4.2-56 (running kernel: 4.4.13-1-pve)
pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
pve-kernel-4.4.13-1-pve: 4.4.13-56
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-83
pve-firmware: 1.1-8
libpve-common-perl: 4.0-70
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-55
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-70
pve-firewall: 2.0-29
pve-ha-manager: 1.0-32
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1

I created a cluster and was able to add the second node. Now I am getting errors when adding the third one:

Code:
root@ss3:~# pvecm add xx.xx.34.126 --force
can't create shared ssh key database '/etc/pve/priv/authorized_keys'
copy corosync auth key
stopping pve-cluster service
backup old database
Job for corosync.service failed. See 'systemctl status corosync.service' and 'journalctl -xn' for details.
waiting for quorum...

I tried removing the node and adding it again with --force, but it is not working.
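
For reference, what I ran was roughly the following (the node name is a placeholder for the actual one):

Code:
# on a node that is still part of the cluster
pvecm delnode <nodename>
# then again on the node I am trying to add
pvecm add xx.xx.34.126 --force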


Any advice?
 
I would look at the output from "pvecm status" first.

Markus

Thanks for the reply. The output is:

Code:
root@sd-98080:/home/# pvecm status
Cannot initialize CMAP service
root@sd-98080:/home/# ^C
root@sd-98080:/home/# ^C
root@sd-98080:/home/# systemctl status corosync.service
● corosync.service - Corosync Cluster Engine
   Loaded: loaded (/lib/systemd/system/corosync.service; enabled)
   Active: failed (Result: exit-code) since Sun 2016-07-03 13:30:58 CEST; 1h 35min ago
  Process: 1256 ExecStart=/usr/share/corosync/corosync start (code=exited, status=1/FAILURE)

Jul 03 13:29:58 sd-98080 corosync[1263]: [QB    ] server name: cpg
Jul 03 13:29:58 sd-98080 corosync[1263]: [SERV  ] Service engine loaded: corosync profile lo...[4]
Jul 03 13:29:58 sd-98080 corosync[1263]: [QUORUM] Using quorum provider corosync_votequorum
Jul 03 13:29:58 sd-98080 corosync[1263]: [QUORUM] Quorum provider: corosync_votequorum faile...ze.
Jul 03 13:29:58 sd-98080 corosync[1263]: [SERV  ] Service engine 'corosync_quorum' failed to...d!'
Jul 03 13:29:58 sd-98080 corosync[1263]: [MAIN  ] Corosync Cluster Engine exiting with statu...56.
Jul 03 13:30:58 sd-98080 corosync[1256]: Starting Corosync Cluster Engine (corosync): [FAILED]
Jul 03 13:30:58 sd-98080 systemd[1]: corosync.service: control process exited, code=exited ...us=1
Jul 03 13:30:58 sd-98080 systemd[1]: Failed to start Corosync Cluster Engine.
Jul 03 13:30:58 sd-98080 systemd[1]: Unit corosync.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.


On the cluster node
Code:
Quorum information
------------------
Date:             Sun Jul  3 15:32:58 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          40
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 xx.xx.34.126 (local)
0x00000002          1 xx.xx.34.162
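
It might also be worth checking, assuming the default Proxmox paths, whether the corosync configuration actually reached the new node and matches the cluster, and whether multicast works between the hosts (omping has to be installed for the last step; the hostnames are placeholders):

Code:
# on the new node
cat /etc/corosync/corosync.conf
# on a working cluster member, for comparison
cat /etc/pve/corosync.conf
# optional multicast check, run on every node at the same time
omping -c 600 -i 1 -q node1 node2 node3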
 
I have the same problem ... any solution?
[Attached screenshot: proxmox-42_01.png]
 
