Error: corosync is already running, is this node already in a cluster?!

bpareek9694

New Member
Sep 13, 2019
Hello,

We're facing an issue when connecting two clusters:

corosync is already running, is this node already in a cluster?!
* local node address: cannot use IP '192.99.144.42', it must be configured exactly once on local node!

Check if node may join a cluster failed!


How can we resolve this issue? Please help.
 
Hi,
it is not possible to merge two clusters. Only nodes which are not currently part of any cluster and which do not hold any VMs can join a cluster.
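As a quick check (assuming a standard PVE 5.x setup), you can see whether a node already considers itself part of a cluster and whether it holds any guests:
Code:
# reports an error / no cluster on a standalone node
pvecm status
# a standalone node should not have this file
ls -l /etc/pve/corosync.conf
# the joining node should not hold any VMs or containers
qm list
pct list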
 
But when I add the node to the cluster, it shows this error:
[screenshot: error_proxmox.png]
I've been waiting for over 30 minutes, but the process stays the same...

Please help me resolve the issue.

Note: Both the node and the cluster are running Debian 9 with Proxmox VE 5.4.
 
Could you provide the logs from both nodes:
Code:
journalctl -u corosync

The problem here might be network-related; please make sure the network is functioning properly.
The relevant part from the documentation for PVE 5.4:
Code:
Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a
multicast-capable network. The network should not be used heavily by other
members; ideally corosync runs on its own network.
*Never* share it with a network where storage traffic goes too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.


omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...


* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.

omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...


Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot get
multicast to work.
 
Cluster:
Code:
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: Starting Corosync Cluster Engine...
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[989]: [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[989]: notice [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[989]: [MAIN ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bindn
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[989]: info [MAIN ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relr
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[989]: [MAIN ] parse error in config: No multicast address specified
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[989]: error [MAIN ] parse error in config: No multicast address specified
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[989]: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1416.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[989]: error [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1416.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Main process exited, code=exited, status=8/n/a
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: Failed to start Corosync Cluster Engine.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Unit entered failed state.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Failed with result 'exit-code'.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: Starting Corosync Cluster Engine...
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: notice [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: [MAIN ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bind
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: info [MAIN ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie rel
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: [MAIN ] parse error in config: No multicast address specified
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: error [MAIN ] parse error in config: No multicast address specified
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1416.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: error [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1416.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Main process exited, code=exited, status=8/n/a
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: Failed to start Corosync Cluster Engine.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Unit entered failed state.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Failed with result 'exit-code'.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: Starting Corosync Cluster Engine...
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1065]: [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1065]: notice [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1065]: [MAIN ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bind
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1065]: info [MAIN ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie rel
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1065]: [MAIN ] parse error in config: No multicast address specified
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1065]: error [MAIN ] parse error in config: No multicast address specified
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1065]: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1416.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1065]: error [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1416.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Main process exited, code=exited, status=8/n/a
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: Failed to start Corosync Cluster Engine.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Unit entered failed state.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Failed with result 'exit-code'.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: Starting Corosync Cluster Engine...
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1101]: [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1101]: notice [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.



Node:
Code:
Oct 16 12:50:53 host01.nexahost.live systemd[1]: Starting Corosync Cluster Engine...
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: [MAIN ] Corosync Cluster Engine ('2.4.4-dirt
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [MAIN ] Corosync Cluster Engine ('2.4
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: [MAIN ] Corosync built-in features: dbus rdm
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: info [MAIN ] Corosync built-in features: d
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: [MAIN ] interface section bindnetaddr is use
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: warning [MAIN ] interface section bindnetaddr
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: [MAIN ] Please migrate config file to nodeli
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: warning [MAIN ] Please migrate config file to
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [TOTEM ] Initializing transport (UDP/I
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [TOTEM ] Initializing transmit/receive
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: [TOTEM ] Initializing transport (UDP/IP Multi
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: [TOTEM ] Initializing transmit/receive securi
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [TOTEM ] The network interface is down
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [SERV ] Service engine loaded: corosy
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: info [QB ] server name: cmap
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [SERV ] Service engine loaded: corosy
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: info [QB ] server name: cfg
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [SERV ] Service engine loaded: corosy
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: info [QB ] server name: cpg
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [SERV ] Service engine loaded: corosy
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [SERV ] Service engine loaded: corosy
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: warning [WD ] Watchdog not enabled by confi
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: warning [WD ] resource load_15min missing a
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: warning [WD ] resource memory_used missing
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: info [WD ] no resources configured.
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [SERV ] Service engine loaded: corosy
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: notice [QUORUM] Using quorum provider corosyn
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: crit [QUORUM] Quorum provider: corosync_vot
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: error [SERV ] Service engine 'corosync_quor
Oct 16 12:50:53 host01.nexahost.live corosync[28324]: error [MAIN ] Corosync Cluster Engine exiti
Oct 16 12:50:53 host01.nexahost.live systemd[1]: corosync.service: Main process exited, code=exited,
Oct 16 12:50:53 host01.nexahost.live systemd[1]: Failed to start Corosync Cluster Engine.
 
Code:
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]:  [MAIN  ] parse error in config: No multicast address specified
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: error   [MAIN  ] parse error in config: No multicast address specified
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]:  [MAIN  ] Corosync Cluster Engine exiting with status 8 at main.c:1416.
Oct 16 12:33:50 cloudadmin.nexahost.live corosync[1027]: error   [MAIN  ] Corosync Cluster Engine exiting with status 8 at main.c:1416.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Main process exited, code=exited, status=8/n/a
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: Failed to start Corosync Cluster Engine.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Unit entered failed state.
Oct 16 12:33:50 cloudadmin.nexahost.live systemd[1]: corosync.service: Failed with result 'exit-code'.

Please make sure that your '/etc/pve/corosync.conf' has the correct multicast address and port set; the parameter names are 'mcastaddr' and 'mcastport'. For updating your corosync config, follow the instructions here.
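For illustration only, the totem section of '/etc/pve/corosync.conf' with multicast configured might look roughly like this; the cluster name, bindnetaddr and multicast group below are placeholders, not values from your setup:
Code:
totem {
  cluster_name: yourcluster
  config_version: 2              # bump this on every edit
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    ringnumber: 0
    bindnetaddr: 192.99.144.0    # network of the cluster interface (placeholder)
    mcastaddr: 239.192.1.1       # multicast group, must match on all nodes (placeholder)
    mcastport: 5405              # corosync default port
  }
}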
 
I just reconfigured both nodes.

Can you explain how we can connect both nodes without any error?

Note: Both are running Debian 9 with Proxmox VE 5.4.
 
I just reconfigured both nodes.

The node you want to add to the cluster should not have a 'corosync.conf'. Is the node already part of a different cluster?
Check by running 'pvecm status' on that node. If it is part of a different cluster, you first need to remove the node from that cluster (see here) before joining a new one.
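For reference, the clean-up described in the linked documentation looks roughly like this; treat it as a sketch, double-check it against the docs before running it, and only ever run it on the node you want to (re)join, never on a member of the existing cluster:
Code:
systemctl stop pve-cluster corosync
pmxcfs -l                      # start pmxcfs in local mode
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster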

Is corosync now running on the cluster nodes? You can check with
Code:
pvecm status
pvecm nodes

If the cluster is up and running, you should be able to add the new node by running
Code:
pvecm add IP-OF-ANY-CLUSTER-NODE
on the new node.
 
I'm facing the same issue again. When I connect the node to the cluster, it shows this error:

[screenshot: 1571287613723.png]

* local node address: cannot use IP '192.99.144.42', it must be configured exactly once on local node!

Check if node may join a cluster failed!

In the UI, the node has not joined any cluster.

[screenshot: 1571289485814.png]

Please explain the issue, and how I can resolve it.
 
If you look carefully, you will see that it is not exactly the same error message anymore; the line
Code:
corosync is already running, is this node already in a cluster?!
is not there anymore.

For the remaining error, is '192.99.144.42' the IP this node should have? Please make sure '/etc/network/interfaces' and '/etc/hosts' have the correct address and that the address really isn't used by any other node.
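For illustration, a minimal '/etc/hosts' on the joining node could look like this (the hostname is taken from the corosync logs further down in this thread; adjust it to your actual FQDN):
Code:
127.0.0.1        localhost.localdomain localhost
192.99.144.42    cloud01.skilledrich.xyz cloud01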
 
Could you post the contents of '/etc/network/interfaces' and the output of 'ip address show' on the node you want to join?
 
You do have the same IP configured twice on the interface ens3: once with /24 and once with /32. That's most probably why 'pvecm' isn't happy.
After you fix your network configuration 'pvecm add' should work.
Here are a few examples of what it might look like, but of course it highly depends on your setup.
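As a rough sketch only (the exact configuration depends on your provider and setup, and the gateway below is a placeholder), a static setup with the address configured exactly once on ens3 could look like this:
Code:
auto lo
iface lo inet loopback

auto ens3
iface ens3 inet static
        address 192.99.144.42
        netmask 255.255.255.0
        gateway 192.99.144.1    # placeholder, use your provider's gateway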
 
@Fabian_E Thanks for the reply.

Another error is shown now:
[screenshot: 1571301097457.png]

In the cluster UI, the node is shown but is not active:
[screenshot: 1571301321859.png]

I also cannot log in to the node UI:
[screenshot: 1571301604387.png]

Please help me.
 
Is corosync now functioning properly on the cluster node?

Is there anything in
Code:
journalctl -u corosync

What do
Code:
pvecm status
pvecm nodes
say?
 
## On the node (journalctl -u corosync):
Code:
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: info    [MAIN  ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bindnow
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]:  [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]:  [MAIN  ] Please migrate config file to nodelist.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: warning [MAIN  ] Please migrate config file to nodelist.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]:  [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]:  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [TOTEM ] The network interface [192.99.144.42] is now up.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]:  [TOTEM ] The network interface [192.99.144.42] is now up.
Oct 17 13:55:55 cloud01.skilledrich.xyz systemd[1]: Started Corosync Cluster Engine.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [SERV  ] Service engine loaded: corosync configuration map access [0]
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: info    [QB    ] server name: cmap
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [SERV  ] Service engine loaded: corosync configuration service [1]
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: info    [QB    ] server name: cfg
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: info    [QB    ] server name: cpg
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [SERV  ] Service engine loaded: corosync profile loading service [4]
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: warning [WD    ] Watchdog not enabled by configuration
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: warning [WD    ] resource load_15min missing a recovery key.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: warning [WD    ] resource memory_used missing a recovery key.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: info    [WD    ] no resources configured.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [SERV  ] Service engine loaded: corosync watchdog service [7]
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [QUORUM] Using quorum provider corosync_votequorum
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: info    [QB    ] server name: votequorum
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: info    [QB    ] server name: quorum
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [TOTEM ] A new membership (192.99.144.42:4) was formed. Members joined: 2
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: warning [CPG   ] downlist left_list: 0 received
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [QUORUM] Members[1]: 2
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]: notice  [MAIN  ] Completed service synchronization, ready to provide service.
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]:  [SERV  ] Service engine loaded: corosync configuration map access [0]
Oct 17 13:55:55 cloud01.skilledrich.xyz corosync[7922]:  [QB    ] server name: cmap

## On the node:
Code:
root@cloud01:~# pvecm status
Quorum information
------------------
Date:             Thu Oct 17 14:54:47 2019
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          2/4
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 192.99.144.42 (local)

## On the node:
Code:
root@cloud01:~# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         2          1 192.99.144.42 (local)
 
## On the cluster (journalctl -u corosync):
Code:
-- Logs begin at Thu 2019-10-17 10:58:03 IST, end at Thu 2019-10-17 14:59:54 IST. --
Oct 17 10:58:10 node01.nexahost.live systemd[1]: Starting Corosync Cluster Engine...
Oct 17 10:58:10 node01.nexahost.live corosync[1162]:  [MAIN  ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [MAIN  ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]:  [MAIN  ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bindnow
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: info    [MAIN  ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro b
Oct 17 10:58:10 node01.nexahost.live corosync[1162]:  [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]:  [MAIN  ] Please migrate config file to nodelist.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: warning [MAIN  ] Please migrate config file to nodelist.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Oct 17 10:58:10 node01.nexahost.live corosync[1162]:  [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 17 10:58:10 node01.nexahost.live corosync[1162]:  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [TOTEM ] The network interface [157.245.141.148] is now up.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]:  [TOTEM ] The network interface [157.245.141.148] is now up.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]:  [SERV  ] Service engine loaded: corosync configuration map access [0]
Oct 17 10:58:10 node01.nexahost.live systemd[1]: Started Corosync Cluster Engine.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [SERV  ] Service engine loaded: corosync configuration map access [0]
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: info    [QB    ] server name: cmap
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [SERV  ] Service engine loaded: corosync configuration service [1]
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: info    [QB    ] server name: cfg
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: info    [QB    ] server name: cpg
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [SERV  ] Service engine loaded: corosync profile loading service [4]
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: warning [WD    ] Watchdog not enabled by configuration
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: warning [WD    ] resource load_15min missing a recovery key.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: warning [WD    ] resource memory_used missing a recovery key.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: info    [WD    ] no resources configured.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [SERV  ] Service engine loaded: corosync watchdog service [7]
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [QUORUM] Using quorum provider corosync_votequorum
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [QUORUM] This node is within the primary component and will provide service.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [QUORUM] Members[0]:
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: info    [QB    ] server name: votequorum
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: info    [QB    ] server name: quorum
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [TOTEM ] A new membership (157.245.141.148:52) was formed. Members joined: 1
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: warning [CPG   ] downlist left_list: 0 received
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [QUORUM] Members[1]: 1
Oct 17 10:58:10 node01.nexahost.live corosync[1162]: notice  [MAIN  ] Completed service synchronization, ready to provide service.
Oct 17 10:58:10 node01.nexahost.live corosync[1162]:  [QB    ] server name: cmap

Code:
root@node01:~# pvecm status
Quorum information
------------------
Date:             Thu Oct 17 15:00:57 2019
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1/64
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 157.245.141.148 (local)

Code:
root@node01:~# pvecm node

Membership information
----------------------
    Nodeid      Votes Name
         1          1 157.245.141.148 (local)
 
I'm guessing that the join request was initiated but not finished off correctly.
Does
Code:
systemctl restart pve-cluster.service corosync.service
on both nodes help?
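If the restart helps, something like this on either node should confirm it (assuming both nodes end up in the same corosync membership):
Code:
systemctl status corosync pve-cluster
pvecm status    # should now list both nodes and report 'Quorate: Yes'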
 
@Fabian_E Thanks for the reply.

## On the cluster:
[screenshot: 1571307752631.png]
It's working fine.

## On the node:
[screenshot: 1571307809485.png]
Also no error.

Can I stop the process, or should I keep waiting for it to finish?
[screenshot: 1571307881888.png]

Please share your suggestions for connecting this node to the cluster.
 
