Hi all,
I have added a node (192.168.1.140) to my Proxmox cluster (192.168.1.120). The initial join via the GUI didn't work properly because an NFS share wasn't accessible from the new node (at least that's what a warning in the log indicated). The node was added to the cluster anyway, and I can access it in the cluster and perform the usual functions.
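In case the NFS warning is relevant: I assume I could check whether the defined storages (including the NFS share) are reachable from the new node with something like this (just a guess at a useful check, not something from the log):
Code:
# on the new node futro-s940: list all defined storages and whether they are active
pvesm status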
Any ideas how to fix these messages, or is it a bug?
These are the errors:
Code:
Dec 25 19:42:28 futro-s940 pmxcfs[796]: [quorum] crit: quorum_initialize failed: 2
Dec 25 19:42:28 futro-s940 pmxcfs[796]: [quorum] crit: can't initialize service
Dec 25 19:42:28 futro-s940 pmxcfs[796]: [confdb] crit: cmap_initialize failed: 2
Dec 25 19:42:28 futro-s940 pmxcfs[796]: [confdb] crit: can't initialize service
Dec 25 19:42:28 futro-s940 pmxcfs[796]: [dcdb] crit: cpg_initialize failed: 2
Dec 25 19:42:28 futro-s940 pmxcfs[796]: [dcdb] crit: can't initialize service
Dec 25 19:42:28 futro-s940 pmxcfs[796]: [status] crit: cpg_initialize failed: 2
Dec 25 19:42:28 futro-s940 pmxcfs[796]: [status] crit: can't initialize service
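Since the messages look like pmxcfs couldn't talk to corosync when it started, I guess the first thing to check on the new node would be whether both services are actually up, roughly like this (my assumption, not taken from the log above):
Code:
# check that corosync and pmxcfs (pve-cluster) are running on futro-s940
systemctl status corosync pve-cluster
# and look at their messages from the current boot
journalctl -b -u corosync -u pve-cluster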
/etc/corosync/corosync.conf
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: futro-s920
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.120
  }
  node {
    name: futro-s940
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.140
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: proxmox-cluster
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
root@futro-s940:~# pvecm status (NODE)
Code:
Cluster information
-------------------
Name: proxmox-cluster
Config Version: 2
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Sat Dec 25 19:55:32 2021
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000002
Ring ID: 1.2f
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.1.120
0x00000002 1 192.168.1.140 (local)
root@futro-s920:~# pvecm status (CLUSTER)
Code:
Cluster information
-------------------
Name: proxmox-cluster
Config Version: 2
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Sat Dec 25 19:56:12 2021
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.2f
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.1.120 (local)
0x00000002 1 192.168.1.140
Both /etc/hosts
Code:
127.0.0.1 localhost.localdomain localhost
192.168.1.120 futro-s920.localdomain futro-s920
192.168.1.140 futro-s940.localdomain futro-s940
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
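If name resolution matters here, I assume it can be verified on both nodes with something like:
Code:
# both names should resolve to the addresses from /etc/hosts
getent hosts futro-s920
getent hosts futro-s940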