So, I was following the directions to separate the cluster network, and things started to seem like they were going alright with the first node.
However, when I rebooted the node, Datacenter view\Cluster says:

Code:
missing ':' after key 'interface' (500)

I then checked /etc/pve/corosync.conf and noticed I had missed a space after bindnetaddr:, which made the syntax invalid. So I edited the file and fixed that issue. Now the corosync.conf files in /etc/pve and /etc/corosync are identical, but the GUI still shows that error. pvecm status returns:
Cannot initialize CMAP service.
PVE 5.4-13 (I'll be upgrading the nodes to PVE 6 shortly though)
/etc/pve/corosync.conf:
Code:
root@PVE-1:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: PVE-1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.9.220.11
    ring1_addr: 172.16.0.11
  }
  node {
    name: PVE-2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.9.220.12
    ring1_addr: 172.16.0.12
  }
  node {
    name: PVE-Witness
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.9.220.49
    ring1_addr: 172.16.0.49
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: PVECluster
  config_version: 11
  interface {
    bindnetaddr: 10.9.220.11
    ringnumber: 0
  }
  interface [
    bindnetaddr: 172.16.0.12
    ringnumber: 1
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
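Since the error complains about the 'interface' key specifically, here is a toy sketch of the key: value / section { } shape the file has to follow. This is my own illustration, not corosync's actual parser: any line where the key is followed by neither ':' nor '{' fails with exactly this kind of message.

```python
import re

def check_conf_syntax(text):
    """Toy check of a corosync.conf-style file (NOT corosync's real parser):
    every non-blank line must be 'key: value', 'section {', or '}'."""
    errors = []
    for lineno, raw in enumerate(text.splitlines(), 1):
        line = raw.strip()
        if not line or line == "}":
            continue
        if re.fullmatch(r"[\w-]+\s*\{", line):
            continue                      # opens a section
        if re.match(r"[\w-]+\s*:", line):
            continue                      # key: value pair
        key = line.split()[0]
        errors.append(f"line {lineno}: missing ':' after key '{key}'")
    return errors

snippet = "totem {\n  interface [\n    ringnumber: 0\n  }\n}\n"
print(check_conf_syntax(snippet))
# → ["line 2: missing ':' after key 'interface'"]
```

Running real corosync against the file is of course the authoritative check; this only shows the shape its parser expects.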
systemctl status pve-cluster:
Code:
root@PVE-1:~# systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset:
Active: active (running) since Mon 2020-01-06 22:48:07 EST; 2min 48s ago
Process: 4480 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, stat
Process: 4312 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Main PID: 4373 (pmxcfs)
Tasks: 6 (limit: 7372)
Memory: 53.6M
CPU: 1.139s
CGroup: /system.slice/pve-cluster.service
└─4373 /usr/bin/pmxcfs
Jan 06 22:50:42 PVE-1 pmxcfs[4373]: [dcdb] crit: cpg_initialize failed: 2
Jan 06 22:50:42 PVE-1 pmxcfs[4373]: [status] crit: cpg_initialize failed: 2
Jan 06 22:50:48 PVE-1 pmxcfs[4373]: [quorum] crit: quorum_initialize failed: 2
Jan 06 22:50:48 PVE-1 pmxcfs[4373]: [confdb] crit: cmap_initialize failed: 2
Jan 06 22:50:48 PVE-1 pmxcfs[4373]: [dcdb] crit: cpg_initialize failed: 2
Jan 06 22:50:48 PVE-1 pmxcfs[4373]: [status] crit: cpg_initialize failed: 2
Jan 06 22:50:54 PVE-1 pmxcfs[4373]: [quorum] crit: quorum_initialize failed: 2
Jan 06 22:50:54 PVE-1 pmxcfs[4373]: [confdb] crit: cmap_initialize failed: 2
Jan 06 22:50:54 PVE-1 pmxcfs[4373]: [dcdb] crit: cpg_initialize failed: 2
Jan 06 22:50:54 PVE-1 pmxcfs[4373]: [status] crit: cpg_initialize failed: 2
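For anyone else reading these logs: the trailing "2" in quorum_initialize failed: 2 is, as far as I can tell, a cs_error_t value from corosync's corotypes.h, where 2 is CS_ERR_LIBRARY, i.e. pmxcfs cannot talk to the corosync daemon at all. A small lookup sketch, listing only the handful of codes I'm reasonably sure of:

```python
# Subset of corosync's cs_error_t values (from corotypes.h); only the
# first few codes, which I'm reasonably confident about, are listed.
CS_ERRORS = {
    1: "CS_OK",
    2: "CS_ERR_LIBRARY",    # library can't reach the corosync daemon
    3: "CS_ERR_VERSION",
    4: "CS_ERR_INIT",
    5: "CS_ERR_TIMEOUT",
    6: "CS_ERR_TRY_AGAIN",
}

def decode(code):
    return CS_ERRORS.get(code, f"unknown cs_error_t {code}")

print(decode(2))   # → CS_ERR_LIBRARY
```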
I'm not sure what's going on. I couldn't have brought the cluster up as-is anyway, because the IP address of node1 is now the network's gateway.
I should also note that I added ring1 during this change; originally I only had ring0 set up.