Hi, I'm having an issue setting up a cluster using the latest Proxmox 4 (clean install, not an upgrade from Proxmox 3).
I'm currently trying to establish a cluster between two nodes: node1 and node2.
Multicast is working, as confirmed by omping.
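For reference, the omping test was run on both nodes at the same time, roughly like this (the exact count/interval options are from memory and may have differed slightly):
$ omping -c 600 -i 1 -q 10.10.10.1 10.10.10.2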
This is /etc/hosts from node1:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.10.10.1 node1.domain.tld node1 pvelocalhost
10.10.10.2 node2.domain.tld node2
10.10.10.3 node3.domain.tld node3
This is /etc/hosts from node2:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.10.10.1 node1.domain.tld node1
10.10.10.2 node2.domain.tld node2 pvelocalhost
10.10.10.3 node3.domain.tld node3
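(For completeness, name resolution can be double-checked on each node with something like:
$ getent hosts node1 node2 node3
which should return the 10.10.10.x addresses listed above.)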
This is what I entered on node1:
$ pvecm create SHIELD
This is what I entered on node2:
$ pvecm add 10.10.10.1
copy corosync auth key
stopping pve-cluster service
backup old database
Job for corosync.service failed. See 'systemctl status corosync.service' and 'journalctl -xn' for details.
waiting for quorum...
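To dig into that failure on node2 I checked the service as the message suggests, i.e. something like:
$ systemctl status corosync.service
$ journalctl -xn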
I got a strange feeling when I saw this line in the log file on node2:
corosync[14980]: [TOTEM ] The network interface is down.
(the network interface is, of course, up)
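A quick way to double-check that on node2 is something like:
$ ip addr show
and the 10.10.10.2 address and link state look fine there.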
And this is /etc/corosync/corosync.conf on node2:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    nodeid: 1
    quorum_votes: 1
    ring0_addr: node1
  }
  node {
    nodeid: 2
    quorum_votes: 1
    ring0_addr: node2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: SHIELD
  config_version: 2
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
}
IMHO, bindnetaddr appears to be wrong, as it should be 10.10.10.2 on this node (node2).
If I manually edit bindnetaddr in /etc/corosync/corosync.conf and then manually start corosync, the cluster establishes successfully, but /etc/corosync/corosync.conf is reverted to the wrong IP, so the change doesn't survive a reboot.
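For clarity, the manual workaround on node2 boils down to something like this (the exact editor and check commands here are just an illustration):
$ editor /etc/corosync/corosync.conf # change bindnetaddr to 10.10.10.2
$ systemctl start corosync
$ pvecm status # then reports quorum with both nodes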
Have I done something wrong during cluster setup, or did I hit a glitch?