Problems configuring a two-node cluster

dgorobets

New Member
Feb 25, 2015
Hi,
I'm setting up a cluster with only two nodes and running into problems: each node sees only itself.

Cluster.conf:
Code:
<?xml version="1.0"?>
<cluster name="pve-cluster" config_version="14">
  <cman two_node="1" expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu">
  </cman>
  <clusternodes>
    <clusternode name="sdext65" votes="1" nodeid="1"/>
    <clusternode name="sdext67" votes="1" nodeid="2"/>
  </clusternodes>
</cluster>
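
As far as I understand, with transport="udpu" the clusternode names above have to resolve, on both machines, to addresses the other node can actually reach. A minimal /etc/hosts sketch of what I mean (IPs masked the same way as in the rest of this post):
Code:
# /etc/hosts on both nodes: map the cluster node names to their public IPs
1.2.3.4    sdext65
4.3.2.1    sdext67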

The physical servers are located in different networks and each has a public IP, which is why I don't use multicast and use udpu as the transport instead.
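
Since everything here depends on unicast UDP getting through between the public IPs, a quick sanity check on each node would be something like this (corosync 1.x defaults, eth0 is just an example interface):
Code:
# ring status as corosync sees it
corosync-cfgtool -s
# watch whether cluster traffic from the peer arrives at all (default corosync ports)
tcpdump -ni eth0 udp port 5404 or udp port 5405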

The servers have iptables rules allowing access to each other:
/sbin/iptables -A INPUT -s 1.2.3.4/32 -p tcp --dport 0:65535 -j ACCEPT
/sbin/iptables -A INPUT -s 1.2.3.4/32 -p icmp --icmp-type echo-request -j ACCEPT
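
To check whether the firewall itself is dropping anything, the per-rule packet counters can be inspected (just a sketch; rule order and default policies will differ):
Code:
# growing counters on a DROP/REJECT rule (or a restrictive default policy) would point at the firewall
iptables -L INPUT -v -n --line-numbers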


The Proxmox versions on both nodes are the same:
proxmox-ve-2.6.32: 3.3-147 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-1 (running version: 3.4-1/3f2d890e)
pve-kernel-2.6.32-37-pve: 2.6.32-147
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.3-20
pve-firmware: 1.1-3
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-31
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-12
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

"pvecm nodes" shows on first node:
Node Sts Inc Joined Name
1 M 40 2015-02-25 09:30:40 sdext65
2 X 0 sdext67
and second:
Node Sts Inc Joined Name
1 X 0 sdext65
2 M 20 2015-02-25 09:29:50 sdext67


"cat /etc/pve/.members" on first nodes:
Code:
{
"nodename": "sdext65",
"version": 3,
"cluster": { "name": "pve-cluster", "version": 14, "nodes": 2, "quorate": 1 },
"nodelist": {
  "sdext65": { "id": 1, "online": 1, "ip": "1.2.3.4"},
  "sdext67": { "id": 2, "online": 0}
  }
}
and second:
Code:
{
"nodename": "sdext67",
"version": 5,
"cluster": { "name": "pve-cluster", "version": 14, "nodes": 2, "quorate": 1 },
"nodelist": {
  "sdext65": { "id": 1, "online": 0},
  "sdext67": { "id": 2, "online": 1, "ip": "4.3.2.1"}
  }
}

"pvecm status" on first node shows:
Version: 6.2.0Config Version: 14
Cluster Name: pve-cluster
Cluster Id: 23476
Cluster Member: Yes
Cluster Generation: 40
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags: 2node
Ports Bound: 0
Node name: sdext65
Node ID: 1
Multicast addresses: 255.255.255.255
Node addresses: 1.2.3.4
and second:
Version: 6.2.0
Config Version: 14
Cluster Name: pve-cluster
Cluster Id: 23476
Cluster Member: Yes
Cluster Generation: 20
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags: 2node
Ports Bound: 0
Node name: sdext67
Node ID: 2
Multicast addresses: 255.255.255.255
Node addresses: 4.3.2.1



Does anyone know what the problem is?
 
Last edited:
I added one more rule allowing UDP:
/sbin/iptables -A INPUT -s 1.2.3.4/32 -p udp --dport 0:65535 -j ACCEPT

Now "pvecm nodes" shows both nodes on each server, but in GUI on first node - second node is still red. The same in second node.