Proxmox 4.1 - problem when adding a node to a cluster

Mikhail

New Member
Mar 8, 2016
We have two nodes in a DC. Each node has two NICs: an external one (static IP) and an "internal" one (10Gbit, connected directly, without a switch).
I've created a cluster on node1; multicast works between the nodes on the "internal" interface:

From node1 (ip 10.10.10.2)

Code:
root@srv2:/etc/corosync# omping  10.10.10.2 10.10.10.4
10.10.10.4 : joined (S,G) = (*, 232.43.211.234), pinging
10.10.10.4 :   unicast, seq=1, size=69 bytes, dist=0, time=0.255ms
10.10.10.4 : multicast, seq=1, size=69 bytes, dist=0, time=0.298ms
10.10.10.4 :   unicast, seq=2, size=69 bytes, dist=0, time=0.191ms
10.10.10.4 : multicast, seq=2, size=69 bytes, dist=0, time=0.238ms
10.10.10.4 :   unicast, seq=3, size=69 bytes, dist=0, time=0.241ms
10.10.10.4 : multicast, seq=3, size=69 bytes, dist=0, time=0.242ms
10.10.10.4 :   unicast, seq=4, size=69 bytes, dist=0, time=0.246ms
10.10.10.4 : multicast, seq=4, size=69 bytes, dist=0, time=0.279ms
10.10.10.4 :   unicast, seq=5, size=69 bytes, dist=0, time=0.248ms
10.10.10.4 : multicast, seq=5, size=69 bytes, dist=0, time=0.297ms
10.10.10.4 :   unicast, seq=6, size=69 bytes, dist=0, time=0.232ms
10.10.10.4 : multicast, seq=6, size=69 bytes, dist=0, time=0.294ms
^C
10.10.10.4 :   unicast, xmt/rcv/%loss = 6/6/0%, min/avg/max/std-dev = 0.191/0.235/0.255/0.023
10.10.10.4 : multicast, xmt/rcv/%loss = 6/6/0%, min/avg/max/std-dev = 0.238/0.275/0.298/0.028

From node2 (ip 10.10.10.4)
Code:
root@server32:~# omping  10.10.10.4 10.10.10.2
10.10.10.2 : waiting for response msg
10.10.10.2 : waiting for response msg
10.10.10.2 : joined (S,G) = (*, 232.43.211.234), pinging
10.10.10.2 :   unicast, seq=1, size=69 bytes, dist=0, time=0.185ms
10.10.10.2 : multicast, seq=1, size=69 bytes, dist=0, time=0.195ms
10.10.10.2 :   unicast, seq=2, size=69 bytes, dist=0, time=0.225ms
10.10.10.2 : multicast, seq=2, size=69 bytes, dist=0, time=0.228ms
10.10.10.2 : multicast, seq=3, size=69 bytes, dist=0, time=0.275ms
10.10.10.2 :   unicast, seq=3, size=69 bytes, dist=0, time=0.310ms
10.10.10.2 :   unicast, seq=4, size=69 bytes, dist=0, time=0.194ms
10.10.10.2 : multicast, seq=4, size=69 bytes, dist=0, time=0.195ms
10.10.10.2 :   unicast, seq=5, size=69 bytes, dist=0, time=0.199ms
10.10.10.2 : multicast, seq=5, size=69 bytes, dist=0, time=0.207ms
10.10.10.2 : waiting for response msg
10.10.10.2 : server told us to stop
^C
10.10.10.2 :   unicast, xmt/rcv/%loss = 5/5/0%, min/avg/max/std-dev = 0.185/0.223/0.310/0.051
10.10.10.2 : multicast, xmt/rcv/%loss = 5/5/0%, min/avg/max/std-dev = 0.195/0.220/0.275/0.034

When I add node2 to the cluster (pvecm add 10.10.10.2), the process stops at the step "Waiting for quorum...". I think this is because corosync is trying to connect through the external interface:

From node1 (node2 has now been removed from the cluster):
Code:
root@srv2:/etc/corosync# pvecm status

Quorum information
------------------
Date:             Tue Mar  8 15:25:20 2016
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          20
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1 
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 104.x.x.56 (local)



But I need the cluster to communicate over the direct 10Gbit connection. How can I do that?
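
For reference, I assume the addresses corosync recorded can be checked in /etc/corosync/corosync.conf (default path), something like:

Code:
# hypothetical check - look at which addresses the totem ring uses
root@srv2:/etc/corosync# grep -E 'bindnetaddr|ring0_addr' corosync.conf
# if these resolve to 104.x.x.56 instead of 10.10.10.2,
# the ring runs over the external NIC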

Additional information:
Node1 NICs: eth1 10Gbit "internal" 10.10.10.2; eth2 external 104.x.x.56
Node1 /etc/hosts
Code:
127.0.0.1       localhost.localdomain localhost
104.x.x.56   srv2.mydomain.com srv2 pvelocalhost
10.10.10.4      server32.mydomain.com server32
10.10.10.2      srv2.mydomain.com srv2 pvelocalhost

Node2 NICs: eth1 10Gbit "internal" 10.10.10.4; eth2 external 38.x.x.156
Node2 /etc/hosts
Code:
127.0.0.1       localhost.localdomain localhost
38.x.x.156  server32.mydomain.com   server32 pvelocalhost
10.10.10.4 server32.mydomain.com   server32
10.10.10.2 srv2.mydomain.com srv2

Is it possible to bind the cluster to a specific interface? Or please tell me what is going on here.

WBR
 
On both nodes, pvelocalhost in /etc/hosts must be on the 10Gbit NIC (and the hostname as well).
That is, on node1 the hosts file must be:
Code:
127.0.0.1 localhost.localdomain localhost
10.10.10.4 server32.mydomain.com server32
10.10.10.2 srv2.mydomain.com srv2 pvelocalhost
yes?
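
And on node2, presumably the mirror of it (pvelocalhost on its own 10Gbit address):

Code:
127.0.0.1 localhost.localdomain localhost
10.10.10.4 server32.mydomain.com server32 pvelocalhost
10.10.10.2 srv2.mydomain.com srv2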

Then I have two questions:

1. How do I change node1's IP in the corosync/cluster config?
2. Will the Proxmox web interface still work on the external IP?

Thanks for the quick response, Udo.
 
That is, on node1 the hosts file must be:
127.0.0.1 localhost.localdomain localhost
10.10.10.4 server32.mydomain.com server32
10.10.10.2 srv2.mydomain.com srv2 pvelocalhost
yes?

Then I have two questions:

1. How do I change node1's IP in the corosync/cluster config?
Remove the node from the cluster and join it again (with force?) - see the sketch below.
2. Will the Proxmox web interface still work on the external IP?
PVE listens on all interfaces (0.0.0.0:8006). If you don't like this, use a firewall (external or inside PVE) - example below.
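
For 1., roughly (untested, adapt the node name to your setup):

Code:
# on node1 (srv2): remove node2 from the cluster
root@srv2:~# pvecm delnode server32
# fix /etc/hosts on both nodes as above, then rejoin from node2:
root@server32:~# pvecm add 10.10.10.2 --force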
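
For 2., just a sketch with iptables, assuming your internal net is 10.10.10.0/24:

Code:
# allow the web GUI (port 8006) only from the internal network
iptables -A INPUT -s 10.10.10.0/24 -p tcp --dport 8006 -j ACCEPT
iptables -A INPUT -p tcp --dport 8006 -j DROP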

Udo
 