[SOLVED] Fresh Cluster installation issues

Jul 9, 2021
We installed 3 identical dedicated servers (@Hetzner) with 3 networks:

Network A (1 GBit): external/public IPs
Network B (10 GBit): management, 192.168.51.1 - 192.168.51.3 /24
Network C (10 GBit): Ceph, 192.168.52.1 - 192.168.52.3 /24

All 3 servers have been installed with Proxmox VE 7.

On node 1 we created a cluster and then added node 2 and node 3 manually via the management IPs. The automatic join information always contained the external IP, which is why we joined them manually with the correct IPs.
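For reference, we created and joined the cluster roughly like this (cluster name redacted; the join target is the management IP of node 1):

Code:
# on node 1
pvecm create xxxxxxxx --link0 192.168.51.1

# on node 2 (node 3 analogous with 192.168.51.3)
pvecm add 192.168.51.1 --link0 192.168.51.2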

On the UI side, all nodes appear on node 1 and all of them are green. But as soon as I select node 2 or node 3 I get:

Connection error 595: No route to host

The same happens if I log on to the web UI of node 2 or node 3: selecting any of the other nodes gives "Connection error 595: No route to host".

What are we missing?

Output of pvecm status on Node 1:
Code:
Cluster information
-------------------
Name:             xxxxxxxx
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Fri Jul  9 09:27:09 2021
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1.23
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.51.1 (local)
0x00000002          1 192.168.51.2
0x00000003          1 192.168.51.3



Output of pvecm status on Node 2:
Code:
Cluster information
-------------------
Name:             xxxxxxxxx
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Fri Jul  9 09:34:21 2021
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000002
Ring ID:          1.23
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.51.1
0x00000002          1 192.168.51.2 (local)
0x00000003          1 192.168.51.3


Output of pvecm status on Node 3:
Code:
Cluster information
-------------------
Name:             xxxxxx
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Fri Jul  9 09:35:06 2021
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000003
Ring ID:          1.23
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.51.1
0x00000002          1 192.168.51.2
0x00000003          1 192.168.51.3 (local)


Corosync Status Node 1
Code:
Jul 08 23:52:43 node1 corosync[1531]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 08 23:52:43 node1 corosync[1531]:   [KNET  ] host: host: 3 has no active links
Jul 08 23:52:47 node1 corosync[1531]:   [KNET  ] rx: host: 3 link: 0 is up
Jul 08 23:52:47 node1 corosync[1531]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 08 23:52:47 node1 corosync[1531]:   [KNET  ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 1397
Jul 08 23:52:47 node1 corosync[1531]:   [QUORUM] Sync members[3]: 1 2 3
Jul 08 23:52:47 node1 corosync[1531]:   [QUORUM] Sync joined[1]: 3
Jul 08 23:52:47 node1 corosync[1531]:   [TOTEM ] A new membership (1.23) was formed. Members joined: 3
Jul 08 23:52:47 node1 corosync[1531]:   [QUORUM] Members[3]: 1 2 3
Jul 08 23:52:47 node1 corosync[1531]:   [MAIN  ] Completed service synchronization, ready to provide service.

Corosync Status Node 2
Code:
Jul 08 23:52:43 node2 corosync[1591]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 08 23:52:43 node2 corosync[1591]:   [KNET  ] host: host: 3 has no active links
Jul 08 23:52:47 node2 corosync[1591]:   [KNET  ] rx: host: 3 link: 0 is up
Jul 08 23:52:47 node2 corosync[1591]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 08 23:52:47 node2 corosync[1591]:   [KNET  ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 1397
Jul 08 23:52:47 node2 corosync[1591]:   [QUORUM] Sync members[3]: 1 2 3
Jul 08 23:52:47 node2 corosync[1591]:   [QUORUM] Sync joined[1]: 3
Jul 08 23:52:47 node2 corosync[1591]:   [TOTEM ] A new membership (1.23) was formed. Members joined: 3
Jul 08 23:52:47 node2 corosync[1591]:   [QUORUM] Members[3]: 1 2 3
Jul 08 23:52:47 node2 corosync[1591]:   [MAIN  ] Completed service synchronization, ready to provide service.

Corosync Status Node 3
Code:
Jul 08 23:52:47 node3 corosync[45901]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 08 23:52:47 node3 corosync[45901]:   [KNET  ] pmtud: PMTUD link change for host: 2 link: 0 from 469 to 1397
Jul 08 23:52:47 node3 corosync[45901]:   [KNET  ] pmtud: PMTUD link change for host: 1 link: 0 from 469 to 1397
Jul 08 23:52:47 node3 corosync[45901]:   [KNET  ] pmtud: Global data MTU changed to: 1397
Jul 08 23:52:47 node3 corosync[45901]:   [QUORUM] Sync members[3]: 1 2 3
Jul 08 23:52:47 node3 corosync[45901]:   [QUORUM] Sync joined[2]: 1 2
Jul 08 23:52:47 node3 corosync[45901]:   [TOTEM ] A new membership (1.23) was formed. Members joined: 1 2
Jul 08 23:52:47 node3 corosync[45901]:   [QUORUM] This node is within the primary component and will provide service.
Jul 08 23:52:47 node3 corosync[45901]:   [QUORUM] Members[3]: 1 2 3
Jul 08 23:52:47 node3 corosync[45901]:   [MAIN  ] Completed service synchronization, ready to provide service.
 
Have you restarted the servers already? If not, then systemctl restart pveproxy might be worth a try.

Could you please also post the following?
Code:
cat /etc/network/interfaces
cat /etc/hosts
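
It might also be worth testing whether the nodes can actually reach each other on port 8006 (pveproxy), both on the management and on the public addresses, e.g. from node 1:

Code:
# management IP of node 2
curl -k https://192.168.51.2:8006

# public IP of node 2
curl -k https://162.xx.xx.101:8006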
 
Hi Dominic,

All servers have been rebooted already, which did not help.

node1: /etc/network/interfaces
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto enp35s0
iface enp35s0 inet manual

auto enp1s0f0
iface enp1s0f0 inet static
    address 192.168.51.1/24
#mgm

auto enp1s0f1
iface enp1s0f1 inet static
    address 192.168.52.1/24
#ceph

auto vmbr0
iface vmbr0 inet static
    address 162.xx.xx.100/26
    gateway 162.xx.xx.1
    bridge-ports enp35s0
    bridge-stp off
    bridge-fd 0
    hwaddress ether a8:a1:59:0f:09:85
node1: /etc/hosts
Code:
127.0.0.1 localhost.localdomain localhost
162.xx.xx.100 node1.hostname.de node1

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

node2: /etc/network/interfaces
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto enp41s0
iface enp41s0 inet manual

auto enp33s0f0
iface enp33s0f0 inet static
    address 192.168.51.2/24
#mgm

auto enp33s0f1
iface enp33s0f1 inet static
    address 192.168.52.2/24
#ceph

auto vmbr0
iface vmbr0 inet static
    address 162.xx.xx.101/26
    gateway 162.xx.xx.1
    bridge-ports enp41s0
    bridge-stp off
    bridge-fd 0
    hwaddress ether d0:50:99:f9:1e:9a

node2: /etc/hosts
Code:
127.0.0.1 localhost.localdomain localhost
162.xx.xx.101 node2.hostname.de node2

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

node3: /etc/network/interfaces
Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto enp41s0
iface enp41s0 inet manual

auto enp33s0f0
iface enp33s0f0 inet static
    address 192.168.51.3/24
#mgm

auto enp33s0f1
iface enp33s0f1 inet static
    address 192.168.52.3/24
#ceph

auto vmbr0
iface vmbr0 inet static
    address 162.xx.xx.102/26
    gateway 162.xx.xx.1
    bridge-ports enp41s0
    bridge-stp off
    bridge-fd 0
    hwaddress ether a8:a1:59:8b:25:b7

node3: /etc/hosts
Code:
127.0.0.1 localhost.localdomain localhost
162.xx.xx.102 node3.hostname.de node3

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
 
I just checked /etc/pve/.members:

Why is the public IP used?

Code:
{
"nodename": "node2",
"version": 5,
"cluster": { "name": "outbank", "version": 3, "nodes": 3, "quorate": 0 },
"nodelist": {
  "node1": { "id": 1, "online": 1, "ip": "162.xx.xx.100"},
  "node2": { "id": 2, "online": 1, "ip": "162.xx.xx.101"},
  "node3": { "id": 3, "online": 1, "ip": "162.xx.xx.102"}
  }
}
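
If I read this right, those addresses come from name resolution: pmxcfs fills .members by resolving each node name, and pveproxy then uses exactly these IPs to reach the other nodes' web UIs. Our /etc/hosts files map every hostname to its public address, and port 8006 is apparently not reachable between the public IPs, which would explain the 595 errors. A possible fix (a sketch for node1, analogous on the other nodes) would be to point the hostname at the management IP instead and restart the affected services:

Code:
# /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.51.1 node1.hostname.de node1

# afterwards, refresh the resolved addresses
systemctl restart pve-cluster pveproxy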
 
