[SOLVED] Cannot link nodes in Cluster

Kornelius777

Dear all,

yesterday I tried several times to link two brand-new nodes into a cluster.
However, the cluster node always became unresponsive: pveproxy failed and couldn't be restarted afterwards, and even reboots didn't help.
At least five times I re-installed Proxmox from scratch on both nodes.

What did I do?

On both nodes:
Installed proxmox
Added the No-Subscription Repo (disabled ceph and pro repos)
Did a dist-upgrade
rebooted
Exchanged ssh keys both ways
Created a Cluster on Node 1
Joined the Cluster with Node 2

...and from that very moment on, Node 1 couldn't be used any more. pveproxy just died, pvecm updatecerts -f didn't help, and nothing else I tried worked either.
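For reference, the CLI equivalent of those steps was roughly this (cluster name and repo file path are just examples; I did parts of it via the GUI):

Code:
# on both nodes: add the no-subscription repo and bring everything up to date
# (the enterprise and Ceph enterprise entries were disabled separately)
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt dist-upgrade
reboot

# on node 1: create the cluster
pvecm create mycluster

# on node 2: join it, pointing at node 1
pvecm add <IP-or-name-of-node-1>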

Is there any step which I forgot?
Anything that is required to get the nodes to communicate?

Is anybody able to confirm that there might be a problem?

I'm curious and eager for help.

Kind regards,
 
Hi,

Did you check the syslog during the cluster join? You can use `journalctl -f` to watch the syslog for any interesting errors or warnings!
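For example, to watch only the cluster-related units while the join runs (unit names as on a standard Proxmox VE install):

Code:
# follow corosync, pmxcfs (pve-cluster) and the API services live
journalctl -f -u corosync -u pve-cluster -u pveproxy -u pvedaemon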

Exchanged ssh keys both ways
Why did you change the SSH keys?

Do the Proxmox VE servers have the same version?
 
I didn't change the SSH keys. I EX-changed the keys, so that passwordless logins are possible (authorized_keys...).
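Concretely, something like this on each node, with the other node's name filled in (the key path may differ on your install):

Code:
# on node A, push root's public key to node B (and vice versa on node B)
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<other-node>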

And yes, I checked the journal. However, nothing in it made clear why the join process was interrupted (every single time!) or why pveproxy suddenly died.
Maybe you could try to reproduce this behaviour? Just take two brand-new nodes and join them into a cluster.
Does it work on your side?

Kind regards,
 
Does it work on your side?
Sure - a few of the users here are using a cluster ;-)

A pitfall, but just a random guess: make sure that /etc/hosts has identical content on all nodes before trying to do anything. "ping pve1" and "ping pve2" must succeed on both nodes. Yes, four(!) ping test calls.
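A minimal sketch of what I mean, with made-up names and addresses - both entries present in both files:

Code:
# /etc/hosts -- identical on pve1 and pve2
127.0.0.1   localhost.localdomain localhost
192.0.2.11  pve1.local pve1
192.0.2.12  pve2.local pve2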

Use static addresses. No surprises via DHCP, please. Post /etc/network/interfaces from both nodes if in doubt.

You probably know the documentation? https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pvecm
 
Sure - a few of the users here are using a cluster ;-)
Well... THAT is beyond doubt.
This isn't my first proxmox setup, either...

Question is: Can somebody please confirm that it is possible to create a cluster with the LATEST upgrades?

Kind regards,
 
Still not able to join the cluster.
On the Cluster node, I see:
Code:
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [dcdb] notice: members: 1/3752069, 2/3119651
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [dcdb] notice: starting data syncronisation
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [status] notice: members: 1/3752069, 2/3119651
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [status] notice: starting data syncronisation
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [dcdb] notice: received sync request (epoch 1/3752069/00000002)
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [status] notice: received sync request (epoch 1/3752069/00000002)
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [dcdb] notice: received all states
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [dcdb] notice: leader is 1/3752069
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [dcdb] notice: synced members: 1/3752069
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [dcdb] notice: start sending inode updates
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [dcdb] notice: sent all (64) updates
Oct 31 10:34:04 proxmox-2 pmxcfs[3752069]: [dcdb] notice: all data is up to date
Oct 31 10:34:05 proxmox-2 pmxcfs[3752069]: [status] notice: received all states
Oct 31 10:34:05 proxmox-2 pmxcfs[3752069]: [status] notice: all data is up to date
Oct 31 10:34:06 proxmox-2 corosync[3752078]:   [TOTEM ] Retransmit List: 17 18 19
Oct 31 10:34:06 proxmox-2 corosync[3752078]:   [TOTEM ] Retransmit List: 17 18 19 1d
Oct 31 10:34:06 proxmox-2 corosync[3752078]:   [TOTEM ] Retransmit List: 17 18 19 1d
Oct 31 10:34:07 proxmox-2 corosync[3752078]:   [TOTEM ] Retransmit List: 17 18 19 1d
Oct 31 10:34:07 proxmox-2 corosync[3752078]:   [TOTEM ] Retransmit List: 17 18 19 1d
And from this moment on, corosync goes wild.

The joining node just collapses; pveproxy doesn't respond any more.

Any help available?
 
Any help available?
Let's start from the beginning, even if some of it seems redundant or obvious. Ideally from the state before trying to join, if possible (what is the current state?):
  • just confirm both nodes are on the same physical network; how are they physically connected?
  • give us the output of pveversion ; uname -a of both computers
  • give us the output of cat /etc/hosts of both computers
  • give us the output of cat /etc/network/interfaces of both computers
  • give us the output of ip address show; ip route show of both computers
And if you have already tried to join, additionally:
  • ls -Al /etc/pve/corosync.conf ; ls -Al /etc/corosync/
  • pvecm status ; corosync-cfgtool -s ; corosync-cfgtool -n

Try to "ssh root@thenameoftheothernode" from both sides. On the very first attempt you'll get the known fingerprint hint. On the second call it must just work. With the host's name, not the IP address. In both directions.

Apart from the GUI, you can join the second node to an already created cluster with pvecm add 192.0.2.1, using the IP address of the node where the cluster was instantiated. (For some unclear reason my personal notes tell me to use an IP address at this step, not the hostname - but it should work with hostnames too...)
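In other words, with the documentation address standing in for your cluster node:

Code:
# on the joining node
pvecm add 192.0.2.1     # IP of the node that already carries the cluster
pvecm status            # check quorum and membership afterwards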


Creating a cluster is no magic, but there are some details that must fit from the beginning, e.g. the hosts' names must be known to each other (and stay static - do not change them after the fact).
 
I just finished recreating the joining node from scratch, since it had simply collapsed.

Both nodes are on the same VLAN. Testing Multicast with iperf went through without any problem.
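For comparison, a check with omping (the tool usually suggested for corosync network tests) would look roughly like this, started simultaneously on both nodes:

Code:
# run on both nodes at the same time; loss and latency should stay near zero
omping -c 600 -i 1 -q 10.10.1.71 10.10.1.72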

Cluster Node:
Code:
pve-manager/8.2.7/3e0176e6bb2ade3b (running kernel: 6.8.12-2-pve)
Linux proxmox-2 6.8.12-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-2 (2024-09-05T10:03Z) x86_64 GNU/Linux

Joining node:
Code:
pve-manager/8.2.7/3e0176e6bb2ade3b (running kernel: 6.8.12-2-pve)
Linux proxmox-1 6.8.12-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-2 (2024-09-05T10:03Z) x86_64 GNU/Linux

Cluster Node:
Code:
root@proxmox-2:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.10.1.72 proxmox-2.local proxmox-2

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Joining node:
Code:
root@proxmox-1:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.10.1.71 proxmox-1.local proxmox-1

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Cluster Node:
Code:
root@proxmox-2:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.10.1.72/24
    gateway 10.10.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0

iface vmbr0 inet6 static
    address fdc0:ffee:dad:1::72/64
    gateway fdc0:ffee:dad:1::1

source /etc/network/interfaces.d/*

Joining node:
Code:
root@proxmox-1:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.10.1.71/24
    gateway 10.10.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

iface vmbr0 inet6 static
    address fdc0:ffee:dad:1::71/64
    gateway fdc0:ffee:dad:1::1

source /etc/network/interfaces.d/*

Cluster node:
Code:
root@proxmox-2:~# ip address show; ip route show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.72/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fdc0:ffee:dad:1::72/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::fe3f:dbff:fe11:6935/64 scope link
       valid_lft forever preferred_lft forever
4: veth403i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr403i0 state UP group default qlen 1000
    link/ether fe:a3:ec:bb:31:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
5: vmbr0v4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
6: enp1s0.4@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v4 state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
7: fwbr403i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 36:a1:dd:5e:8f:1b brd ff:ff:ff:ff:ff:ff
8: fwpr403p0@fwln403i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v4 state UP group default qlen 1000
    link/ether 8a:eb:b5:a4:14:05 brd ff:ff:ff:ff:ff:ff
9: fwln403i0@fwpr403p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr403i0 state UP group default qlen 1000
    link/ether 36:a1:dd:5e:8f:1b brd ff:ff:ff:ff:ff:ff
10: veth404i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v4 state UP group default qlen 1000
    link/ether fe:22:10:eb:00:2e brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: veth401i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v4 state UP group default qlen 1000
    link/ether fe:ed:8d:0b:38:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 2
12: veth117i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr117i0 state UP group default qlen 1000
    link/ether fe:bb:b1:33:26:4d brd ff:ff:ff:ff:ff:ff link-netnsid 3
13: fwbr117i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 96:2b:91:e4:8b:85 brd ff:ff:ff:ff:ff:ff
14: fwpr117p0@fwln117i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 72:36:4f:45:6c:2b brd ff:ff:ff:ff:ff:ff
15: fwln117i0@fwpr117p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr117i0 state UP group default qlen 1000
    link/ether 96:2b:91:e4:8b:85 brd ff:ff:ff:ff:ff:ff
16: veth501i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr501i0 state UP group default qlen 1000
    link/ether fe:62:23:02:57:1a brd ff:ff:ff:ff:ff:ff link-netnsid 4
17: vmbr0v5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
18: enp1s0.5@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v5 state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
19: fwbr501i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6e:de:03:e3:a2:40 brd ff:ff:ff:ff:ff:ff
20: fwpr501p0@fwln501i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v5 state UP group default qlen 1000
    link/ether da:ec:7f:86:ab:1a brd ff:ff:ff:ff:ff:ff
21: fwln501i0@fwpr501p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr501i0 state UP group default qlen 1000
    link/ether 6e:de:03:e3:a2:40 brd ff:ff:ff:ff:ff:ff
22: veth910i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr910i0 state UP group default qlen 1000
    link/ether fe:ec:17:b0:36:b1 brd ff:ff:ff:ff:ff:ff link-netnsid 5
23: vmbr0v99: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
24: enp1s0.99@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v99 state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
25: fwbr910i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2a:80:22:bc:03:04 brd ff:ff:ff:ff:ff:ff
26: fwpr910p0@fwln910i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v99 state UP group default qlen 1000
    link/ether 46:83:76:ca:da:84 brd ff:ff:ff:ff:ff:ff
27: fwln910i0@fwpr910p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr910i0 state UP group default qlen 1000
    link/ether 2a:80:22:bc:03:04 brd ff:ff:ff:ff:ff:ff
28: tap930i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0v7 state UNKNOWN group default qlen 1000
    link/ether 12:55:72:7e:32:c3 brd ff:ff:ff:ff:ff:ff
29: vmbr0v7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
30: enp1s0.7@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v7 state UP group default qlen 1000
    link/ether fc:3f:db:11:69:35 brd ff:ff:ff:ff:ff:ff
31: veth109i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr109i0 state UP group default qlen 1000
    link/ether fe:d0:b0:47:5f:31 brd ff:ff:ff:ff:ff:ff link-netnsid 6
32: fwbr109i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:1b:35:d4:78:f2 brd ff:ff:ff:ff:ff:ff
33: fwpr109p0@fwln109i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 72:bb:22:19:96:8d brd ff:ff:ff:ff:ff:ff
34: fwln109i0@fwpr109p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr109i0 state UP group default qlen 1000
    link/ether 1a:1b:35:d4:78:f2 brd ff:ff:ff:ff:ff:ff
default via 10.10.1.1 dev vmbr0 proto kernel onlink
10.10.1.0/24 dev vmbr0 proto kernel scope link src 10.10.1.72

Joining node:
Code:
root@proxmox-1:~# ip address show; ip route show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether ec:8e:b5:6d:a2:19 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ec:8e:b5:6d:a2:19 brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.71/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fdc0:ffee:dad:1::71/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::ee8e:b5ff:fe6d:a219/64 scope link
       valid_lft forever preferred_lft forever
4: veth601i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr601i0 state UP group default qlen 1000
    link/ether fe:9f:b1:87:dd:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0
5: vmbr0v6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ec:8e:b5:6d:a2:19 brd ff:ff:ff:ff:ff:ff
6: enp1s0.6@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v6 state UP group default qlen 1000
    link/ether ec:8e:b5:6d:a2:19 brd ff:ff:ff:ff:ff:ff
7: fwbr601i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 86:f7:a7:2b:4b:df brd ff:ff:ff:ff:ff:ff
8: fwpr601p0@fwln601i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v6 state UP group default qlen 1000
    link/ether b2:18:5c:0e:3d:be brd ff:ff:ff:ff:ff:ff
9: fwln601i0@fwpr601p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr601i0 state UP group default qlen 1000
    link/ether 86:f7:a7:2b:4b:df brd ff:ff:ff:ff:ff:ff
default via 10.10.1.1 dev vmbr0 proto kernel onlink
10.10.1.0/24 dev vmbr0 proto kernel scope link src 10.10.1.71

Passwordless SSH works in both directions between the nodes.

Yes, your personal notes are correct - I tried to join using the hostname and failed.
After I used the IP address, the cluster was allegedly built, but trouble arose afterwards.

Thank you for looking into this!
 
Yes, your personal notes are correct - I tried to join using the hostname and failed.
Thank you for confirming my written notes :)

After I used the IP address, the cluster was allegedly built, but trouble arose afterwards.
Fine, the cluster was built. You could immediately verify this with pvecm status.

/etc/hosts: I would expect both files to contain the exact same information. You can just create one of them and "scp" the file to the other node - no changes/adjustments necessary. Does "ping proxmox-2" work on proxmox-1? Does "ssh proxmox-2" work on both machines?
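Based on the files you posted, I would expect something like this on both machines (IPv6 lines omitted here for brevity):

Code:
# /etc/hosts -- identical on proxmox-1 and proxmox-2
127.0.0.1   localhost.localdomain localhost
10.10.1.71  proxmox-1.local proxmox-1
10.10.1.72  proxmox-2.local proxmox-2

# then, from proxmox-1, copy it over once:
scp /etc/hosts root@proxmox-2:/etc/hosts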

(Nearly) the same for /etc/network/interfaces: with the obvious exception of the assigned IP addresses, the structure, the naming and the specification of the bridge should be exactly the same.

To be clear: I do not see an actual problem with your configuration, but something is producing trouble...


PS: we didn't talk about quorum yet (search for it) and the implicit need of having one. Just mentioning it because administration of a two-node cluster is not possible while one node has a network malfunction. Look up "pvecm expected 1" for troubleshooting.
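For example, if one of two nodes drops out and the survivor blocks all changes:

Code:
pvecm status        # will show "Quorate: No" when quorum is lost
pvecm expected 1    # temporarily lower the expected votes so the remaining node becomes usable again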
 
Well... Before I think about my qdevice, I'd first like to get a cluster created at all.

I'm willing to deliver any log available during the cluster building process. Which one would be beneficial?
 
Well... I tried it again.
Here is the log of the joining node:

(see attachment log.txt)

...and the log of the cluster node: log2.txt
 

Attachments

  • log.txt (35.1 KB)
  • log2.txt (36.2 KB)
Ceph is also an enterprise repo...
True. BUT there is also a second Ceph repo, the no-subscription one.

This is the output for mine:
Code:
cat /etc/apt/sources.list.d/ceph.list
# deb http://download.proxmox.com/debian/ceph-quincy bookworm enterprise
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
As you can see, the enterprise line is disabled but the no-subscription one is NOT.
 
Well.
I added the repo (and afterwards did an apt-get update), and no update was offered to me. Obviously because I don't use Ceph.
Doesn't look like this leads anywhere...
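For what it's worth, a quick way to see whether any Ceph packages are installed at all (and hence whether that repo matters):

Code:
# list installed ceph-related packages, if any
dpkg -l 'ceph*' | grep '^ii'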
 
I'd do the following, if I were you:

  1. Reinstall the node from scratch. Get it working, add the no-subscription repos. In the GUI under Updates: Refresh & then >_ Upgrade. Then reboot.
  2. Repeat all of above on other node.
  3. Make sure both nodes are pingable from one to the other.
  4. Then try clustering.
 
