One of my two nodes doesn't show in the GUI; it's always "server offline". I'm 100% sure I did something wrong and I don't know what. Can you analyse this or help me?

cyqpann

Member
Nov 18, 2021
37
0
11
44
I've had a weird problem for three months with one of my two nodes. I've done something wrong and honestly I don't know what happened.

alpha and bravo
alpha: 10.0.100.10/24
bravo: 10.0.100.11/24
opnsense: 10.0.100.1/24


The problem is that when I'm on alpha and try to open bravo from the GUI, it always says "server offline".
I can't do anything there, except that bravo's shell is available.


[Screenshot: msedge_DMH2ZG7sQj.png]




[Screenshot: msedge_XICIXpwVH0.png]

[Screenshot: msedge_Q3hanxylcR.png]

If I press OK, I can open bravo's Shell and Certificates pages and upload a certificate, but that's about it.

[Screenshot: msedge_lze5m0V9Zp.png]

I can use MobaXterm to SSH into both alpha and bravo, and I can ssh root@10.0.100.10 and root@10.0.100.11 from alpha's GUI shell, but bravo always shows offline.

From alpha's shell, I can ssh root@10.0.100.11.


Quorum seems okay. My son, on Opera, says it's an SSL cert problem; he can see the error message, but I can't see it in Chrome or Edge.
I also tried Brave, same problem.


Here are the basic shell tests I've done, because I don't know what else to do and it makes no sense to reinstall the OS and lose everything on bravo.

Code:
Linux alpha 6.8.12-10-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-10 (2025-04-18T07:39Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Jul 15 19:14:25 EDT 2025 from 10.0.100.11 on pts/7
root@alpha:~# ping 10.0.100.10
PING 10.0.100.10 (10.0.100.10) 56(84) bytes of data.
64 bytes from 10.0.100.10: icmp_seq=1 ttl=64 time=0.031 ms
^C
--- 10.0.100.10 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
root@alpha:~# ping alpha.lafamilleparfaite.com
PING alpha.lafamilleparfaite.com (10.0.100.10) 56(84) bytes of data.
64 bytes from alpha.lafamilleparfaite.com (10.0.100.10): icmp_seq=1 ttl=64 time=0.020 ms
^C
--- alpha.lafamilleparfaite.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms
root@alpha:~# ping bravo.lafamilleparfaite.com
PING bravo.lafamilleparfaite.com (10.0.100.11) 56(84) bytes of data.
64 bytes from bravo.lafamilleparfaite.com (10.0.100.11): icmp_seq=1 ttl=64 time=0.247 ms
^C
--- bravo.lafamilleparfaite.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
root@alpha:~# ping 10.0.100.11
PING 10.0.100.11 (10.0.100.11) 56(84) bytes of data.
64 bytes from 10.0.100.11: icmp_seq=1 ttl=64 time=0.232 ms
^C
--- 10.0.100.11 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms
root@alpha:~# ssh root@10.0.100.11
Linux bravo 6.8.12-5-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-5 (2024-12-03T10:26Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Jul 15 19:16:33 2025
root@bravo:~# ping google.com
^C
root@bravo:~# ping 10.0.100.10
PING 10.0.100.10 (10.0.100.10) 56(84) bytes of data.
64 bytes from 10.0.100.10: icmp_seq=1 ttl=64 time=0.181 ms
^C
--- 10.0.100.10 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
root@bravo:~# ping alpha.lafamilleparfaite.com
^C
root@bravo:~# ping bravo.lafamilleparfaite.com
PING bravo.lafamilleparfaite.com (10.0.100.11) 56(84) bytes of data.
64 bytes from bravo.lafamilleparfaite.com (10.0.100.11): icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from bravo.lafamilleparfaite.com (10.0.100.11): icmp_seq=2 ttl=64 time=0.015 ms
^C
--- bravo.lafamilleparfaite.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1040ms
rtt min/avg/max/mdev = 0.015/0.026/0.038/0.011 ms
root@bravo:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.0.100.11 bravo.lafamilleparfaite.com bravo

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
root@bravo:~# ssh root@10.0.100.10
Linux alpha 6.8.12-10-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-10 (2025-04-18T07:39Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Jul 15 19:16:41 2025
root@alpha:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.0.100.10 alpha.lafamilleparfaite.com alpha

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
root@alpha:~# ^C
root@alpha:~#

Then I followed someone else who seemed to have a similar problem.

On each node (alpha and bravo) I ran:
systemctl restart pvedaemon pveproxy
pvecm updatecerts -F
systemctl restart pvedaemon pveproxy
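For what it's worth, the "server offline" symptom with SSH still working usually comes down to pveproxy on alpha not trusting the certificate bravo serves on port 8006, which is exactly what the updatecerts/restart sequence above is meant to fix. The underlying check is just an openssl SHA-256 fingerprint comparison. Here is a self-contained sketch on a throwaway cert (the real comparison would be between what bravo serves on :8006 and /etc/pve/nodes/bravo/pve-ssl.pem, a path assumed from a standard PVE install, not shown in this thread):

```shell
# Illustration with a throwaway self-signed cert (not the real node certs):
# the same fingerprint command applies to /etc/pve/nodes/bravo/pve-ssl.pem.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=bravo.lafamilleparfaite.com" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# Print the SHA-256 fingerprint, the value pveproxy compares between nodes.
fp=$(openssl x509 -noout -fingerprint -sha256 -in "$tmp/cert.pem")
echo "$fp"
rm -rf "$tmp"
```

On the nodes themselves, the served fingerprint could be read with something like `echo | openssl s_client -connect 10.0.100.11:8006 2>/dev/null | openssl x509 -noout -fingerprint -sha256`; if it differs from the stored copy, the `pvecm updatecerts -F` plus pveproxy restart above is the usual refresh.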

Also, is it normal that I can't ping my OPNsense at 10.0.100.1 from bravo, while alpha can ping 10.0.100.1?

Code:
root@bravo:~# ping 10.0.100.1
PING 10.0.100.1 (10.0.100.1) 56(84) bytes of data.
^C
--- 10.0.100.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1011ms

root@bravo:~# ssh root@10.0.100.10
Linux alpha 6.8.12-10-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-10 (2025-04-18T07:39Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Jul 15 19:19:04 2025 from 10.0.100.11
root@alpha:~# ping 10.0.100.1
PING 10.0.100.1 (10.0.100.1) 56(84) bytes of data.
64 bytes from 10.0.100.1: icmp_seq=1 ttl=64 time=0.189 ms
^C
--- 10.0.100.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
root@alpha:~#
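On the one-way ping question: 100% loss to a host on the same /24 is usually ARP or a firewall rule on the gateway rather than routing. As a runnable illustration of the two quick checks (shown here against loopback so the output shape is visible; the real targets on bravo are in the comments):

```shell
# On bravo (addresses from this thread) the real checks would be:
#   ip route get 10.0.100.1   # should resolve on-link via vmbr100, src 10.0.100.11
#   ip neigh show 10.0.100.1  # FAILED/INCOMPLETE = OPNsense never answered ARP
# Runnable stand-in against loopback, just to show the commands and output:
ip route get 127.0.0.1
ip neigh show dev lo
```

If the neighbour entry for 10.0.100.1 on bravo shows FAILED while alpha's shows REACHABLE, the OPNsense side (firewall rule or interface binding) would be the place to look.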

I'm not quite sure what to check to bring bravo back with all the config I had when I set up both nodes a year ago.

I really think my mistake was trying to add ACME at the datacenter level, or on alpha, and that I messed everything up by deleting a cert, since the front end was all handled by Nginx Proxy Manager. At that moment I lost access to bravo, and its GUI has been missing ever since, unlike alpha's.

Since then I've been stuck like this, and it's preventing me from creating my live replica on bravo.

Do you need more logs, more screenshots, more anything?

Anything that can help me fix this is welcome.

Thanks a lot!
 
Post this info of both hosts in CODE tags:
  • Of both hosts:
    • cat /etc/hosts
    • pvecm status
    • corosync-cfgtool -s
    • ip addr
    • wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2> /dev/null
  • cat /etc/pve/.members
 

Alpha:

Code:
root@alpha:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.0.100.10 alpha.lafamilleparfaite.com alpha

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
root@alpha:~# pvecm status
Cluster information
-------------------
Name:             Genevieve
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Jul 16 07:23:22 2025
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.74
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 fd10:0:0:150::10%31687 (local)
0x00000002          1 fd10:0:0:150::11%31687
root@alpha:~# corosync-cfgtool -s
Local node ID 1, transport knet
LINK ID 0 udp
        addr    = fd10:0:0:150::10%2
        status:
                nodeid:          1:     localhost
                nodeid:          2:     connected
root@alpha:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eno3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
    altname enp7s0f0
3: enp65s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr150 state UP group default qlen 1000
    link/ether 00:0a:f7:45:ac:50 brd ff:ff:ff:ff:ff:ff
4: eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff permaddr 24:6e:96:0d:19:ad
    altname enp7s0f1
5: enp65s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 00:0a:f7:45:ac:52 brd ff:ff:ff:ff:ff:ff
6: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 24:6e:96:0d:19:a8 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
7: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 24:6e:96:0d:19:aa brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr100 state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
9: bond0.50@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr50 state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
10: vmbr50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
    inet6 fe80::266e:96ff:fe0d:19ac/64 scope link
       valid_lft forever preferred_lft forever
11: bond0.60@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr60 state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
12: vmbr60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
    inet6 fd10::60:266e:96ff:fe0d:19ac/64 scope global dynamic mngtmpaddr
       valid_lft 86399sec preferred_lft 14399sec
    inet6 2001:470:b185:60:266e:96ff:fe0d:19ac/64 scope global dynamic mngtmpaddr
       valid_lft 86399sec preferred_lft 14399sec
    inet6 fe80::266e:96ff:fe0d:19ac/64 scope link
       valid_lft forever preferred_lft forever
13: bond0.70@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr70 state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
14: vmbr70: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
    inet6 fd10::70:266e:96ff:fe0d:19ac/64 scope global dynamic mngtmpaddr
       valid_lft 86396sec preferred_lft 14396sec
    inet6 2001:470:b185:70:266e:96ff:fe0d:19ac/64 scope global dynamic mngtmpaddr
       valid_lft 86396sec preferred_lft 14396sec
    inet6 fe80::266e:96ff:fe0d:19ac/64 scope link
       valid_lft forever preferred_lft forever
15: bond0.80@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr80 state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
16: vmbr80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
    inet6 fd10::80:266e:96ff:fe0d:19ac/64 scope global dynamic mngtmpaddr
       valid_lft 86395sec preferred_lft 14395sec
    inet6 2001:470:b185:80:266e:96ff:fe0d:19ac/64 scope global dynamic mngtmpaddr
       valid_lft 86395sec preferred_lft 14395sec
    inet6 fe80::266e:96ff:fe0d:19ac/64 scope link
       valid_lft forever preferred_lft forever
17: bond0.90@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr90 state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
18: vmbr90: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
    inet6 fe80::266e:96ff:fe0d:19ac/64 scope link
       valid_lft forever preferred_lft forever
19: vmbr100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 24:6e:96:0d:19:ac brd ff:ff:ff:ff:ff:ff
    inet 10.0.100.10/24 scope global vmbr100
       valid_lft forever preferred_lft forever
    inet6 fd10:0:0:100::10/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::266e:96ff:fe0d:19ac/64 scope link
       valid_lft forever preferred_lft forever
20: vmbr150: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0a:f7:45:ac:50 brd ff:ff:ff:ff:ff:ff
    inet 10.0.150.10/24 scope global vmbr150
       valid_lft forever preferred_lft forever
    inet6 fd10:0:0:150::10/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::20a:f7ff:fe45:ac50/64 scope link
       valid_lft forever preferred_lft forever
25: veth1005i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr100 state UP group default qlen 1000
    link/ether fe:d8:04:1a:fe:4e brd ff:ff:ff:ff:ff:ff link-netnsid 1
26: veth1020i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr100 state UP group default qlen 1000
    link/ether fe:f2:68:e1:fc:57 brd ff:ff:ff:ff:ff:ff link-netnsid 2
562: veth5264i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr5264i0 state UP group default qlen 1000
    link/ether fe:42:4a:79:c1:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
563: fwbr5264i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c6:7c:08:45:1c:18 brd ff:ff:ff:ff:ff:ff
564: fwpr5264p0@fwln5264i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr50 state UP group default qlen 1000
    link/ether 22:9b:10:05:6e:2f brd ff:ff:ff:ff:ff:ff
565: fwln5264i0@fwpr5264p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr5264i0 state UP group default qlen 1000
    link/ether c6:7c:08:45:1c:18 brd ff:ff:ff:ff:ff:ff
574: veth6207i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr6207i0 state UP group default qlen 1000
    link/ether fe:f9:72:2a:bb:84 brd ff:ff:ff:ff:ff:ff link-netnsid 8
575: fwbr6207i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e2:59:b4:11:16:90 brd ff:ff:ff:ff:ff:ff
576: fwpr6207p0@fwln6207i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr60 state UP group default qlen 1000
    link/ether 82:88:fa:3a:14:e7 brd ff:ff:ff:ff:ff:ff
577: fwln6207i0@fwpr6207p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr6207i0 state UP group default qlen 1000
    link/ether e2:59:b4:11:16:90 brd ff:ff:ff:ff:ff:ff
582: veth5233i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr5233i0 state UP group default qlen 1000
    link/ether fe:c4:21:da:5b:b3 brd ff:ff:ff:ff:ff:ff link-netnsid 5
583: fwbr5233i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:b7:16:01:d9:87 brd ff:ff:ff:ff:ff:ff
584: fwpr5233p0@fwln5233i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr50 state UP group default qlen 1000
    link/ether b6:e4:de:4c:a1:8a brd ff:ff:ff:ff:ff:ff
585: fwln5233i0@fwpr5233p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr5233i0 state UP group default qlen 1000
    link/ether ae:b7:16:01:d9:87 brd ff:ff:ff:ff:ff:ff
586: veth50267i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr50267i0 state UP group default qlen 1000
    link/ether fe:f1:93:9d:b8:63 brd ff:ff:ff:ff:ff:ff link-netnsid 9
587: fwbr50267i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:b1:f0:5a:e6:23 brd ff:ff:ff:ff:ff:ff
588: fwpr50267p0@fwln50267i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr50 state UP group default qlen 1000
    link/ether c2:71:90:00:50:2e brd ff:ff:ff:ff:ff:ff
589: fwln50267i0@fwpr50267p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr50267i0 state UP group default qlen 1000
    link/ether fe:b1:f0:5a:e6:23 brd ff:ff:ff:ff:ff:ff
592: tap1021i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr100 state UNKNOWN group default qlen 1000
    link/ether e2:8e:79:a5:65:c8 brd ff:ff:ff:ff:ff:ff
93: veth5229i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr5229i0 state UP group default qlen 1000
    link/ether fe:88:4f:53:c4:94 brd ff:ff:ff:ff:ff:ff link-netnsid 4
94: fwbr5229i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:9c:42:eb:d5:c4 brd ff:ff:ff:ff:ff:ff
95: fwpr5229p0@fwln5229i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr50 state UP group default qlen 1000
    link/ether c2:9e:21:ee:bf:50 brd ff:ff:ff:ff:ff:ff
96: fwln5229i0@fwpr5229p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr5229i0 state UP group default qlen 1000
    link/ether 32:9c:42:eb:d5:c4 brd ff:ff:ff:ff:ff:ff
373: veth52234i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr52234i0 state UP group default qlen 1000
    link/ether fe:4c:fa:67:13:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 15
374: fwbr52234i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e6:ab:55:48:f6:fb brd ff:ff:ff:ff:ff:ff
375: fwpr52234p0@fwln52234i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr50 state UP group default qlen 1000
    link/ether 96:58:c9:c4:a2:3c brd ff:ff:ff:ff:ff:ff
376: fwln52234i0@fwpr52234p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr52234i0 state UP group default qlen 1000
    link/ether e6:ab:55:48:f6:fb brd ff:ff:ff:ff:ff:ff
385: veth1115i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr100 state UP group default qlen 1000
    link/ether fe:47:06:ae:e7:dd brd ff:ff:ff:ff:ff:ff link-netnsid 3
386: veth6206i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr6206i0 state UP group default qlen 1000
    link/ether fe:4d:31:17:c9:18 brd ff:ff:ff:ff:ff:ff link-netnsid 6
387: fwbr6206i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 76:41:88:88:9b:8d brd ff:ff:ff:ff:ff:ff
388: fwpr6206p0@fwln6206i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr60 state UP group default qlen 1000
    link/ether da:90:af:1c:a7:5a brd ff:ff:ff:ff:ff:ff
389: fwln6206i0@fwpr6206p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr6206i0 state UP group default qlen 1000
    link/ether 76:41:88:88:9b:8d brd ff:ff:ff:ff:ff:ff
root@alpha:~# wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2> /dev/null
    <title>alpha - Proxmox Virtual Environment</title>
root@alpha:~# cat /etc/pve/.members
{
"nodename": "alpha",
"version": 4,
"cluster": { "name": "Genevieve", "version": 2, "nodes": 2, "quorate": 1 },
"nodelist": {
  "alpha": { "id": 1, "online": 1, "ip": "10.0.100.10"},
  "bravo": { "id": 2, "online": 1, "ip": "10.0.100.11"}
  }
}
 
Bravo
Code:
Last login: Tue Jul 15 23:16:33 2025
root@bravo:~# cat /etc/hosts
pvecm status
corosync-cfgtool -s
ip addr
wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2> /dev/null
cat /etc/pve/.members
127.0.0.1 localhost.localdomain localhost
10.0.100.11 bravo.lafamilleparfaite.com bravo

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Cluster information
-------------------
Name:             Genevieve
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Jul 16 07:25:48 2025
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1.74
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 fd10:0:0:150::10%32261
0x00000002          1 fd10:0:0:150::11%32261 (local)
Local node ID 2, transport knet
LINK ID 0 udp
        addr    = fd10:0:0:150::11%2
        status:
                nodeid:          1:     connected
                nodeid:          2:     localhost
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:ca:3a:ee:48:77 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:ca:3a:ee:48:79 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f1
4: eno3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
    altname enp1s0f2
5: eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff permaddr b8:ca:3a:ee:48:7d
    altname enp1s0f3
6: enp66s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr150 state UP group default qlen 1000
    link/ether 00:0a:f7:41:6f:d0 brd ff:ff:ff:ff:ff:ff
7: enp66s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 00:0a:f7:41:6f:d2 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr100 state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
9: bond0.50@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr50 state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
10: vmbr50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::baca:3aff:feee:487b/64 scope link
       valid_lft forever preferred_lft forever
11: bond0.60@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr60 state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
12: vmbr60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
    inet6 fd10::60:baca:3aff:feee:487b/64 scope global dynamic mngtmpaddr
       valid_lft 86391sec preferred_lft 14391sec
    inet6 2001:470:b185:60:baca:3aff:feee:487b/64 scope global dynamic mngtmpaddr
       valid_lft 86391sec preferred_lft 14391sec
    inet6 fe80::baca:3aff:feee:487b/64 scope link
       valid_lft forever preferred_lft forever
13: bond0.70@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr70 state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
14: vmbr70: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
    inet6 fd10::70:baca:3aff:feee:487b/64 scope global dynamic mngtmpaddr
       valid_lft 86399sec preferred_lft 14399sec
    inet6 2001:470:b185:70:baca:3aff:feee:487b/64 scope global dynamic mngtmpaddr
       valid_lft 86399sec preferred_lft 14399sec
    inet6 fe80::baca:3aff:feee:487b/64 scope link
       valid_lft forever preferred_lft forever
15: bond0.80@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr80 state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
16: vmbr80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
    inet6 fd10::80:baca:3aff:feee:487b/64 scope global dynamic mngtmpaddr
       valid_lft 86399sec preferred_lft 14399sec
    inet6 2001:470:b185:80:baca:3aff:feee:487b/64 scope global dynamic mngtmpaddr
       valid_lft 86399sec preferred_lft 14399sec
    inet6 fe80::baca:3aff:feee:487b/64 scope link
       valid_lft forever preferred_lft forever
17: bond0.90@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr90 state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
18: vmbr90: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::baca:3aff:feee:487b/64 scope link
       valid_lft forever preferred_lft forever
19: vmbr100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ca:3a:ee:48:7b brd ff:ff:ff:ff:ff:ff
    inet 10.0.100.11/24 scope global vmbr100
       valid_lft forever preferred_lft forever
    inet6 fd10:0:0:100::11/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::baca:3aff:feee:487b/64 scope link
       valid_lft forever preferred_lft forever
20: vmbr150: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0a:f7:41:6f:d0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.150.11/24 scope global vmbr150
       valid_lft forever preferred_lft forever
    inet6 fd10:0:0:150::11/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::20a:f7ff:fe41:6fd0/64 scope link
       valid_lft forever preferred_lft forever
    <title>bravo - Proxmox Virtual Environment</title>
{
"nodename": "bravo",
"version": 6,
"cluster": { "name": "Genevieve", "version": 2, "nodes": 2, "quorate": 1 },
"nodelist": {
  "alpha": { "id": 1, "online": 1, "ip": "10.0.100.10"},
  "bravo": { "id": 2, "online": 1, "ip": "10.0.100.11"}
  }
}
root@bravo:~#

I had to make two posts because there were more than 16,384 characters.
 
Missing the output of these commands on both nodes:
  • wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2> /dev/null
  • systemctl status pveproxy
It catches my attention that your corosync is using IPv6 instead of IPv4, although that's completely unrelated to your issue and is working perfectly.
 

Yeah, my bad, I thought it showed in the log when I copied the commands you asked for earlier. Here they are.

Yeah, IPv4... IPv6...

I'm not the one who installed Proxmox back then. I asked for help on a random Discord and a network engineer helped me set up my lab. Since my ISP only offers IPv4, not IPv6, he created a tunnel for me with a tunnel broker in the EU. He explained that IPv6 is way better and easier for port management and all that stuff, but I'm so new to everything that he lost me... I'm starting to understand IPv4 and /8, /16, /24... lol

So this guy did all the networking with VLANs in OPNsense, updated my server, updated my switch, did a really good job, but left no documentation, and now I think he may have died, because I have no more news from him and I don't see him on Discord anymore.
Setting up a tunnel broker and asking my ISP about IPv6 was a mess, because with my setup I'm running 6-7 future businesses from home for family members, since I hit a free jackpot of scrap-metal junk hardware.

I'm really an IT novice; I'm learning Debian and Proxmox, security, networking, DevOps and all that stuff the hard way.

So here are the two commands I ran but didn't copy over.
This is alpha:
Code:
root@alpha:~# wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2&gt; /dev/null
[1] 53708
-bash: gt: command not found
-bash: /dev/null: Permission denied
grep: 2: No such file or directory
root@alpha:~# systemctl status pveproxy
● pveproxy.service - PVE API Proxy Server
     Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; preset: enabled)
     Active: active (running) since Tue 2025-07-15 18:28:42 EDT; 13h ago
    Process: 1839660 ExecReload=/usr/bin/pveproxy restart (code=exited, status=0/SUCCESS)
   Main PID: 43815 (pveproxy)
      Tasks: 4 (limit: 619007)
     Memory: 196.3M
        CPU: 16min 47.930s
     CGroup: /system.slice/pveproxy.service
             ├─  23351 "pveproxy worker"
             ├─  43815 pveproxy
             ├─  46797 "pveproxy worker"
             └─4153085 "pveproxy worker"

Jul 16 07:52:25 alpha pveproxy[4153085]: Clearing outdated entries from certificate cache
Jul 16 07:52:27 alpha pveproxy[23351]: Clearing outdated entries from certificate cache
Jul 16 07:52:41 alpha pveproxy[4145734]: Clearing outdated entries from certificate cache
Jul 16 07:53:04 alpha pveproxy[43815]: worker 4145734 finished
Jul 16 07:53:04 alpha pveproxy[43815]: starting 1 worker(s)
Jul 16 07:53:04 alpha pveproxy[43815]: worker 46797 started
Jul 16 07:53:04 alpha pveproxy[46792]: got inotify poll request in wrong process - disabling inotify
Jul 16 07:53:29 alpha pveproxy[23351]: Could not verify remote node certificate '47:07:C1:9B:DF:54:61:51:BE:D4:5E:10:EC:B>
Jul 16 07:53:41 alpha pveproxy[46797]: Clearing outdated entries from certificate cache
Jul 16 07:53:46 alpha pveproxy[46792]: worker exit
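
The "Could not verify remote node certificate" line in the log above looks like the relevant clue: if pveproxy on alpha rejects bravo's certificate, the GUI shows the node as offline even though SSH keeps working. A sketch of how to compare the certificate bravo actually serves with the one on its disk (IPs taken from this thread; `/etc/pve/local/pve-ssl.pem` is the standard PVE certificate path — adjust as needed):

```shell
# From alpha: fingerprint of the certificate bravo serves on port 8006
openssl s_client -connect 10.0.100.11:8006 </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256

# On bravo: fingerprint of the certificate it should be serving
openssl x509 -noout -fingerprint -sha256 -in /etc/pve/local/pve-ssl.pem

# If the fingerprints differ, regenerating/redistributing the node certs
# usually helps; this is the same command the pveproxy unit runs at start
# (visible in the ExecStartPre line of the status output):
# pvecm updatecerts --force
```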



This is bravo.
Code:
root@bravo:~# wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2&gt; /dev/null
[3] 2268282
-bash: gt: command not found
[2]   Exit 2                  wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2
-bash: /dev/null: Permission denied
root@bravo:~# grep: 2: No such file or directory


root@bravo:~# systemctl status pveproxy
● pveproxy.service - PVE API Proxy Server
     Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; preset: enabled)
     Active: active (running) since Tue 2025-07-15 18:48:18 EDT; 13h ago
    Process: 2089511 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
    Process: 2089513 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
    Process: 2160893 ExecReload=/usr/bin/pveproxy restart (code=exited, status=0/SUCCESS)
   Main PID: 2089531 (pveproxy)
      Tasks: 4 (limit: 541593)
     Memory: 180.5M
        CPU: 16.257s
     CGroup: /system.slice/pveproxy.service
             ├─2089531 pveproxy
             ├─2160915 "pveproxy worker"
             ├─2160916 "pveproxy worker"
             └─2160917 "pveproxy worker"

Jul 16 00:00:16 bravo pveproxy[2089531]: starting 3 worker(s)
Jul 16 00:00:16 bravo pveproxy[2089531]: worker 2160915 started
Jul 16 00:00:16 bravo pveproxy[2089531]: worker 2160916 started
Jul 16 00:00:16 bravo pveproxy[2089531]: worker 2160917 started
Jul 16 00:00:21 bravo pveproxy[2089534]: worker exit
Jul 16 00:00:21 bravo pveproxy[2089533]: worker exit
Jul 16 00:00:21 bravo pveproxy[2089532]: worker exit
Jul 16 00:00:21 bravo pveproxy[2089531]: worker 2089533 finished
Jul 16 00:00:21 bravo pveproxy[2089531]: worker 2089534 finished
Jul 16 00:00:21 bravo pveproxy[2089531]: worker 2089532 finished
 
Sorry, I pasted the wrong command. The right one is:
Code:
wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2> /dev/null

I'm not the one who installed Proxmox back then. My ISP doesn't offer IPv6.
Someone from the EU remotely installed both Proxmox nodes with my help and did the config with a tunnel broker, creating an IPv6 tunnel to the EU, which gives me a ping of about 250 ms instead of the 4 ms from my ISP. Unfortunately he created the bond and the IPv6 setup, and I didn't get much explanation.
So the two nodes are in different locations, 250 ms apart? That is an issue that will eventually cause pmxcfs to fail. Check the cluster requirements [1]: nodes must be at most ~5 ms from each other.

[1] https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_cluster_network
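
To verify what latency the cluster link actually sees, something like this could be run on alpha (the bravo IP is taken from the first post; `pvecm` and `corosync-cfgtool` are the standard PVE/corosync tools — treat this as a sketch):

```shell
# Quorum and membership as PVE sees it
pvecm status

# Per-link status from corosync itself
corosync-cfgtool -s

# Round-trip time on the cluster network; for a supported cluster
# this should stay in the low single-digit milliseconds
ping -c 20 -q 10.0.100.11
```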
 
No no, the nodes are stacked together in the same rack; my explanation was bad, sorry.

The IPv6 tunnel broker gives me 250 ms latency when it's activated in OPNsense and used by alpha and bravo for updates, downloads and access to my website.

We never changed anything that was configured for IPv6 back to IPv4, since everything was working fine; OPNsense is set to use IPv4 over IPv6.
My latency at the moment is 2-4 ms, instead of the 250 ms over IPv6 when the broker is active :)

The two nodes are linked together over 10G SFP+, about 3 inches apart.


Here is the command:
Code:
root@bravo:~# wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2> /dev/null
    <title>bravo - Proxmox Virtual Environment</title>
root@bravo:~# ssh root@10.0.100.10
Linux alpha 6.8.12-10-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-10 (2025-04-18T07:39Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Jul 16 08:24:47 2025
root@alpha:~# wget --no-check-certificate -qO - https://$HOSTNAME:8006 | grep 'Proxmox Virtual Environment' 2> /dev/null
    <title>alpha - Proxmox Virtual Environment</title>
root@alpha:~#
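
Since the local test passes on both nodes, the cross-node variant of the same check (same command, just pointed at the other node's IP from earlier in the thread) would show whether each node can reach the other's web interface at all:

```shell
# From alpha, fetch bravo's login page directly:
wget --no-check-certificate -qO - https://10.0.100.11:8006 | grep '<title>'

# And the reverse, from bravo:
wget --no-check-certificate -qO - https://10.0.100.10:8006 | grep '<title>'
```

Note that this skips certificate verification; if it succeeds but the GUI still says offline, that would point back at the "Could not verify remote node certificate" error in the pveproxy log, since the GUI proxying does verify the remote node's certificate.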
 
If someone tells me it would be easier and faster to reinstall, should I reinstall Proxmox only on bravo, or do I need to do both, alpha and bravo?

If the answer is yes, can you tell me how to save the config of all the networks, every bond, the exact names and everything else required? I'm not scared to reinstall; I just don't understand all the OPNsense configuration: the reserved MACs, the port forwards and matching rules for the mail server and website, the internal domain, the reserved IPs configured to make everything work in an advanced OPNsense setup. There are also hardware bonds on both Proxmox nodes for the SFP links that I'm afraid I won't be able to recreate, and I don't know where to check because the guy did it all so quickly. I can't lose the VLANs in OPNsense that are linked to all the network ports, since several of us are trying to launch small businesses that will see the light of day soon, if everything works properly before I put this into production.

It really needs to stay the same, since neighbors' computers and smartphones connect to the internet through 2 APs we set up over WiFi, and I can't mix things up.

Back then, it was a mess to remove a node from a cluster because of quorum, and I never understood what to actually do; that's the part I'm scared of. I used to run a 10-node datacenter, but it was too much for hosting a single website, so I reduced it to 2 nodes and reinstalled everything properly. I've seen so many posts describing different setups. What should I do to fix this? :/

Is there a way to fix this first, or am I stuck reinstalling bravo, alpha, or both? Bravo is practically unusable for me at the moment even though I have shell access, and I don't have enough knowledge to debug this with simple web searches, even with ChatGPT doing its best too... hihihi :/

Should I rewrite the configs manually after I format, or can I copy-paste the files over SSH with MobaXterm, or save them on a USB key and copy them back, then reload every container I've created so far?

Does that seem like a good plan, or do you see an alternative?
 
Can't really tell what's going on. It might be the tunnel interfering with some traffic, it might be firewall rules, it might be an issue with the certificate... Basic network connectivity looks fine, the services are up, and the nodes seem to reach each other, since the shell part works. I can't really tell you "try this or that".

Maybe check the journald logs on both nodes while you try to access one from the other, and use tcpdump to verify that the packets are really reaching the other node.
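
Something along these lines, as a sketch (port 8006 is the PVE web/API port; IPs are the ones from this thread):

```shell
# On bravo: follow the pveproxy logs live while reproducing the problem
journalctl -f -u pveproxy

# In a second shell on bravo: check whether alpha's requests arrive at all
tcpdump -ni any tcp port 8006 and host 10.0.100.10
```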
 
okie thanks a lot for this