Node IP change

pxo

Renowned Member
Nov 3, 2013
I have 4 nodes in total.

Node-3 has to move to a different subnet (I only use local storage).
On that node I have now changed the IP (old: 192.168.0.9, new: 10.8.5.10). This is what I did:

1. Changed the IP via the web UI on Node-3, which then looks like this:
Code:
root@px3 ~ > cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address  10.8.5.10
        netmask  255.255.255.0
        gateway  10.8.5.1
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0

2. Adjusted /etc/hosts:
Code:
root@px3 ~ > cat /etc/hosts
127.0.0.1       localhost
10.8.5.10       px3.mydomain.net px3

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

3. Shut down the node
4. Adjusted the VLAN on the switch
5. Started the node

Network-wise everything is OK, the node is reachable.
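A quick sanity check after the reboot could look roughly like this (sketch; these exact commands are not quoted from the original post):
Code:
ip -4 addr show vmbr0    # confirm the bridge carries 10.8.5.10/24
ping -c 3 10.8.5.1       # confirm the new gateway answers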

I did NOT modify the file /etc/pve/corosync.conf (it only contains the IP of Node-1).
Code:
root@px1 / > cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: px1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: px1
  }
  node {
    name: px2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: px2
  }
  node {
    name: px3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: px3
  }
  node {
    name: px4
    nodeid: 4
    quorum_votes: 1
    ring0_addr: px4
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: Proxmox
  config_version: 14
  interface {
    bindnetaddr: 192.168.0.7
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
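(For reference: if the ring addresses ever do need to change cluster-wide, the usual way is to edit the cluster-wide copy on a node that still has quorum and bump config_version - roughly as sketched below; this was not done in this thread.)
Code:
# sketch, on a quorate node such as px1
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
editor /etc/pve/corosync.conf.new      # adjust addresses, increment config_version
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf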

On Node-3 I see the following in the log:
Code:
Jan  4 14:04:00 px3 systemd[1]: Starting Proxmox VE replication runner...
Jan  4 14:04:00 px3 pvesr[19482]: trying to acquire cfs lock 'file-replication_cfg' ...
Jan  4 14:04:01 px3 pvesr[19482]: trying to acquire cfs lock 'file-replication_cfg' ...
Jan  4 14:04:02 px3 pvesr[19482]: trying to acquire cfs lock 'file-replication_cfg' ...
Jan  4 14:04:03 px3 pvesr[19482]: trying to acquire cfs lock 'file-replication_cfg' ...
Jan  4 14:04:04 px3 pvesr[19482]: trying to acquire cfs lock 'file-replication_cfg' ...
Jan  4 14:04:05 px3 pvesr[19482]: trying to acquire cfs lock 'file-replication_cfg' ...
Jan  4 14:04:06 px3 pvesr[19482]: trying to acquire cfs lock 'file-replication_cfg' ...
Jan  4 14:04:07 px3 pvesr[19482]: trying to acquire cfs lock 'file-replication_cfg' ...
Jan  4 14:04:08 px3 pvesr[19482]: trying to acquire cfs lock 'file-replication_cfg' ...
Jan  4 14:04:09 px3 pvesr[19482]: error with cfs lock 'file-replication_cfg': no quorum!
Jan  4 14:04:09 px3 systemd[1]: pvesr.service: Main process exited, code=exited, status=13/n/a
Jan  4 14:04:09 px3 systemd[1]: Failed to start Proxmox VE replication runner.
Jan  4 14:04:09 px3 systemd[1]: pvesr.service: Unit entered failed state.
Jan  4 14:04:09 px3 systemd[1]: pvesr.service: Failed with result 'exit-code'.

Can I provide any other helpful logs?
Thanks for any tips.
 
Code:
root@px1 / > pvecm status
Quorum information
------------------
Date:             Fri Jan  4 14:33:27 2019
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1/18108
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.0.7 (local)
0x00000002          1 192.168.0.8
0x00000004          1 192.168.0.10

Code:
root@px3 /etc/init.d > pvecm status
Quorum information
------------------
Date:             Fri Jan  4 14:33:44 2019
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000003
Ring ID:          3/20
Quorate:          No

Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      1
Quorum:           3 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 10.8.5.10 (local)
 
Hm - I don't think this will work with our cluster stack - having a corosync ring where the nodes are in different IP networks:

* the bindnetaddr in the totem section of corosync.conf tells corosync which interface to listen on - and px3 has no IP in 192.168.0.0/24
* PVE always copies /etc/corosync/corosync.conf from /etc/pve/corosync.conf so that it is identical cluster-wide - in other words, modifying it locally will not work either

* What does corosync log? (journalctl -u corosync.service)
* Is corosync running at all?
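A minimal way to answer both questions on px3 might be (sketch; these commands are not quoted from the reply above):
Code:
journalctl -u corosync.service -b    # corosync log since the last boot
systemctl is-active corosync         # is the daemon running at all?
corosync-cfgtool -s                  # which address the ring is bound to
corosync-quorumtool -s               # quorum state as seen from this node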

In any case, it is strongly recommended to put the corosync network on its own interface - that would be the cleanest solution:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_cluster_network
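A dedicated cluster link is simply a second NIC in its own subnet on every node; the interface name eth1 and the 10.10.10.0/24 network below are made up for illustration (sketch):
Code:
auto eth1
iface eth1 inet static
        address  10.10.10.13
        netmask  255.255.255.0
# ring0_addr (or the /etc/hosts names it resolves to) would then point to the 10.10.10.x addresses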
 
Hello, thank you very much for the help.
On 2 nodes the "pvelocalhost" entry was missing in /etc/hosts; I have now corrected that.
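(For context, a typical host entry on a PVE 5 node carries the pvelocalhost alias; based on the hosts file above, on px3 that line presumably looks roughly like this - sketch:)
Code:
10.8.5.10       px3.mydomain.net px3 pvelocalhost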

I have now restarted px1 and px3.
Excerpt from the journal after the reboot of px1 and px3:

px1:
Code:
Jan 04 15:01:38 px1 systemd[1]: Starting Corosync Cluster Engine...
Jan 04 15:01:38 px1 corosync[1480]:  [MAIN  ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Jan 04 15:01:38 px1 corosync[1480]: notice  [MAIN  ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Jan 04 15:01:38 px1 corosync[1480]:  [MAIN  ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bindnow
Jan 04 15:01:38 px1 corosync[1480]: info    [MAIN  ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bindnow
Jan 04 15:01:43 px1 corosync[1480]:  [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Jan 04 15:01:43 px1 corosync[1480]: warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Jan 04 15:01:43 px1 corosync[1480]: warning [MAIN  ] Please migrate config file to nodelist.
Jan 04 15:01:43 px1 corosync[1480]:  [MAIN  ] Please migrate config file to nodelist.
Jan 04 15:01:43 px1 corosync[1480]: notice  [TOTEM ] Initializing transport (UDP/IP Multicast).
Jan 04 15:01:43 px1 corosync[1480]: notice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Jan 04 15:01:43 px1 corosync[1480]:  [TOTEM ] Initializing transport (UDP/IP Multicast).
Jan 04 15:01:43 px1 corosync[1480]:  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Jan 04 15:01:43 px1 corosync[1480]: notice  [TOTEM ] The network interface [192.168.0.7] is now up.
Jan 04 15:01:43 px1 corosync[1480]:  [TOTEM ] The network interface [192.168.0.7] is now up.
Jan 04 15:01:43 px1 corosync[1480]: notice  [SERV  ] Service engine loaded: corosync configuration map access [0]
Jan 04 15:01:43 px1 corosync[1480]: info    [QB    ] server name: cmap
Jan 04 15:01:43 px1 corosync[1480]: notice  [SERV  ] Service engine loaded: corosync configuration service [1]
Jan 04 15:01:43 px1 corosync[1480]: info    [QB    ] server name: cfg
Jan 04 15:01:43 px1 corosync[1480]: notice  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jan 04 15:01:43 px1 corosync[1480]: info    [QB    ] server name: cpg
Jan 04 15:01:43 px1 corosync[1480]: notice  [SERV  ] Service engine loaded: corosync profile loading service [4]
Jan 04 15:01:43 px1 corosync[1480]:  [SERV  ] Service engine loaded: corosync configuration map access [0]
Jan 04 15:01:43 px1 corosync[1480]: notice  [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Jan 04 15:01:43 px1 corosync[1480]: warning [WD    ] Watchdog not enabled by configuration
Jan 04 15:01:43 px1 corosync[1480]: warning [WD    ] resource load_15min missing a recovery key.
Jan 04 15:01:43 px1 corosync[1480]: warning [WD    ] resource memory_used missing a recovery key.
Jan 04 15:01:43 px1 corosync[1480]: info    [WD    ] no resources configured.
Jan 04 15:01:43 px1 corosync[1480]: notice  [SERV  ] Service engine loaded: corosync watchdog service [7]
Jan 04 15:01:43 px1 corosync[1480]: notice  [QUORUM] Using quorum provider corosync_votequorum
Jan 04 15:01:43 px1 corosync[1480]: notice  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Jan 04 15:01:43 px1 corosync[1480]: info    [QB    ] server name: votequorum
Jan 04 15:01:43 px1 corosync[1480]: notice  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Jan 04 15:01:43 px1 corosync[1480]: info    [QB    ] server name: quorum
Jan 04 15:01:43 px1 corosync[1480]:  [QB    ] server name: cmap
Jan 04 15:01:43 px1 systemd[1]: Started Corosync Cluster Engine.
Jan 04 15:01:43 px1 corosync[1480]: notice  [TOTEM ] A new membership (192.168.0.7:18120) was formed. Members joined: 1
Jan 04 15:01:43 px1 corosync[1480]: warning [TOTEM ] Discarding JOIN message during flush, nodeid=2
Jan 04 15:01:43 px1 corosync[1480]: warning [TOTEM ] Discarding JOIN message during flush, nodeid=4
Jan 04 15:01:43 px1 corosync[1480]: warning [CPG   ] downlist left_list: 0 received
Jan 04 15:01:43 px1 corosync[1480]: notice  [QUORUM] Members[1]: 1
Jan 04 15:01:43 px1 corosync[1480]: notice  [MAIN  ] Completed service synchronization, ready to provide service.
Jan 04 15:01:43 px1 corosync[1480]:  [SERV  ] Service engine loaded: corosync configuration service [1]
Jan 04 15:01:43 px1 corosync[1480]:  [QB    ] server name: cfg
Jan 04 15:01:43 px1 corosync[1480]:  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jan 04 15:01:43 px1 corosync[1480]:  [QB    ] server name: cpg
Jan 04 15:01:43 px1 corosync[1480]:  [SERV  ] Service engine loaded: corosync profile loading service [4]
Jan 04 15:01:43 px1 corosync[1480]:  [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Jan 04 15:01:43 px1 corosync[1480]:  [WD    ] Watchdog not enabled by configuration
Jan 04 15:01:43 px1 corosync[1480]:  [WD    ] resource load_15min missing a recovery key.
Jan 04 15:01:43 px1 corosync[1480]:  [WD    ] resource memory_used missing a recovery key.
Jan 04 15:01:43 px1 corosync[1480]:  [WD    ] no resources configured.
Jan 04 15:01:43 px1 corosync[1480]:  [SERV  ] Service engine loaded: corosync watchdog service [7]
Jan 04 15:01:43 px1 corosync[1480]:  [QUORUM] Using quorum provider corosync_votequorum
Jan 04 15:01:43 px1 corosync[1480]:  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Jan 04 15:01:43 px1 corosync[1480]:  [QB    ] server name: votequorum
Jan 04 15:01:43 px1 corosync[1480]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Jan 04 15:01:43 px1 corosync[1480]:  [QB    ] server name: quorum
Jan 04 15:01:43 px1 corosync[1480]:  [TOTEM ] A new membership (192.168.0.7:18120) was formed. Members joined: 1
Jan 04 15:01:43 px1 corosync[1480]:  [TOTEM ] Discarding JOIN message during flush, nodeid=2
Jan 04 15:01:43 px1 corosync[1480]:  [TOTEM ] Discarding JOIN message during flush, nodeid=4
Jan 04 15:01:43 px1 corosync[1480]:  [CPG   ] downlist left_list: 0 received
Jan 04 15:01:43 px1 corosync[1480]:  [QUORUM] Members[1]: 1
Jan 04 15:01:43 px1 corosync[1480]:  [MAIN  ] Completed service synchronization, ready to provide service.
Jan 04 15:01:43 px1 corosync[1480]: notice  [TOTEM ] A new membership (192.168.0.7:18124) was formed. Members joined: 2 4
Jan 04 15:01:43 px1 corosync[1480]:  [TOTEM ] A new membership (192.168.0.7:18124) was formed. Members joined: 2 4
Jan 04 15:01:43 px1 corosync[1480]: warning [CPG   ] downlist left_list: 0 received
Jan 04 15:01:43 px1 corosync[1480]:  [CPG   ] downlist left_list: 0 received
Jan 04 15:01:43 px1 corosync[1480]: warning [CPG   ] downlist left_list: 0 received
Jan 04 15:01:43 px1 corosync[1480]:  [CPG   ] downlist left_list: 0 received
Jan 04 15:01:43 px1 corosync[1480]: warning [CPG   ] downlist left_list: 0 received
Jan 04 15:01:43 px1 corosync[1480]:  [CPG   ] downlist left_list: 0 received
Jan 04 15:01:43 px1 corosync[1480]: notice  [QUORUM] This node is within the primary component and will provide service.
Jan 04 15:01:43 px1 corosync[1480]: notice  [QUORUM] Members[3]: 1 2 4
Jan 04 15:01:43 px1 corosync[1480]: notice  [MAIN  ] Completed service synchronization, ready to provide service.
Jan 04 15:01:43 px1 corosync[1480]:  [QUORUM] This node is within the primary component and will provide service.
Jan 04 15:01:43 px1 corosync[1480]:  [QUORUM] Members[3]: 1 2 4
Jan 04 15:01:43 px1 corosync[1480]:  [MAIN  ] Completed service synchronization, ready to provide service.

px3:
Code:
-- Reboot --
Jan 04 15:03:33 px3 systemd[1]: Starting Corosync Cluster Engine...
Jan 04 15:03:33 px3 corosync[1193]:  [MAIN  ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Jan 04 15:03:33 px3 corosync[1193]: notice  [MAIN  ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Jan 04 15:03:33 px3 corosync[1193]:  [MAIN  ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bindnow
Jan 04 15:03:33 px3 corosync[1193]: info    [MAIN  ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bindnow
Jan 04 15:03:34 px3 corosync[1193]:  [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Jan 04 15:03:34 px3 corosync[1193]: warning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Jan 04 15:03:34 px3 corosync[1193]: warning [MAIN  ] Please migrate config file to nodelist.
Jan 04 15:03:34 px3 corosync[1193]:  [MAIN  ] Please migrate config file to nodelist.
Jan 04 15:03:34 px3 corosync[1193]: notice  [TOTEM ] Initializing transport (UDP/IP Multicast).
Jan 04 15:03:34 px3 corosync[1193]: notice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Jan 04 15:03:34 px3 corosync[1193]:  [TOTEM ] Initializing transport (UDP/IP Multicast).
Jan 04 15:03:34 px3 corosync[1193]:  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Jan 04 15:03:34 px3 corosync[1193]: notice  [TOTEM ] The network interface [10.8.5.10] is now up.
Jan 04 15:03:34 px3 corosync[1193]: notice  [SERV  ] Service engine loaded: corosync configuration map access [0]
Jan 04 15:03:34 px3 corosync[1193]: info    [QB    ] server name: cmap
Jan 04 15:03:34 px3 corosync[1193]: notice  [SERV  ] Service engine loaded: corosync configuration service [1]
Jan 04 15:03:34 px3 corosync[1193]: info    [QB    ] server name: cfg
Jan 04 15:03:34 px3 corosync[1193]:  [TOTEM ] The network interface [10.8.5.10] is now up.
Jan 04 15:03:34 px3 corosync[1193]: notice  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jan 04 15:03:34 px3 corosync[1193]: info    [QB    ] server name: cpg
Jan 04 15:03:34 px3 corosync[1193]: notice  [SERV  ] Service engine loaded: corosync profile loading service [4]
Jan 04 15:03:34 px3 corosync[1193]: notice  [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Jan 04 15:03:34 px3 corosync[1193]: warning [WD    ] Watchdog not enabled by configuration
Jan 04 15:03:34 px3 corosync[1193]: warning [WD    ] resource load_15min missing a recovery key.
Jan 04 15:03:34 px3 corosync[1193]: warning [WD    ] resource memory_used missing a recovery key.
Jan 04 15:03:34 px3 corosync[1193]: info    [WD    ] no resources configured.
Jan 04 15:03:34 px3 corosync[1193]: notice  [SERV  ] Service engine loaded: corosync watchdog service [7]
Jan 04 15:03:34 px3 corosync[1193]: notice  [QUORUM] Using quorum provider corosync_votequorum
Jan 04 15:03:34 px3 corosync[1193]: notice  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Jan 04 15:03:34 px3 corosync[1193]: info    [QB    ] server name: votequorum
Jan 04 15:03:34 px3 corosync[1193]: notice  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Jan 04 15:03:34 px3 corosync[1193]: info    [QB    ] server name: quorum
Jan 04 15:03:34 px3 corosync[1193]:  [SERV  ] Service engine loaded: corosync configuration map access [0]
Jan 04 15:03:34 px3 corosync[1193]: notice  [TOTEM ] A new membership (10.8.5.10:28) was formed. Members joined: 3
Jan 04 15:03:34 px3 corosync[1193]: warning [CPG   ] downlist left_list: 0 received
Jan 04 15:03:34 px3 corosync[1193]: notice  [QUORUM] Members[1]: 3
Jan 04 15:03:34 px3 corosync[1193]: notice  [MAIN  ] Completed service synchronization, ready to provide service.
Jan 04 15:03:34 px3 systemd[1]: Started Corosync Cluster Engine.
Jan 04 15:03:34 px3 corosync[1193]:  [QB    ] server name: cmap
Jan 04 15:03:34 px3 corosync[1193]:  [SERV  ] Service engine loaded: corosync configuration service [1]
Jan 04 15:03:34 px3 corosync[1193]:  [QB    ] server name: cfg
Jan 04 15:03:34 px3 corosync[1193]:  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jan 04 15:03:34 px3 corosync[1193]:  [QB    ] server name: cpg
Jan 04 15:03:34 px3 corosync[1193]:  [SERV  ] Service engine loaded: corosync profile loading service [4]
Jan 04 15:03:34 px3 corosync[1193]:  [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Jan 04 15:03:34 px3 corosync[1193]:  [WD    ] Watchdog not enabled by configuration
Jan 04 15:03:34 px3 corosync[1193]:  [WD    ] resource load_15min missing a recovery key.
Jan 04 15:03:34 px3 corosync[1193]:  [WD    ] resource memory_used missing a recovery key.
Jan 04 15:03:34 px3 corosync[1193]:  [WD    ] no resources configured.
Jan 04 15:03:34 px3 corosync[1193]:  [SERV  ] Service engine loaded: corosync watchdog service [7]
Jan 04 15:03:34 px3 corosync[1193]:  [QUORUM] Using quorum provider corosync_votequorum
Jan 04 15:03:34 px3 corosync[1193]:  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Jan 04 15:03:34 px3 corosync[1193]:  [QB    ] server name: votequorum
Jan 04 15:03:34 px3 corosync[1193]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Jan 04 15:03:34 px3 corosync[1193]:  [QB    ] server name: quorum
Jan 04 15:03:34 px3 corosync[1193]:  [TOTEM ] A new membership (10.8.5.10:28) was formed. Members joined: 3
Jan 04 15:03:34 px3 corosync[1193]:  [CPG   ] downlist left_list: 0 received
Jan 04 15:03:34 px3 corosync[1193]:  [QUORUM] Members[1]: 3
Jan 04 15:03:34 px3 corosync[1193]:  [MAIN  ] Completed service synchronization, ready to provide service.

In a later network restructuring I am planning a separate server subnet.
If this does not work out, I will have to move px3 back into the old network.
 
Hm - at least corosync starts on px3!

Then I would check whether the multicast packets make it across the router at all - https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_cluster_network
Run the `omping` commands described there on all cluster servers at the same time - and please post the output.
As far as I know, quite a bit has to be configured on the switch/router for multicast to be routed beyond a single network.
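From memory the linked section suggests invocations along these lines, started simultaneously on all nodes (sketch - please check the guide for the exact commands; the hostnames are the ones from this thread):
Code:
omping -c 10000 -i 0.001 -F -q px1 px2 px3 px4
omping -c 600 -i 1 -q px1 px2 px3 px4     # slower run over roughly 10 minutes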
 
Thank you very much, that is it:

Code:
omping -c 10000 -i 0.001 -F -q px1 px3
px3 : waiting for response msg
px3 : joined (S,G) = (*, 232.43.211.234), pinging
px3 : given amount of query messages was sent

px3 :   unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.166/0.275/1.085/0.030
px3 : multicast, xmt/rcv/%loss = 10000/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000

In between there are 3 managed TP-Link switches. I have now enabled multicast on them and can see the packets.
I think the remaining issue is the pfSense in between; there I have rules with IPv4 * and protocol any in both directions.
That still seems to be too little, and it is not clear to me whether I also need avahi or igmp-proxy here.

I think I will solve it differently.
Node-3 goes back into the old network, then I will remove this node from the cluster and run it standalone.
That is probably the better solution.
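For what it's worth, removing a node is documented in the admin guide ("Remove a Cluster Node"); the core step, run on one of the remaining quorate nodes once px3 is offline, is roughly this (sketch, see the guide for the full procedure):
Code:
pvecm nodes       # check current membership first
pvecm delnode px3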
 
That definitely sounds cleaner.
I have not yet seen a corosync ring spanning multiple networks, and it is not a particularly widespread (or tested) setup.
 
