hi,
I'm trying to set up a Proxmox v9.0.11 3-node mesh cluster.
The nodes are directly attached to each other.
Each node has 8 NICs: 2x dedicated for Ceph, 2x dedicated for Corosync, 1x for management, 3x unused.
each node is connected like this:
for management:
node 1 <--> switch
node 2 <--> switch
node 3 <--> switch
for corosync:
node 1 <--> node 2
node 2 <--> node 3
node 3 <--> node 1
for ceph:
node 1 <--> node 2
node 2 <--> node 3
node 3 <--> node 1
IP configuration:
management (WAN):
node1 management nic: 192.168.11.101 (gw. 192.168.11.1)
node2 management nic: 192.168.11.102 (gw. 192.168.11.1)
node3 management nic: 192.168.11.103 (gw. 192.168.11.1)
corosync:
node1: linux-bridge (includes corosyncNIC1 & corosyncNIC2): 192.168.22.101
node2: linux-bridge (includes corosyncNIC1 & corosyncNIC2): 192.168.22.102
node3: linux-bridge (includes corosyncNIC1 & corosyncNIC2): 192.168.22.103
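For clarity, my Corosync bridge config is close to the sketch below (the NIC names "eno2"/"eno3" and the bridge name "vmbr1" are placeholders for my actual interface names; shown for node1):

```
# /etc/network/interfaces (excerpt, node1) - sketch with placeholder names
auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto vmbr1
iface vmbr1 inet static
        address 192.168.22.101/24
        bridge-ports eno2 eno3
        bridge-stp off
        bridge-fd 0
```

node2 and node3 look the same except for the address (.102 / .103).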
case:
I create the cluster on node1 and pick the Linux bridge for Corosync (192.168.22.101) - this works.
Then I join node2 and pick the Linux bridge for Corosync as the interface (192.168.22.102) - this works.
Then I join node3, also picking the Linux bridge for Corosync as the interface (192.168.22.103) - this is when the looping starts, and one of the machines is no longer reachable.
The log is spammed with:
"received packet on <corosync NIC1 name> with own address as source address" (addr: <corosync NIC2 MAC>, vlan:0)
"received packet on <corosync NIC2 name> with own address as source address" (addr: <corosync NIC2 MAC>, vlan:0)
There is no MAC or IP conflict; the nodes are completely fresh.
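For reference, this is roughly how I checked for duplicates on each node (standard iproute2 commands; interface names vary per node):

```shell
# list all interfaces with their state and MAC address in brief form
ip -br link show
# show which NICs are enslaved to a bridge and their forwarding state
bridge link show
# list the configured IP addresses in brief form
ip -br addr show
```

The MACs and IPs are unique across all three nodes.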
any ideas?
Yes, I've read the meshed configuration document (which is a great help, btw), but it didn't really help much regarding IP addresses and whether to configure them directly on the NIC or on a Linux bridge (or a bond, for that matter - the Linux bridge worked better... or so I thought).
tia
regards,