Hello everyone,
I am currently trying to set up a 2-node cluster on Hetzner servers. I am using vSwitches to create private networks between the nodes (VLAN IDs 4000, 4001). Corosync and the cluster datasync both work over these VLANs. Each node has only a single network card. When I run a performance test with iperf3 directly between the nodes, the throughput is close to the expected 1 Gbit/s. Unfortunately, things look different when I start two VMs (one per node) and run iperf3 between them: ping works in both directions, but the iperf3 traffic seems to be blocked after the first packet. The firewalls inside the VMs are switched off, and no firewall rules are set on the nodes yet. See the attached image. Does anyone have an idea what is going wrong here? Thanks in advance for your help.
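The symptom (ping works, but TCP stalls after the first packets) matches an MTU black hole: the vSwitch VLANs are capped at MTU 1400, while guests usually default to 1500. A quick way to check is DF-bit pings between the VMs; this sketch just computes the payload sizes to try (the peer IP is a placeholder, adjust to your setup):

```shell
# An ICMP echo adds 8 bytes of ICMP header on top of a 20-byte IPv4
# header, so the ping payload is MTU - 28.
payload_1400=$((1400 - 28))   # fits a 1400-byte path
payload_1500=$((1500 - 28))   # only fits a full 1500-byte path
echo "ping -M do -s ${payload_1400} -c 3 <peer-VM-IP>   # should succeed at MTU 1400"
echo "ping -M do -s ${payload_1500} -c 3 <peer-VM-IP>   # fails if the path is capped at 1400"
```

If the larger DF-bit ping fails between the VMs while the smaller one succeeds, the stall is an MTU problem rather than a firewall problem.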
Maybe I should add that I also tried to roll out VXLAN between the nodes. Unfortunately, that does not work either: the rollout gets stuck, and I have not found a solution yet. It seems that /etc/network/interfaces.d/sdn gets created on the first node but not on the second. I tried creating it manually, but that did not help either.
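Independent of the stuck rollout, note that VXLAN adds its own encapsulation on top of the already-reduced vSwitch MTU, so VMs inside a VXLAN zone need an even smaller MTU. A quick sanity check of the numbers (assuming an IPv4 underlay):

```shell
# VXLAN overhead per frame: 20 bytes outer IPv4 + 8 UDP + 8 VXLAN
# + 14 inner Ethernet = 50 bytes.
underlay_mtu=1400     # vSwitch VLAN MTU
vxlan_overhead=50
inner_mtu=$((underlay_mtu - vxlan_overhead))
echo "maximum guest MTU inside the VXLAN zone: ${inner_mtu}"
```

So even once the rollout works, guests on the VXLAN network should be capped at 1350, not 1400.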
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 116.202.80.230/26
        gateway 116.202.80.193
#physical network interface

auto eno1.4000
iface eno1.4000 inet manual
        mtu 1400
#corosync interface

auto eno1.4001
iface eno1.4001 inet manual
        mtu 1400
#cluster datasync interface

auto eno1.4002
iface eno1.4002 inet manual
        mtu 1400
#prod1 interface

auto vmbr0
iface vmbr0 inet static
        address 10.1.0.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.1.0.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.1.0.0/24' -o eno1 -j MASQUERADE
        post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
#masqueraded internet bridge

auto vmbr1
iface vmbr1 inet static
        address 192.168.1.3/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#lan network

auto vmbr4000
iface vmbr4000 inet static
        address 10.0.0.3/29
        bridge-ports eno1.4000
        bridge-stp off
        bridge-fd 0
        mtu 1400
#corosync bridge

auto vmbr4001
iface vmbr4001 inet static
        address 10.0.1.3/29
        bridge-ports eno1.4001
        bridge-stp off
        bridge-fd 0
        mtu 1400
#cluster datasync bridge

auto vmbr4002
iface vmbr4002 inet static
        address 10.0.2.3/24
        bridge-ports eno1.4002
        bridge-stp off
        bridge-fd 0
        mtu 1400
#prod1 bridge

source /etc/network/interfaces.d/*
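In case the iperf3 stall is indeed an MTU issue: the VLAN bridges above are capped at MTU 1400, but guest NICs default to 1500, so large segments from the VMs get dropped on the vSwitch path. One fix is to cap the MTU inside each VM as well, e.g. in the guest's /etc/network/interfaces (interface name and address here are examples, adjust to your VM):

```
auto ens18
iface ens18 inet static
        address 10.0.2.10/24
        mtu 1400
#guest interface capped to the vSwitch MTU
```

Alternatively, if your Proxmox version supports it, an `mtu=` option on the VM's virtio network device achieves the same without touching the guest.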