I recently built a three-node cluster that uses Mellanox ConnectX-3 cards for the cluster network and a separate 1 GbE card for the LAN and management network. I've tracked the problem down (I think) to the routes. Whenever I create a new CT or start a container (currently testing with only one CT running across the three nodes), the container's veth interface picks up a default route that blocks traffic to the LAN. Traffic from the container itself seems fine, since I can reach the internet from inside it, but I can't have working LAN access from the host and the container at the same time.
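To make that concrete, this is roughly the check I've been using once the CT is up (VMID 100 matches the veth100i0 interface in the routes below, and 192.168.1.1 is my LAN gateway):
Code:
# from the host: the LAN gateway stops responding
ping -c 2 192.168.1.1
# from inside the CT: still works
pct exec 100 -- ping -c 2 192.168.1.1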
An example of the routing table with the CT running:
Code:
0.0.0.0 dev veth100i0 scope link
0.0.0.0 dev fwln100i0 scope link
0.0.0.0 dev fwpr100p0 scope link
0.0.0.0 dev enp0s31f6 scope link
0.0.0.0 dev bond0 scope link
default dev veth100i0 scope link
default dev fwln100i0 scope link
default dev fwpr100p0 scope link
default dev enp0s31f6 scope link
default via 192.168.1.1 dev vmbr0 proto kernel onlink
10.15.15.0/24 dev bond0 proto kernel scope link src 10.15.15.50
169.254.0.0/16 dev enp0s31f6 proto kernel scope link src 169.254.227.93
169.254.0.0/16 dev bond0 proto kernel scope link src 169.254.108.23
169.254.0.0/16 dev fwpr100p0 proto kernel scope link src 169.254.172.218
169.254.0.0/16 dev fwln100i0 proto kernel scope link src 169.254.140.160
169.254.0.0/16 dev veth100i0 proto kernel scope link src 169.254.190.153
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.100
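My reading (possibly wrong) is that the extra link-scope default routes on veth100i0 / fwln100i0 / fwpr100p0 are what end up shadowing the proper default via 192.168.1.1. To check which route the kernel actually picks I've just been running:
Code:
# route chosen for an off-LAN destination (8.8.8.8 purely as a test address)
ip route get 8.8.8.8
# route chosen for the LAN gateway
ip route get 192.168.1.1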
with the interfaces defined as follows:
Code:
auto lo
iface lo inet loopback

iface enp0s31f6 inet manual

auto enp1s0
iface enp1s0 inet manual
    mtu 9000

auto enp1s0d1
iface enp1s0d1 inet manual
    mtu 9000

iface bond0 inet static
    address 10.15.15.50/24
    bond-slaves enp1s0 enp1s0d1
    bond-miimon 100
    bond-mode broadcast
    mtu 9000
    metric 200

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports enp0s31f6
    bridge-stp off
    bridge-fd 0
    metric 0

source /etc/network/interfaces.d/*
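For what it's worth, reproducing the problem takes nothing special; starting the one test CT (VMID 100, the same one behind veth100i0 above) is enough to bring the stray routes back:
Code:
pct start 100
ip route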
I've tried using metrics to avoid this issue but with no luck. A
Code:
systemctl restart networking
followed by
Code:
ifup bond0
will provide network on both the LAN and the cluster, but the moment a veth is created the routes go back to blocking access to the LAN.
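If metrics are simply the wrong tool here, I assume the next thing to try would be pinning the routes with post-up hooks on vmbr0, something along these lines (untested sketch, metric values arbitrary, not currently in my config):
Code:
# hypothetical additions to the vmbr0 stanza in /etc/network/interfaces
post-up ip route replace default via 192.168.1.1 dev vmbr0 metric 100
post-up ip route replace 10.15.15.0/24 dev bond0 metric 200
But I'd rather understand why the veth and firewall bridge interfaces show up as default routes in the first place.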
Any help would be greatly appreciated.