Hi All
I've been reading through threads and googling and cannot seem to get this configuration right so was hoping someone could point me to what I am missing.
I have two host machines on an internal network, and each host has two NICs installed.
I have configured the Hosts as follows:
Host 1:
Code:
auto lo
iface lo inet loopback
iface eno1 inet manual
auto eno2
iface eno2 inet static
address 10.10.1.1/24
iface enp5s0f0 inet manual
iface enp5s0f1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.1.190/24
gateway 192.168.1.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
Host 2:
Code:
auto lo
iface lo inet loopback
iface eno1 inet manual
auto eno2
iface eno2 inet static
address 10.10.1.2/24
iface enp5s0f0 inet manual
iface enp5s0f1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.1.191/24
gateway 192.168.1.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
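Since both bridges are VLAN-aware, I assume the port and VLAN membership can be double-checked on each host with the iproute2 bridge tool (just a sanity check on my part), e.g.:
Code:
# on each host: confirm eno1 is a port of vmbr0 and which VLANs the ports carry
bridge link show
bridge vlan show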
The cluster uses the 10.10.1.1 and 10.10.1.2 IPs on eno2 for corosync; this is all working perfectly.
With the default configuration above, I was initially not able to reach any other IP on the internal network from either host machine until a static route was added to the host, for example:
Code:
ip route add 192.168.1.191/32 via 192.168.1.1 dev vmbr0
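As I understand it, the 192.168.1.190/24 address on vmbr0 should already create an on-link route for the whole subnet, so needing a per-IP route via the router seems odd in itself; for reference, this is what I'd expect to see in the routing table on each host:
Code:
# on each host: the /24 on vmbr0 should appear as a kernel "scope link" route
ip route show
# e.g. on Host 1 I'd expect a line like:
#   192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.190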
Without this static route, I was still able to reach the host from my laptop, which is on the 192.168.1.0/24 network (Windows laptop on IP 192.168.1.9).
Now, I have a number of VMs on each host. Each VM is configured with its own static IP (mostly Ubuntu VMs using netplan) within the same subnet,
e.g. 192.168.1.127/24.
All machines are configured to use the same gateway, 192.168.1.1, which is a physical router.
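For reference, each VM's netplan is roughly of this shape (interface name and address differ per VM; ens118 is just an example here):
Code:
network:
  version: 2
  ethernets:
    ens118:
      addresses:
        - 192.168.1.127/24
      routes:
        - to: default
          via: 192.168.1.1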
ALL VMs that are hosted on a single host can communicate with each other, for example:
- VM101 and VM102 on HOST 1 can ping each other's IPs and their own host's IP
- VM103 and VM104 on HOST 2 can ping each other's IPs and their own host's IP
From my laptop, I can reach ALL VMs on both hosts and both hosts themselves.
However, I cannot get VMs on Host 1 to reach VMs on Host 2 and vice versa: VMs on Host 2 cannot ping Host 1, and VMs on Host 1 cannot ping Host 2. For example:
- VM101 ping VM103 returns "Destination Host Unreachable"
- VM101 ping HOST 2 returns "Destination Host Unreachable"
etc..
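"Destination Host Unreachable" for an address inside the same /24 presumably means ARP for the target never resolves (the traffic stays on-link, so nothing is being routed); the way I'd confirm that is something like:
Code:
# on VM101: ping a VM on the other host, then check whether ARP resolved
ping -c 3 <IP of VM103>
ip neigh show
# on Host 2: watch whether the ARP requests from VM101 ever arrive on the bridge
tcpdump -ni vmbr0 arp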
This is despite the netplan config within each VM specifying a default route via the gateway:
Code:
routes:
  - to: default
    via: 192.168.1.1
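Although, as far as I understand it, the default route shouldn't even come into play here: for a destination inside the same /24 the kernel uses the on-link route, which from inside a VM can be seen with something like the following (192.168.1.101 is just an example target, and the interface name will differ):
Code:
# for a same-subnet destination this should show "dev <nic>" with no "via",
# i.e. traffic goes straight onto the bridge rather than through the gateway
ip route get 192.168.1.101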
I have read all kinds of threads, but most of them configure the VMs on a different subnet to the host network.
My use case is an internal development network running Kubernetes and other third-party services (DB servers, message queuing, etc.), and I am aiming to have all machines on the internal network have unrestricted access to each other.
I would really appreciate any guidance, as I've tried so many different options now (moving the gateway to eno1, moving the gateway and IP to eno1, setting up a second bridge with an alternate IP, etc.) and still cannot get the VMs to communicate outside of their host.
Now, I could go and set up a static route on each VM to each of the IPs on the other host via the default gateway, as follows:
Code:
ip route add 192.168.1.101/32 via 192.168.1.1 dev ens118
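I'm using a non-persistent example here; I'm aware these routes would need to be persisted, which in netplan I assume would look something like this (one route per remote IP, on every VM):
Code:
network:
  version: 2
  ethernets:
    ens118:
      routes:
        - to: 192.168.1.101/32
          via: 192.168.1.1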
However, this complicates automated container deployments via build pipelines. Before I go down the road of having to manage all the static routes on each VM/container, I would really like to solve this issue so that ANY VM on ANY host, all on the same /24 subnet, can see and communicate with each other.