OK, so I am trying to do some network wizardry and failing.
I currently have a 3-node cluster with the following configuration on each node:

eth0 carries the public IP address
eth1 is the cluster network
eth2 is the Ceph network

eth0-3 are all physical NICs attached to a managed switch, with a separate VLAN for each network.

I have IP forwarding enabled with NAT on eth0 for my container.
On node1 I have an OpenVPN server set up and running, and I have changed the network that pveproxy listens on to the private network, which keeps the Proxmox web interface off the outside world. I can currently connect to and manage my cluster through this OpenVPN connection. pveproxy is disabled on the other two nodes; node1 is the management node.
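For reference, the pveproxy restriction is just the listen address in /etc/default/pveproxy (the 10.8.0.1 address here is an assumption for node1's VPN-side IP; adjust as needed):

Code:
# /etc/default/pveproxy on node1 - bind the web UI to the private/VPN address only
# (assumption: 10.8.0.1 is node1's address on the OpenVPN network)
LISTEN_IP="10.8.0.1"
# then restart the proxy: systemctl restart pveproxy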
The issue is that I also want my containers to be on the same OpenVPN subnet. I could do this by running an OpenVPN client in each container, but that is not optimal. What I would like to do is bridge the LXC containers onto the OpenVPN network.
Current network setup on the master node (with OpenVPN server running):
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

# cluster network
auto eno2
iface eno2 inet static
        address 10.0.1.1
        netmask 255.255.255.0

# Ceph network
auto eno3
iface eno3 inet static
        address 10.0.2.1
        netmask 255.255.255.0

iface eno4 inet manual

# public-facing bridge
auto vmbr0
iface vmbr0 inet static
        address public-ip
        netmask 255.255.255.240
        gateway public-ip
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

# private container bridge, NATed out through vmbr0
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
Currently my LXC container is attached to the vmbr1 interface, has the IP 10.10.10.2, and can access the internet via NAT and IP forwarding.
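For completeness, the container's network line in its /etc/pve/lxc/<vmid>.conf looks something like this (VMID and MAC address omitted):

Code:
net0: name=eth0,bridge=vmbr1,ip=10.10.10.2/24,gw=10.10.10.1,type=veth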
How do I bridge vmbr1 to tun0 so that 10.10.10.0/24 is also reachable from the VPN network?

I did have this working with a third bridge, as follows:
Code:
auto vmbr2
iface vmbr2 inet static
        address 10.8.0.100
        netmask 255.255.255.0
        bridge_ports tun0
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.8.0.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.8.0.0/24' -o vmbr0 -j MASQUERADE
But this only worked when I had OpenVPN configured as the internet gateway for the clients as well, which I had to change because I didn't want huge overage bills from the data centre due to employees forgetting to disconnect from the VPN.

I am sure there is something very simple that I am missing, but I can't seem to get it working the way I'd like.

I don't mind if the containers end up on the 10.8.0.0/24 network rather than on their own 10.10.10.0/24 network (although I would prefer they had their own network, just because it is tidier).

But if I try to bridge vmbr1 to tun0, I just get the following error from networking:
Code:
Jul 31 16:12:16 node1 systemd[1]: Starting Raise network interfaces...
Jul 31 16:12:17 node1 ifup[426126]: Waiting for vmbr0 to get ready (MAXWAIT is 2 seconds).
Jul 31 16:12:17 node1 ifup[426126]: can't add tun0 to bridge vmbr1: Invalid argument
Jul 31 16:12:17 node1 ifup[426126]: Waiting for vmbr1 to get ready (MAXWAIT is 2 seconds).
Jul 31 16:12:17 node1 ifup[426126]: Waiting for vmbr2 to get ready (MAXWAIT is 2 seconds).
Jul 31 16:12:18 node1 systemd[1]: Started Raise network interfaces.
Although it doesn't tell me what the invalid argument actually is (I now get the same error if I try to bridge vmbr2 to tun0 as well, just reported against vmbr2 rather than vmbr1, obviously).

Any help would be very welcome. Given that the OpenVPN server is running on the same node, it should be possible to bridge any other network on that node to the 10.8.0.0/24 OpenVPN network, right?
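My suspicion (and it is only a suspicion) is that tun0 is a layer-3 device and a Linux bridge will only take layer-2 ports, which would explain the kernel refusing with "Invalid argument" and would mean the bridged approach needs OpenVPN in tap mode instead. A rough, untested sketch of what I think that would look like (cert/key/dh lines left out):

Code:
# /etc/openvpn/server.conf - bridged (tap) variant, untested sketch
dev tap0
# server-bridge <gateway> <netmask> <pool start> <pool end>
server-bridge 10.10.10.1 255.255.255.0 10.10.10.50 10.10.10.100

# and in /etc/network/interfaces, vmbr1 would take the tap device as a port:
#       bridge_ports tap0
# (tap0 has to exist before ifup runs, e.g. created by OpenVPN or "ip tuntap add dev tap0 mode tap")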
A bonus would be if there were some way to make the other two nodes also reach the 10.8.0.0/24 network on node1 without having to run an OpenVPN client on each of them. Given that the cluster runs on its own 10.0.1.0/24 network, I think this should be possible if I can bridge 10.8.0.0/24 to 10.0.1.0/24 on node1, right? Although then presumably all that traffic would go through the cluster VLAN, which rather defeats the point of having a separate cluster VLAN.
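If plain routing between the two subnets would do instead of true bridging, I imagine something along these lines would work (untested; assumes node1 forwards between the cluster network and tun0):

Code:
# on node2 and node3: reach the VPN subnet via node1's cluster-network address
ip route add 10.8.0.0/24 via 10.0.1.1

# on node1's OpenVPN server config: give VPN clients the return route
# push "route 10.0.1.0 255.255.255.0"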
Thanks in advance.