Separation of networks

kcallis

Active Member
Apr 5, 2018
I am getting ready to re-install Proxmox 5.2 on my laptop. When I configure the management interface, I have my Cat 5 cable plugged into my switch, and the IP address sits on my management VLAN (in this case 192.168.5.0/24). I have also configured the switch port as an access port with additional tagged VLANs, since some of my LXC and KVM images are going to be attached to other VLANs.

So how do I set up my networking so that interface vmbr0 is on my 192.168.5.0/24 network, while my other VLANs still have access to the appropriate networks? For instance, I want to create a KVM image (say, Ubuntu Server 18.04) that lives on VLAN20_VPN with an IP address of 192.168.20.100. What do I need to set up so that I can point guests at my various VLANs?

Is this the time to make use of Open vSwitch? Any pointers would be greatly appreciated.
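
(For anyone weighing the options: a single VLAN-aware Linux bridge is one alternative to Open vSwitch or per-VLAN bridges. A sketch, assuming ifupdown and the interface/address from this post, not a config I have tested here — guests then pick their VLAN via the NIC's tag rather than via the bridge name, and the host gets its management IP from a tagged sub-interface of the bridge:)

Code:
```text
# /etc/network/interfaces -- sketch only
auto lo
iface lo inet loopback

iface enp0s25 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge_ports enp0s25
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes

# host management IP on VLAN 5, via the bridge
auto vmbr0.5
iface vmbr0.5 inet static
        address 192.168.5.250
        netmask 255.255.255.0
        gateway 192.168.5.1
```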
 
Thanks for the response! I have read this page, but it only creates more questions.

The following is my network configuration on my router:

VLAN05_MGMT ---> 192.168.5.0/24 # Management interface
VLAN10_CLRNET ---> 192.168.10.0/24 # Local LAN
VLAN15_GUEST ---> 192.168.15.0/24 # WiFi Guest access
VLAN20_VPN ---> 192.168.20.0/24 # VPN access
VLAN25_VOIP ---> 192.168.25.0/24 # VOIP server and end-points
VLAN30_VHOSTS ---> 192.168.30.0/24 # KVM/LXC/Docker images under Proxmox

When I created my Proxmox host, I gave it the IP address 192.168.5.250, which is on my VLAN05_MGMT segment.

Code:
#/etc/network/interfaces

auto lo
iface lo inet loopback

iface enp0s25 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.5.250
        netmask 255.255.255.0
        gateway 192.168.5.1
        bridge_ports enp0s25
        bridge_stp off
        bridge_fd 0

I would like my images to be routed to my VLAN30_VHOSTS network (although at times I would like to spin up an image on another VLAN, for instance VLAN20_VPN). What do I need to do to my /etc/network/interfaces file so that when I spin up an image, it lands on the appropriate VLAN?
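
(As far as I understand it, once a bridge exists per VLAN, what puts a guest on a given network is simply which bridge its virtual NIC is attached to — roughly like this in the VM's config file. Hypothetical VM ID 100 and made-up MAC address; the bridge name assumes a per-VLAN bridge such as vmbr0v20 has been defined:)

Code:
```text
# /etc/pve/qemu-server/100.conf -- just the NIC line
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0v20
```

The same thing can be done from the CLI with something like `qm set 100 --net0 virtio,bridge=vmbr0v20`, or by picking the bridge in the GUI's network device dialog.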

Looking at the section on "VLAN on the Host", it seems I need to set up /etc/network/interfaces to mirror my network topology:

Code:
#/etc/network/interfaces

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual
iface eno1.10 inet manual
iface eno1.20 inet manual
iface eno1.25 inet manual
iface eno1.30 inet manual


auto vmbr0v5
iface vmbr0v5 inet static
       address  192.168.5.250
       netmask 255.255.255.0
       gateway  192.168.5.1
       bridge_ports eno1.5
       bridge_stp off
       bridge_fd 0

auto vmbr0v10
iface vmbr0v10 inet static
       address  192.168.10.250
       netmask 255.255.255.0
       gateway  192.168.10.1
       bridge_ports eno1.10
       bridge_stp off
       bridge_fd 0

auto vmbr0v20
iface vmbr0v20 inet static
       address  192.168.20.250
       netmask 255.255.255.0
       gateway  192.168.20.1
       bridge_ports eno1.20
       bridge_stp off
       bridge_fd 0

auto vmbr0v25
iface vmbr0v25 inet static
       address  192.168.25.250
       netmask 255.255.255.0
       gateway  192.168.25.1
       bridge_ports eno1.25
       bridge_stp off
       bridge_fd 0

auto vmbr0v30
iface vmbr0v30 inet static
       address  192.168.30.250
       netmask 255.255.255.0
       gateway  192.168.30.1
       bridge_ports eno1.30
       bridge_stp off
       bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
       bridge_ports eno1
       bridge_stp off
       bridge_fd 0

Using this setup, when I reboot my host, Proxmox will still be at 192.168.5.250, and if I create a KVM image with an IP address of 192.168.20.100, there should not be any issues connecting to the VPN network. Or will there still be a need to mess around with iptables, etc.?
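
(One caveat about the file above, as far as I know: Debian's ifupdown only installs a single default route, so a `gateway` line in every bridge stanza means whichever interface happens to come up last wins. A sketch of a variant that keeps the host's default route pinned to the management VLAN — only vmbr0v5 carries a gateway, and bridges used solely by guests do not need an address at all:)

Code:
```text
auto vmbr0v5
iface vmbr0v5 inet static
       address 192.168.5.250
       netmask 255.255.255.0
       gateway 192.168.5.1
       bridge_ports eno1.5
       bridge_stp off
       bridge_fd 0

# guest-only VLAN bridge: no host address, no gateway
auto vmbr0v20
iface vmbr0v20 inet manual
       bridge_ports eno1.20
       bridge_stp off
       bridge_fd 0
```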
 
I tried to be creative and make some changes to the example, but it was not helpful. First I changed the interface name from enp0s25 to eth0, then rebuilt grub.cfg so the changed interface name would stick. At that point, I replaced the original /etc/network/interfaces with the modified file reflecting the VLANs. After a reboot, the first issue is that the system can't start the network interfaces. Sure enough, when I log in, I am not able to ping any hosts.

I decided to remove the VLAN05_MGMT entry and just use the original static IP address (which is still connected to the VLAN05_MGMT network). When I reboot now, I am able to log in and to ping, but there are now some strange differences.

The default setup:

Code:
root@pve:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp0s25 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.5.250
        netmask 255.255.255.0
        gateway 192.168.5.1
        bridge_ports enp0s25
        bridge_stp off
        bridge_fd 0

root@pve:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether a4:5d:36:9a:00:cc brd ff:ff:ff:ff:ff:ff
3: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a4:4e:31:b6:01:48 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a4:5d:36:9a:00:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.250/24 brd 192.168.5.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::a65d:36ff:fe9a:cc/64 scope link
       valid_lft forever preferred_lft forever

root@pve:~# ip route list
default via 192.168.5.1 dev vmbr0 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.5.0/24 dev vmbr0 proto kernel scope link src 192.168.5.250

The default route is where I want it: 192.168.5.1, which is on my VLAN05_MGMT network. Also (although it is not shown), I am able to reach the Docker network that runs on my Proxmox host as well. The problem I have is that my KVM image is not able to attach to my VLAN20_VPN network, even though the Proxmox host itself can ping and access VLAN20_VPN with no issues:

Code:
root@pve:~# ping 192.168.20.1
PING 192.168.20.1 (192.168.20.1) 56(84) bytes of data.
64 bytes from 192.168.20.1: icmp_seq=1 ttl=64 time=0.266 ms
64 bytes from 192.168.20.1: icmp_seq=2 ttl=64 time=0.243 ms
64 bytes from 192.168.20.1: icmp_seq=3 ttl=64 time=0.247 ms
64 bytes from 192.168.20.1: icmp_seq=4 ttl=64 time=0.162 ms
64 bytes from 192.168.20.1: icmp_seq=5 ttl=64 time=0.236 ms
--- 192.168.20.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4077ms
rtt min/avg/max/mdev = 0.162/0.230/0.266/0.040 ms

As I said earlier, when I make a change to my interfaces file, although I am able to ssh into my Proxmox host, there are connectivity issues.

Code:
root@pve:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface enp0s25 inet manual
iface enp0s25.10 inet manual
iface enp0s25.20 inet manual
iface enp0s25.25 inet manual
iface enp0s25.30 inet manual


#auto vmbr0v5
#iface vmbr0v5 inet static
#       address 192.168.5.250
#       netmask 255.255.255.0
#       gateway 192.168.5.1
#       bridge_ports enp0s25.5
#       bridge_stp off
#       bridge_fd 0

auto vmbr0v10
iface vmbr0v5 inet static
       address 192.168.10.250
       netmask 255.255.255.0
       gateway 192.168.10.1
       bridge_ports enp0s25.10
       bridge_stp off
       bridge_fd 0

auto vmbr0v20
iface vmbr0v20 inet static
       address 192.168.20.250
       netmask 255.255.255.0
       gateway 192.168.20.1
       bridge_ports enp0s25.20
       bridge_stp off
       bridge_fd 0

auto vmbr0v25
iface vmbr0v25 inet static
       address 192.168.25.250
       netmask 255.255.255.0
       gateway 192.168.25.1
       bridge_ports enp0s25.25
       bridge_stp off
       bridge_fd 0

auto vmbr0v30
iface vmbr0v5 inet static
       address 192.168.30.250
       netmask 255.255.255.0
       gateway 192.168.30.1
       bridge_ports enp0s25.30
       bridge_stp off
       bridge_fd 0

auto vmbr0
iface vmbr0 inet static
        address 192.168.5.250
        netmask 255.255.255.0
        gateway 192.168.5.1
        bridge_ports enp0s25
        bridge_stp off
        bridge_fd 0

root@pve:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether a4:5d:36:9a:00:cc brd ff:ff:ff:ff:ff:ff
3: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a4:4e:31:b6:01:48 brd ff:ff:ff:ff:ff:ff
4: vmbr0v20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a4:5d:36:9a:00:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.250/24 brd 192.168.20.255 scope global vmbr0v20
       valid_lft forever preferred_lft forever
    inet6 fe80::a65d:36ff:fe9a:cc/64 scope link
       valid_lft forever preferred_lft forever
5: enp0s25.20@enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v20 state UP group default qlen 1000
    link/ether a4:5d:36:9a:00:cc brd ff:ff:ff:ff:ff:ff
6: vmbr0v25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a4:5d:36:9a:00:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.25.250/24 brd 192.168.25.255 scope global vmbr0v25
       valid_lft forever preferred_lft forever
    inet6 fe80::a65d:36ff:fe9a:cc/64 scope link
       valid_lft forever preferred_lft forever
7: enp0s25.25@enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v25 state UP group default qlen 1000
    link/ether a4:5d:36:9a:00:cc brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a4:5d:36:9a:00:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.250/24 brd 192.168.5.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::a65d:36ff:fe9a:cc/64 scope link
       valid_lft forever preferred_lft forever

root@pve:~# ip route list
default via 192.168.20.1 dev vmbr0v20 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.5.0/24 dev vmbr0 proto kernel scope link src 192.168.5.250
192.168.20.0/24 dev vmbr0v20 proto kernel scope link src 192.168.20.250
192.168.25.0/24 dev vmbr0v25 proto kernel scope link src 192.168.25.250

First off, with the second example the default gateway shows as 192.168.20.1 instead of 192.168.5.1. I cannot bring up the console for my KVM image because I cannot get the vncproxy. Despite the KVM image having a static IP address (192.168.20.205), I am not able to ping it. At boot time, not all of the interfaces are brought online. So I am completely lost on the setup: either I use the default config, where the host can ping and reach all of my local networks but none of the containers or images can get past the host interface, or I bring up all of the interfaces, where the host can still connect to the network but my containers and images still cannot reach it.
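
(A quick way to see at a glance which bridges actually came up after a reboot, and which default route won — plain iproute2, nothing Proxmox-specific:)

Code:
```shell
# one line per bridge: name, oper state, MAC and flags
ip -br link show type bridge

# only one default route can be installed; this shows which gateway won
ip route show default
```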

Neither of these options works for me.
 
