Configuring 2 VLANs across 2 interfaces

serendipitist

I upgraded to a new server which has dual 2.5 Gbit NICs, and I'm running Proxmox VE 8.3.5. My router/firewall is a Unifi Dream Machine Pro, and I recently switched to Zone networking.

I'm looking to configure networking for this host as follows:
  • The two NICs are bonded together for load balancing
  • Two VLANs:
    • Internal LAN (10.0.1.0/24, tag=1) for accessing the Proxmox GUI and various internal services (DNS, Nginx Proxy Manager, other internal-only containers)
    • DMZ LAN (10.0.36.0/24, tag=36) for various containers and VMs exposed to the internet (a sketch of how I expect guests to attach follows this list)
  • A static IP of 10.0.1.15 for accessing the host's GUI
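
For context, my understanding is that once the host side is sorted, a guest simply attaches to the bridge with a VLAN tag, roughly like this in a container's config (the CT ID and interface name below are only placeholders, not my actual container):

Code:
# /etc/pve/lxc/105.conf -- 105 is just an example CT ID
net0: name=eth0,bridge=vmbr0,ip=dhcp,tag=36
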
I've read up on OVS networking on the wiki, and Example 2 in particular seems pretty close to what I'm trying to achieve. That said, if I implement it as shown (edited for my specific interfaces and VLANs), all networking shuts down. Building the configuration up incrementally, I have successfully bridged the two NICs with the following /etc/network/interfaces configuration:

Code:
auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet manual
        ovs_mtu 9000

auto enp3s0
iface enp3s0 inet manual
        ovs_mtu 9000

auto bond0
iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds enp2s0 enp3s0
        ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
        ovs_mtu 9000

auto vmbr0
iface vmbr0 inet static
        ovs_type OVSBridge
        ovs_ports bond0 
        ovs_mtu 9000
        address 10.0.1.15/24
        gateway 10.0.1.1
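
For reference, commands along these lines (standard Open vSwitch tooling) can confirm that the bond and LACP negotiation with the switch are actually up:

Code:
# show the bridge, bond, and member ports as OVS sees them
ovs-vsctl show
# bond health and per-member status
ovs-appctl bond/show bond0
# confirm LACP actually negotiated with the switch
ovs-appctl lacp/show bond0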

Note that in that first configuration, no VLANs are configured on the host. At this point, I can set a container's VLAN tag to "36" and it will properly grab an IP via DHCP from the DMZ VLAN and can be reached via HTTP at that IP; however, the container has no outbound Internet access, so clearly something is amiss (the diagnostic sketch after the second config below is how I'd dig into it). When I try the full configuration, networking fails entirely:

Code:
auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet manual
        ovs_mtu 9000

auto enp3s0
iface enp3s0 inet manual
        ovs_mtu 9000

auto bond0
iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds enp2s0 enp3s0
        ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
        ovs_mtu 9000

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 vlan1 vlan36
        ovs_mtu 9000

# Internal VLAN
auto vlan1
iface vlan1 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=1
        address 10.0.1.15
        netmask 255.255.255.0
        gateway 10.0.1.1
        ovs_mtu 9000

# DMZ VLAN
auto vlan36
iface vlan36 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=36
        address 10.0.36.15
        netmask 255.255.255.0
        gateway 10.0.36.1
        ovs_mtu 9000
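
When the full config fails, I assume I'd need to get back in via the console and check things roughly like this (again, CT ID 105 is just an example):

Code:
# overall OVS view: bridge, bond, and the two internal VLAN ports
ovs-vsctl show
# is LACP still negotiated after the reload?
ovs-appctl lacp/show bond0
# did the internal ports come up with their addresses?
ip addr show vlan1
ip addr show vlan36
# can the host still reach its gateway?
ping -c 3 10.0.1.1
# and can a tagged test container get out at all?
pct exec 105 -- ping -c 3 1.1.1.1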

A few specific questions:
  • Is it necessary to configure the VLANs on the Proxmox host (as in the second example) in order for a container/VM to properly access that VLAN? If so, what might be causing the full configuration above to take down networking?
  • Since I have two NICs, do I need to configure one of the ports on my Unifi switch to use the DMZ VLAN, or should both be configured for the Internal LAN? (That's how they're currently set up.)
At this point, I'm not sure if this is a Proxmox issue, a Unifi Zone/routing issue, or both.