VLANs across hosts in a cluster

jokeruk
Oct 2, 2019
Hi All,

I've been banging my head against a brick wall with this one for a while.

I have a homelab with a two-server cluster. My router (OPNsense) runs on the cluster and is configured to route:

192.168.1.0/24 (no vlan configured) on vmbr0
192.168.10.0/24 (vlan10) on vmbr1
WAN on vmbr3

Now, if a server on VLAN 10 is on the same physical host as the router VM, all is good and I can communicate with it. The issue is when the VLAN 10 server is on the other host - nothing I do seems to get comms going between the server and the router.

Server 1 interfaces - running OPNsense. Any VLAN 10 traffic on this host gets routed without issue.
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto ens2f0
iface ens2f0 inet manual

auto ens2f1
iface ens2f1 inet static
        address 10.0.1.52/24
#Cluster traffic - Migration

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.52/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#Untagged traffic

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VLAN traffic

auto vmbr2
iface vmbr2 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#Unused

auto vmbr3
iface vmbr3 inet manual
        bridge-ports eno4
        bridge-stp off
        bridge-fd 0
        mtu 1484
#WAN

Server 2 interfaces
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

iface eno4 inet manual

iface ens1 inet manual

iface ens2f0 inet manual

auto ens2f1
iface ens2f1 inet static
        address 10.0.1.25/24
#Cluster traffic - Migration

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.25/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#Untagged traffic

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VLAN traffic

I can migrate a VM from 'server 1' to 'server 2'. If I'm running a ping to its IP address, then the instant it comes up on server 2, the pings time out.

Switch is a Cisco 3850 and all relevant ports are set as a trunk (and the bonds are working fine too).

Anyone any thoughts?
 
Switch config is as follows
Code:
interface Port-channel1
 switchport mode trunk
!
interface Port-channel2
 switchport mode trunk
!
interface GigabitEthernet1/0/1
 description "Server1 eno1/2 Bond 0"
 switchport mode trunk
 channel-group 1 mode active
!
interface GigabitEthernet1/0/2
 description "Server1 eno1/2 Bond 0"
 switchport mode trunk
 channel-group 1 mode active
!
interface GigabitEthernet1/0/3
 description "Server1 eno3"
 switchport mode trunk
!
interface GigabitEthernet1/0/5
 description "Server2 eno1/2 Bond 0"
 switchport mode trunk
 channel-group 2 mode active
!
interface GigabitEthernet1/0/6
 description "Server2 eno1/2 Bond 0"
 switchport mode trunk
 channel-group 2 mode active
!
interface GigabitEthernet1/0/7
 description "Server2 eno3"
 switchport mode trunk
!
interface GigabitEthernet1/0/8
 description "Server2 eno4 UNUSED"
 switchport mode trunk
 
There are two ways to do this; call them Option 1 and Option 2 (O1, O2).

Option 1: you want vmbr1 to pass VLAN 10 as untagged (native) traffic to the VM. In that case you need to do nothing in the VM except set its interface to vmbr1, and there are again two ways to do that.

O1#1: do it on the switch and simply set VLAN 10 as the native VLAN on the ports eno3 connects to.
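On the Cisco side that would look something like this - a sketch, reusing the port numbers from the switch config quoted in this thread (Gi1/0/3 and Gi1/0/7 are the eno3 ports):

```
! Sketch: make VLAN 10 the native (untagged) VLAN on the eno3 trunk ports
interface GigabitEthernet1/0/3
 description "Server1 eno3"
 switchport mode trunk
 switchport trunk native vlan 10
!
interface GigabitEthernet1/0/7
 description "Server2 eno3"
 switchport mode trunk
 switchport trunk native vlan 10
```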

O1#2: keep eno3 as a trunk port and add a VLAN 10 sub-interface in /etc/network/interfaces, like:

Code:
auto eno3.10
iface eno3.10 inet manual
        mtu 1500
       
auto vmbr1
iface vmbr1 inet static
        address 10.0.10.1/24
        bridge-ports eno3.10
        bridge-stp off
        bridge-fd 0
        mtu 1500

This would then make vmbr1 deliver VLAN 10 untagged to all VMs.


Option 2: keep running vmbr1 as a trunk port, as you do currently. Here again there are two choices.

O2#1: configure the VLAN inside the VM itself. All traffic reaches the VM tagged, so you need to configure the network interface inside the VM for VLAN 10.
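For a Debian-style guest that could look like this - a sketch; the interface name ens18 and the addresses are assumptions (192.168.10.0/24 is the VLAN 10 subnet from the first post, and the gateway is assumed to be the OPNsense VM):

```
# Inside the VM: tag VLAN 10 on the guest NIC (ens18 is an assumed name)
auto ens18
iface ens18 inet manual

auto ens18.10
iface ens18.10 inet static
        address 192.168.10.20/24
        gateway 192.168.10.1
```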

O2#2: set the VLAN tag on the network device in the Proxmox VM config itself:
- VM / Hardware / Network Device / VLAN Tag
In that case the VM again sees the interface as untagged, but it is on VLAN 10, just like in O1#1 and O1#2.
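The same thing can be done from the CLI with `qm set` - a sketch; the VMID 100 is a placeholder, and note this rewrites the whole net0 line, so carry over your existing model/MAC if you have one:

```
# Tag all of this VM's net0 traffic with VLAN 10 on vmbr1
qm set 100 --net0 virtio,bridge=vmbr1,tag=10
```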



Edit: in all cases, MAKE SURE that VLAN 10 is part of the trunk on those specific switch ports, and make sure eno3 is actually connected to the ports you think it is.
 
Hi all,

Thanks for your responses.

@bofh - my plan is to introduce more VLANs to the system, so O2#2 is my method and how it's set up.

@andrew-transparent - I did this, and there's obviously something fundamental going on as the native ping isn't getting replies.

Further investigation needed.
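One way to narrow it down: watch for VLAN 10 tagged frames at each hop on both hosts while the ping is running. A sketch, using the interface names from the configs in this thread:

```
# On each Proxmox host, while pinging across VLAN 10:
tcpdump -eni eno3 vlan 10     # frames on the physical trunk port
tcpdump -eni vmbr1 vlan 10    # frames reaching the bridge
# Check that the bridge's VLAN table actually carries VLAN 10 on its ports:
bridge vlan show
```

Whichever hop stops showing the frames is where the traffic is being dropped.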
 
Is VLAN 10 enabled on the switch side?

Did you test whether O2#1 works? If that works, you've narrowed it down to the VM level; if it doesn't, then it's probably the switch config.
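Worth checking on the 3850 directly - a trunk only forwards a VLAN that actually exists in the VLAN database, so something like:

```
show vlan brief                   ! does VLAN 10 exist at all?
show interfaces trunk             ! is VLAN 10 allowed and forwarding on Gi1/0/3 and Gi1/0/7?
show mac address-table vlan 10    ! are MACs being learned in VLAN 10?
```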