VXLAN & *sense with a cluster

arrozmio

New Member
May 2, 2024
Hi all,

I hope you are doing well. I've been researching best practices for this but couldn't really find a good solution or anything straightforward. I have a 4-node cluster made up of small/tiny desktop PCs, each with a single NIC. Each PC connects directly to my home router and lives on the 192.168.1.XXX network, and I have them set up as a cluster. I'd like to set up an isolated network in my cluster with an IP range of 192.168.100.XXX using a pfSense router. My home router is a basic AT&T fiber router, so there's not much in terms of IO/flexibility. I read that I can create a VXLAN network under Datacenter and add all my nodes to the peer list. I set up my zones and vnets to match and was able to successfully ping across VMs on the .100.XXX network as well as on my .1.XXX network. The issue is that I can't access the pfSense web GUI from a VM that is on a different node than the pfSense VM, nor reach out to the internet from it. Any ideas? Or would it be best to remove the VM altogether and just stick to SDN/VXLAN?

Thank you :)
 
Here are my configs under /etc/network/interfaces:

Node1:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.200/24
        gateway 192.168.1.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

iface wlp1s0 inet manual


source /etc/network/interfaces.d/*

Node2:
Code:
auto lo
iface lo inet loopback

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.201/24
        gateway 192.168.1.254
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

iface wlo1 inet manual


source /etc/network/interfaces.d/*

Node3:
Code:
auto lo
iface lo inet loopback

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.202/24
        gateway 192.168.1.254
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

iface wlo1 inet manual


source /etc/network/interfaces.d/*

Node 4:
Code:
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.203/24
        gateway 192.168.1.254
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

iface wlo1 inet manual


source /etc/network/interfaces.d/*

SDN:
Code:
#version:11

auto vxlan
iface vxlan
        bridge_ports vxlan_vxlan
        bridge_stp off
        bridge_fd 0
        mtu 1450
        alias vxlan

auto vxlan_vxlan
iface vxlan_vxlan
        vxlan-id 99
        vxlan_remoteip 192.168.1.201
        vxlan_remoteip 192.168.1.202
        vxlan_remoteip 192.168.1.203
        mtu 1450

Router VM:
[Screenshot: opnsense_settings.png]
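
For reference, the tunnel state can be inspected from each node's shell with something like the following (interface names are taken from the generated SDN config above, so adjust if yours differ):

Code:
# VXLAN device details: VNI, destination port, local IP, MTU
ip -d link show vxlan_vxlan

# MACs learned across the tunnel; the all-zeroes entries are the flood
# list, and there should be one per peer node in the zone
bridge fdb show dev vxlan_vxlan

# The per-node vnet bridge that the VM NICs actually attach to
ip link show vxlan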
 
So far I have been able to spin up VMs on the same node (node 3) as my OPNsense VM. They reach the internet and ping fine. The issue is that when I migrate any of those VMs to one of the other nodes (i.e. node1, node2, or node00), it loses access to the gateway and the internet. I can still ping across my internal LAN network (192.168.100.XX), I just can't reach my OPNsense GUI.
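
To narrow this down between an MTU problem and a MAC-learning problem, something like the following should help (192.168.100.1 is just a placeholder for the OPNsense LAN IP, and 1422 is the 1450 vnet MTU minus 28 bytes of IP/ICMP headers):

Code:
# From a Linux test VM that was migrated off node 3: a full-size,
# don't-fragment ping toward the gateway to rule out an MTU mismatch
ping -M do -s 1422 192.168.100.1

# On the node now hosting that VM: watch for encapsulated VXLAN traffic
# heading to node 3 (192.168.1.202) while the ping runs
tcpdump -ni vmbr0 udp port 4789 and host 192.168.1.202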

[Screenshots: vxzone.png, vnet.png, subnet.png]

Any ideas? :confused:
 
