Trouble getting OPNsense's DHCP to work across a Proxmox cluster

SeigneurAo

New Member
Apr 4, 2026
Hello!

Setup:
  • 3-node cluster (important: nodes are on DIFFERENT physical sites, and I'm assuming no private network between them, to be extra cautious); added all nodes to an SDN VXLAN zone and created a VNet (subnet 10.6.6.0/24, gateway = 10.6.6.1)
  • installed OPNsense in a VM on node 1, with net0 = vmbr1 (public NIC/IP) and net1 = vxnet (the VNet created above). In OPNsense, LAN = vtnet1 and WAN = vtnet0
  • in the OPNsense GUI, I changed the Dnsmasq DHCP range to match the VNet subnet (10.6.6.1 to 10.6.6.245), and for the LAN interface set IPv4 Configuration Type = Static IPv4 and IPv4 address = 10.6.6.1/24

Issue :
When I try to install a VM (Debian 13) on node 2, the installer's DHCP network setup fails.

Any pointers would be appreciated, thanks in advance!
 
The DHCP broadcasts from the VM on node 2 need to reach OPNsense on node 1 through the VXLAN tunnel. A few things to check.

First, verify the SDN zone is active on all nodes by running "pvesh get /cluster/sdn/vnets" and confirm the VNet shows up on both node 1 and node 2. Then check that the VXLAN FDB entries are populated: "bridge fdb show dev vxlan_vxnet" (Proxmox names the VXLAN device after the VNet) should show MAC entries pointing to the other nodes' IPs. If the FDB is empty, the VXLAN tunnel is not learning and broadcasts will not cross.
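The FDB check can also be scripted: flooding of broadcast and unknown-unicast traffic over VXLAN shows up as all-zero MAC entries with a remote "dst", so a grep for those tells you whether the node can flood at all. A minimal sketch, assuming the device is named vxlan_vxnet and using sample output in place of a live node:

```shell
# Sketch: does a VXLAN device's FDB contain flood entries
# (all-zero MAC pointing at a remote VTEP)?
has_flood_entry() {
    # $1 = output of: bridge fdb show dev <vxlan-device>
    printf '%s\n' "$1" | grep -q '^00:00:00:00:00:00 .*dst '
}

# Sample output for illustration; on a real node use:
#   fdb="$(bridge fdb show dev vxlan_vxnet)"
fdb='00:00:00:00:00:00 dst 203.0.113.12 self permanent
bc:24:11:aa:bb:cc dst 203.0.113.12 self'

if has_flood_entry "$fdb"; then
    echo "flood entries present"
else
    echo "no flood entries: broadcasts will not leave this node"
fi
# prints: flood entries present
```

The second sample line (a learned per-VM MAC) is what normal unicast looks like; only the all-zero entry carries floods.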

Second, on OPNsense make sure the LAN firewall rules explicitly allow UDP ports 67 and 68 inbound. New interfaces in OPNsense default to blocking everything except the anti-lockout rule, so DHCP requests from the VNet side could be silently dropped.
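Before blaming the rules, it is worth confirming whether the requests arrive at all. Running "tcpdump -ni vtnet1 udp port 67 or port 68" on the OPNsense shell would show them live; the sketch below applies the same port filter to a captured log instead, so the filtering logic can be checked offline (the capture lines are made-up examples):

```shell
# Sketch: pick DHCP traffic (UDP ports 67/68) out of a tcpdump-style
# log, i.e. the same lines a live capture on vtnet1 would show.
dhcp_lines() {
    printf '%s\n' "$1" | grep -E '\.(67|68):'
}

# Invented sample capture: a Discover, a Reply, and unrelated traffic.
capture='12:00:01 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request
12:00:02 IP 10.6.6.1.67 > 10.6.6.50.68: BOOTP/DHCP, Reply
12:00:03 IP 10.6.6.50.51820 > 10.6.6.1.443: Flags [S]'

dhcp_lines "$capture"
# prints the two BOOTP/DHCP lines and drops the TCP one
```

If the Discover lines show up in a live capture but the VM still gets no lease, the firewall rules are the likely culprit; if they never show up, the problem is upstream in the VXLAN transport.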

Third, since your nodes are on different physical sites with no private network between them, confirm the VXLAN transport is using the public IPs and that UDP port 4789 (VXLAN default) is open in both directions between the nodes.
 
The existing suggestion about VXLAN FDB entries points in the right direction but there is a subtlety specific to Proxmox SDN VXLAN that is often the actual culprit here.
DHCP discovery is a broadcast (UDP destination 255.255.255.255, source 0.0.0.0) and in a VXLAN SDN setup, that broadcast needs to be flooded from node 2 to all other nodes in the same VXLAN segment, specifically to reach OPNsense on node 1. Proxmox SDN supports two flood modes depending on how the zone is configured.
Check first whether multicast is enabled for the SDN zone. On any node run:
pvesh get /cluster/sdn/zones
Look at the "multicast-address" field for your zone. If multicast is not configured, VXLAN only sends unicast to known MAC addresses. Since a DHCP Discover from a new VM has no known MAC entry yet, it will not be forwarded to node 1 at all, and OPNsense never sees it.
To verify what is actually in the VXLAN flooding table on node 2, run:
bridge fdb show dev vxlan_vxnet
You should see the MAC address of OPNsense's VNet interface or a broadcast entry for node 1's tunnel IP. If you only see local entries, broadcasts from node 2 are not reaching node 1.
Two fixes depending on your setup:
1. If your inter-node network supports multicast (same datacenter, same L2), add a multicast address to the SDN zone in Datacenter > SDN.
2. If nodes are across different sites over a routed network (which does not pass multicast), switch the SDN zone type to EVPN, which uses BGP to distribute MAC/IP information instead of relying on flood-and-learn.
Also confirm on node 2 that the VNet interface exists and is UP:
ip link show vxnet
If the VNet interface is missing on node 2, run "Apply" in Datacenter > SDN to push the config to all nodes.
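That interface check is easy to script with the brief output of ip(8), which prints one "name state MAC" line per device. A sketch, taking the VNet name vxnet from the original post and using sample output in place of a live node:

```shell
# Sketch: report whether the VNet interface exists and is UP,
# based on the compact "ip -br link show" format.
link_state() {
    # $1 = device name, $2 = output of: ip -br link show
    printf '%s\n' "$2" | awk -v dev="$1" '$1 == dev { print $2 }'
}

# Sample output for illustration; on a real node use:
#   iplinks="$(ip -br link show)"
iplinks='lo               UNKNOWN        00:00:00:00:00:00
vmbr1            UP             bc:24:11:00:11:22
vxnet            UP             bc:24:11:33:44:55'

state=$(link_state vxnet "$iplinks")
if [ -z "$state" ]; then
    echo "vxnet missing: re-apply the SDN config"
elif [ "$state" = "UP" ]; then
    echo "vxnet is UP"
else
    echo "vxnet exists but is $state"
fi
# prints: vxnet is UP
```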
 

Was this text AI-generated? Asking because there is no multicast-address field in the command output here.