[TUTORIAL] A way to get DHCP and IPAM working with a VXLAN zone with minimal extra configuration

mhdo
Jul 30, 2025
[Hope I'm not doing anything wrong here, but this post was awaiting admin approval before it could be shown publicly and I'm not quite sure why, please let me know if this breaks any rules]

Hi, I've been scouring the forums for a solution that would let me use the DHCP plugin (limited to simple zones as of 8.4) with other zone types, without luck. My use case was relatively simple: I wanted DHCP and the IPAM integration, and I needed it to work across nodes, preferably without configuring anything outside of Proxmox. This is basically the only network I needed to set up, and while I haven't tested it extremely extensively, it did accomplish my goals: the VMs can communicate, and DHCP/IPAM works as expected. I'm sharing my configs in case others want a sensible enough solution while the official feature is being worked on, and I'd also appreciate feedback on how this configuration might cause unintended behavior, since I'm honestly not the most well-versed in all this virtual networking business.

The idea is simple: we set up a simple zone so that the DHCP plugin works exactly as expected, and we also set up a VXLAN zone to get connectivity between VMs on different nodes. The extra step we need to take is to create a veth link between the simple bridge and the VXLAN bridge, so that the VMs can reach each other at the IPs assigned by DHCP. In detail:

0. Install dnsmasq on each machine according to the documentation.
Bash:
apt update
apt install dnsmasq
# disable default instance
systemctl disable --now dnsmasq
1. Create a simple zone and enable automatic DHCP. Create a VNet inside this zone (I did not enable VLAN Aware or Isolate Port, not sure if this works with either of those).
2. Create a Subnet in the VNet and configure your CIDR, gateway and DHCP ranges. I can confirm this works fine with SNAT enabled.
3. Create a VXLAN zone. Create a VNet inside this zone (I did not enable VLAN Aware or Isolate Port, not sure if this works with either of those).
4. Apply the changes through the interface.
5. Edit the network configuration on each machine. If you don't want the changes to be persistent, run these commands:
Bash:
#!/bin/bash
# Create a veth pair and attach one end to each bridge.
ip link add head type veth peer name tail
# brctl comes from the bridge-utils package.
brctl addif [simple vnet name] head
brctl addif [vxlan vnet name] tail
# Bring both ends of the link up.
ip link set dev head up
ip link set dev tail up
Replace the [...] with the VNet names you chose when setting up the networks. "head" and "tail" are completely arbitrary names for the two ends of the veth link. The script above hooks the simple zone's bridge to the VXLAN zone's bridge, so that the VMs are actually connected.
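Since mistyping a bridge name is the easiest way to get this step wrong, here's a small optional helper that only prints the hook-up commands for the given VNet names, so you can review them (or pipe them to bash) on each node. The names vnetdhcp/vnetvx and the function name are placeholders of my own; it uses iproute2's `ip link set ... master`, which should be equivalent to brctl addif without needing bridge-utils installed:

```shell
#!/bin/bash
# Print (not run) the veth hook-up commands for two bridge names.
# Usage: gen_veth_hookup <simple-vnet-bridge> <vxlan-vnet-bridge>
gen_veth_hookup() {
  local simple_br=$1 vxlan_br=$2
  cat <<EOF
ip link add head type veth peer name tail
ip link set dev head master $simple_br
ip link set dev tail master $vxlan_br
ip link set dev head up
ip link set dev tail up
EOF
}

# Review the output first; when happy: gen_veth_hookup ... | bash
gen_veth_hookup vnetdhcp vnetvx
```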

For a persistent configuration that won't interfere with future reconfigurations, I advise putting this in "/etc/network/interfaces.d/sdn-custom" (the file name matters somewhat: make sure ifupdown processes it after "/etc/network/interfaces.d/sdn", the default file used by Proxmox). We do the same thing as above, but this time through the config file:
Code:
auto head
iface head inet manual
        link-type veth
        veth-peer-name tail

auto tail
iface tail inet manual
        link-type veth
        veth-peer-name head

iface [simple vnet name] inet manual
        bridge_ports head

iface [vxlan vnet name] inet manual
        bridge_ports tail
To immediately apply the change, run:
Code:
ifreload -a

6. Create VMs that are connected to the simple VNet (not the VXLAN one) and you're done!

Some notes:
- I tested connectivity using a few Ubuntu noble cloud-init VMs with the ip=dhcp option. They were able to ping each other, and I also set up a local HTTP server with Python (python3 -m http.server) on one and accessed it from the other (wget [put IP here]:8000), which worked perfectly! I wasn't able to SSH from one to the other, but I'm pretty sure that's just because the cloud-init instance doesn't enable SSH if it hasn't been configured for it. I'll test this with a Windows VM soon, but I can't think of any reason why it would suddenly break (aside from me forgetting to turn off firewalls).
- My setup consists of 5 old office machines (H61, i3-2120, 4 GB of RAM, 60 GB SSD) connected to a 16-port Linksys SD216 (an unmanaged switch, if I'm not mistaken) that is itself connected to a router for internet access. The VMs still connect to each other without the router, and they can all access the internet through the router if SNAT is enabled, so I expect this to work with any type of access point. The overhead seems very minimal, if any, compared to just using VXLAN.
- If a gateway is configured, each VM will only be able to reach the node it is running on via the gateway IP, since those packets stop there instead of being passed over the VXLAN. Configuring a separate IP for each of the nodes should be possible, but it's more work and I didn't bother, since I don't want my VMs to connect to any of the nodes anyway.
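For anyone who wants to repeat the HTTP reachability check from the first note, here's the same test compressed into one script. It targets 127.0.0.1 only so it can run anywhere as-is; on the actual VMs you'd run the server on one VM and point TARGET at its DHCP-assigned address from the other:

```shell
#!/bin/bash
# Throwaway HTTP server plus a TCP probe. TARGET=127.0.0.1 is a stand-in;
# in practice, use the other VM's DHCP address.
TARGET=127.0.0.1
PORT=8000

python3 -m http.server "$PORT" --bind "$TARGET" >/dev/null 2>&1 &
SRV=$!
sleep 1

# Bash's built-in /dev/tcp avoids depending on wget/curl being installed.
if timeout 3 bash -c "cat </dev/null >/dev/tcp/$TARGET/$PORT"; then
  RESULT=reachable
else
  RESULT=unreachable
fi
echo "$RESULT"
kill "$SRV" 2>/dev/null
```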


Please let me know your thoughts on this setup. I think it's probably enough for many use cases, but it would also be nice to know its limits; if there aren't actually any, that'd be great, since it's easy enough to configure (and shouldn't be very complicated to integrate, if the devs are reading). I haven't looked into how the firewall behaves with these options, but I think it'll probably work as expected? Hope this was helpful!

I forgot to mention this, but there's also a good chance this works exactly the same way with the other zone types; I haven't tested anything besides VXLAN, though, so I can't confirm anything there.
Small update and a bit of a follow-up: the setup works for the most part, but I believe it's experiencing some weird MTU issues that I'm not quite sure about. It is definitely a fragmentation problem: pinging stopped working while I was performing a relatively large upload, but curl still worked fine. I believe it will be necessary to lower the MTU of the simple zone to 1450 to leave room for the VXLAN encapsulation, and I will try to confirm whether this works.
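For reference, 1450 falls straight out of the VXLAN encapsulation overhead on a standard 1500-byte MTU. Here's the arithmetic (assuming an IPv4 underlay with no VLAN tags) plus a quick way to check for fragmentation from inside a VM:

```shell
#!/bin/bash
# VXLAN-over-IPv4 overhead per packet (no VLAN tags assumed):
#   inner Ethernet 14 + VXLAN 8 + UDP 8 + outer IPv4 20 = 50 bytes
OVERHEAD=$((14 + 8 + 8 + 20))
VNET_MTU=$((1500 - OVERHEAD))
echo "$VNET_MTU"    # 1450

# Largest ICMP payload that fits unfragmented at that MTU:
# VNET_MTU - 20 (IP header) - 8 (ICMP header)
PING_SIZE=$((VNET_MTU - 20 - 8))
echo "$PING_SIZE"   # 1422

# From inside a VM:  ping -M do -s $PING_SIZE <peer-ip>
# If that fails while a smaller -s works, it's an MTU/fragmentation issue.
```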