LXCs in a VLAN don't get DNS automatically

TheCrowFather

Member
Mar 22, 2021
LXC containers not receiving DHCP DNS details on VLAN

I have a VLAN set up for external (DMZ) containers.

If I set a VLAN ID on an LXC container it will not pull the DNS settings automatically. I must manually specify that the container should use the gateway for DNS.

This seems to only happen with LXC containers though, VMs on VLANs get a DNS address automatically.

Anyone have an any ideas why LXC containers aren't receiving a DNS address automatically?

Here's the interfaces for reference.


Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.156/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
 
Is your DHCP (and DNS) server listening on that same VLAN? If not, then it won't see the network packets and won't respond. There is a sub-forum specifically for network problems, maybe people over there have more experience about this.
 
I think this is an effect of the network model used for containers. Unlike VMs, where the object is to emulate physical machines, containers are intended to be lightweight and use more of the host's subsystems to deliver their services.

From linuxcontainers.org:
"lxc.net.[i].ipv4.gateway: Specify the IPv4 address to use as the gateway inside the container. The address is in format x.y.z.t, e.g. 192.168.1.123. Can also have the special value auto, which means to take the primary address from the bridge interface (as specified by the lxc.net.[i].link option) and use that as the gateway. auto is only available when using the veth, macvlan and ipvlan network types. Can also have the special value of dev, which means to set the default gateway as a device route. This is primarily for use with layer 3 network modes, such as IPVLAN."

I suspect the 'auto' behaviour is what is causing the failure to acquire a DHCP lease, as the gateway needs to be assigned correctly on the broadcast for the DHCP server to determine which VLAN scope to use. As you have found, the workaround is to override the host gateway in the container setup. Setting up a Linux VLAN interface in Proxmox and assigning that to the container may also be a solution.
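If you go the Linux VLAN route, a minimal /etc/network/interfaces sketch could look like the following. The VLAN ID (30) and the second bridge name (vmbr1) are assumptions for illustration; substitute your DMZ VLAN's values:

Code:
# VLAN 30 subinterface on top of the existing vlan-aware bridge
auto vmbr0.30
iface vmbr0.30 inet manual

# Untagged bridge carrying only VLAN 30 traffic
auto vmbr1
iface vmbr1 inet manual
        bridge-ports vmbr0.30
        bridge-stp off
        bridge-fd 0

You would then attach the container to vmbr1 with no VLAN tag set, so the tagging happens on the host rather than per-container.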
 
The container receives a DHCP lease just fine, it doesn't get the DNS details I have set in the DHCP settings.
 
Containers by default use the host's DNS settings. You need to set a DNS server in the DNS tab of the container settings, or else, IIRC, you can touch /etc/.pve-ignore.resolv.conf inside the container to stop the host from setting it on startup.
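For reference, the same per-container DNS override can be done from the host shell with pct. The container ID (105) and nameserver address here are placeholders for illustration:

Code:
# Set an explicit DNS server (and optionally a search domain) for the container
pct set 105 --nameserver 192.168.30.1 --searchdomain example.lan

# Or, inside the container, stop PVE from managing resolv.conf on start:
touch /etc/.pve-ignore.resolv.conf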
I think the "host DNS setting" is where the issue is. The host is on a different VLAN than the containers, so the containers are getting the address of a DNS server they can't reach.
 
