UDP Port 53 Unreachable for Docker Pi-hole in Unprivileged LXC

LexD

New Member
Oct 23, 2025

Hey guys,

I'm trying to migrate from LXD to Proxmox but have run into a problem.



Setup


  • Proxmox VE host
  • Ubuntu 24.04 unprivileged LXC container running Docker
  • Docker stack: Pi-hole + Unbound + Gravity Sync + Nebula
  • Docker network mode: host (a rough compose sketch follows this list)
  • LXC network: bridged (vmbr0)
  • Secondary Pi-hole IP: 192.168.50.160
  • Primary Pi-hole: runs natively inside an unprivileged LXD container on another host — works perfectly
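For context, the relevant part of the compose file looks roughly like this (image tag, timezone and volume paths are placeholders, not the exact values I use):

    services:
      pihole:
        image: pihole/pihole:latest
        network_mode: host        # binds 53/udp, 53/tcp and the web UI directly on the LXC's IP
        environment:
          TZ: "Europe/Berlin"
        volumes:
          - ./etc-pihole:/etc/pihole
        restart: unless-stopped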



Problem


DNS (UDP/53) traffic fails when the Pi-hole stack runs inside Docker in the unprivileged LXC.


From inside the Proxmox LXC (the Docker container host):
  • dig @192.168.50.160 google.com → times out
  • nslookup google.com 192.168.50.160 → times out
  • nslookup pi.hole 192.168.50.160 → works

From the Proxmox host:
  • nslookup pi.hole 192.168.50.160 → works
  • dig @192.168.50.160 pi.hole → works
  • dig @192.168.50.160 google.com → times out

From LAN clients:
  • nc -vuz 192.168.50.160 53 → reports success (no ICMP port-unreachable comes back, so UDP/53 is at least not being rejected)

But actual DNS queries from LAN clients still time out.
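One quick way to separate the transport from the resolver is to force TCP with dig (suggested check, not output from the tests above):

    # over UDP (the default) - this is the case that times out
    dig @192.168.50.160 google.com +time=2 +tries=1

    # force TCP - if this answers, resolution itself is fine and only UDP/53 is affected
    dig +tcp @192.168.50.160 google.com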

Observations


  • Pi-hole web UI is reachable from the LAN, so TCP is fine.
  • tcpdump shows DNS queries leaving the Proxmox host, but no UDP replies coming back from the container (the captures being compared are shown after this list).
  • Inside the LXC, only local DNS requests are visible; external ones never reach it.
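For reference, these are the captures being compared (interface names are the defaults on this setup, adjust as needed):

    # on the Proxmox host: client queries to the Pi-hole IP are visible on the bridge
    tcpdump -ni vmbr0 udp port 53 and host 192.168.50.160

    # inside the LXC: only locally generated queries show up here
    tcpdump -ni eth0 udp port 53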



Likely Cause


Inbound UDP traffic to Docker services inside unprivileged LXC containers does not fully pass through, even with --network=host.
It appears AppArmor or namespace restrictions block UDP reply handling for host-networked Docker containers, which affects DNS specifically.
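If AppArmor really is blocking something, it should leave traces in the host's kernel log while a failing query is reproduced; this is the check I'd expect to be conclusive (run on the Proxmox host):

    # look for AppArmor denials while a LAN client runs a query that times out
    dmesg | grep -i apparmor
    journalctl -k | grep -i -E 'apparmor|denied'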




Questions

  1. Is this a known limitation for Docker with --network=host inside unprivileged LXC containers on Proxmox?
  2. Is there a way to allow UDP (especially port 53) without making the LXC privileged, for example via an AppArmor profile, lxc.apparmor.profile, or extra capabilities? (A sketch of the kind of override I mean is below this list.)
  3. Would switching Pi-hole to a macvlan or dedicated bridge network safely bypass this limitation? (Second sketch below.)
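To make question 2 concrete, this is the kind of container config override I have in mind; the container ID and the exact settings are only illustrative, I have not applied this yet:

    # /etc/pve/lxc/120.conf  (unprivileged container running Docker)
    features: keyctl=1,nesting=1
    # last resort, effectively removes AppArmor confinement for the whole container:
    lxc.apparmor.profile: unconfined

And for question 3, the macvlan variant would look something like this in the compose file (subnet, gateway, parent interface and address are placeholders for my network):

    services:
      pihole:
        image: pihole/pihole:latest
        networks:
          dnsnet:
            ipv4_address: 192.168.50.161

    networks:
      dnsnet:
        driver: macvlan
        driver_opts:
          parent: eth0            # the LXC's NIC, as seen from inside the LXC
        ipam:
          config:
            - subnet: 192.168.50.0/24
              gateway: 192.168.50.1

One known macvlan caveat: the Docker host (here, the LXC itself) cannot reach the container's macvlan IP directly without an extra macvlan shim interface, so local health checks would have to go over the LAN.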

 
Hello,

In general it's not recommended to run Docker inside LXC containers, since such setups tend to break after major upgrades (the last time this happened was with the Proxmox VE 8 to 9 upgrade). The root cause is that LXC and Docker containers rely on some of the same kernel mechanisms under the hood.
The official documentation makes the same recommendation:

If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn’t possible with containers.

That being said, many people do it nonetheless, but be prepared to fix things from time to time. I myself ran Pi-hole with Docker in an Alpine LXC for a long time without much trouble, but I never felt comfortable running an unsupported environment.
So I migrated that setup to a direct install of Pi-hole + keepalived in Debian LXCs, as described by @Dunuin:
https://forum.proxmox.com/threads/pi-hole-lxc-with-gravity-sync.109881/#post-645646

I changed a few things though: my install is based on Debian 13, and I have three Pi-hole hosts instead of two. I also use Nebula instead of Gravity Sync. Nebula can be installed without Docker, but then you have to update it manually; I didn't want to do that, so I used the supplied docker-compose file to set up the sync on my regular Docker VM, where I host everything I need Docker for.
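In case it helps: the keepalived side of that setup boils down to a floating virtual IP that clients get as their DNS server, with the Pi-hole containers negotiating among themselves who currently holds it. A minimal sketch of one node's /etc/keepalived/keepalived.conf (interface, router ID, priority and VIP are placeholders, not my actual values):

    vrrp_instance PIHOLE_DNS {
        state BACKUP                # let priority decide who becomes master
        interface eth0
        virtual_router_id 53
        priority 100                # the higher value wins the election
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme
        }
        virtual_ipaddress {
            192.168.50.2/24         # the VIP handed out to clients as their DNS server
        }
    }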

All of my Pi-hole containers (both the old Alpine+Docker one and the new Debian setup without Docker) always were, and still are, unprivileged containers.