[SOLVED] UDP Port 53 Unreachable for Docker Pi-hole in Unprivileged LXC

LexD

New Member
Oct 23, 2025

Hey guys,

I'm trying to migrate from LXD to Proxmox but have run into some problems.



Setup


  • Proxmox VE host
  • Ubuntu 24.04 unprivileged LXC container running Docker
  • Docker stack: Pi-hole + Unbound + Gravity Sync + Nebula
  • Docker network mode: host
  • LXC network: bridged (vmbr0)
  • Secondary Pi-hole IP: 192.168.50.160
  • Primary Pi-hole: runs natively inside an unprivileged LXD container on another host — works perfectly



Problem


DNS (UDP/53) traffic fails when the Pi-hole stack runs inside Docker in the unprivileged LXC.


From inside the Proxmox LXC (the Docker container host):
  • dig @192.168.50.160 google.com → times out
  • nslookup google.com 192.168.50.160 → times out
  • nslookup pi.hole 192.168.50.160 → works

From the Proxmox host:
  • nslookup pi.hole 192.168.50.160 → works
  • dig @192.168.50.160 pi.hole → works
  • dig @192.168.50.160 google.com → times out

From LAN clients:
  • nc -vuz 192.168.50.160 53 → succeeds (UDP port open)

But DNS queries from LAN clients still time out.

Observations


  • Pi-hole web UI is reachable from the LAN, so TCP is fine.
  • tcpdump shows DNS queries leaving the Proxmox host, but no UDP replies coming back from the container.
  • Inside the LXC, only local DNS requests are visible; external ones never reach it.



Likely Cause


Inbound UDP traffic to Docker services inside unprivileged LXC containers does not seem to pass through reliably, even with --network=host.
It looks as if AppArmor or namespace restrictions block UDP reply handling for host-networked Docker containers, which affects DNS specifically.




Questions

  1. Is this a known limitation for Docker with --network=host inside unprivileged LXC containers on Proxmox?
  2. Is there a way to allow UDP (especially port 53) without making the LXC privileged — for example via AppArmor profile, lxc.apparmor.profile, or extra capabilities?
  3. Would switching Pi-hole to a macvlan or dedicated bridge network safely bypass this limitation?
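For context on question 2: the usual (unsupported) way to get Docker running inside an unprivileged LXC on Proxmox is to enable nesting and keyctl in the container's config. This is only a sketch of the common setup, not a confirmed fix for the UDP issue described above; the container ID is a placeholder:

```conf
# /etc/pve/lxc/<ctid>.conf  -- <ctid> is your container's ID
# 'nesting' and 'keyctl' are standard Proxmox LXC feature flags
# often enabled for Docker-in-LXC setups:
features: keyctl=1,nesting=1

# Only if AppArmor really is the blocker (this weakens isolation,
# so test with it commented out first):
# lxc.apparmor.profile: unconfined
```

After editing, restart the container (`pct stop <ctid> && pct start <ctid>`) for the changes to take effect.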

 
Hello,

in general it's not recommended to run Docker inside LXC containers, since such setups tend to break after major upgrades (most recently with the Proxmox VE 8 to 9 upgrade). The root cause is that LXC and Docker containers use some of the same kernel mechanisms under the hood.
This is also recommended in the documentation:

If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn’t possible with containers.

That being said, many people do it nonetheless, but be prepared to fix stuff from time to time. I myself ran Pi-hole with Docker in an Alpine LXC for a long time without much trouble, but I never felt comfortable running a non-supported environment.
Thus I migrated that setup to a direct install of Pi-hole + keepalived in Debian LXCs, as described by @Dunuin:
https://forum.proxmox.com/threads/pi-hole-lxc-with-gravity-sync.109881/#post-645646

I changed some things though: my install is based on Debian 13, and I have not two but three Pi-hole hosts. I also use nebula-sync instead of Gravity Sync. nebula-sync can also be installed without Docker, but then you need to update it manually. I didn't want to do that, so I used the supplied docker-compose file to set up the sync on my regular Docker VM, where I host everything I need Docker for.

All of my Pi-hole containers (both the old Alpine + Docker setup and the new Debian one without Docker) always were, and still are, unprivileged containers.
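For anyone curious what the keepalived side of such a setup looks like, it's usually just a small VRRP block. This is a hedged sketch; the virtual IP, interface name, and priorities are hypothetical and need adapting:

```conf
# /etc/keepalived/keepalived.conf (sketch; VIP and interface are hypothetical)
vrrp_instance PIHOLE {
    state MASTER           # use BACKUP with a lower priority on the other hosts
    interface eth0
    virtual_router_id 53
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.50.5/24    # clients use this VIP as their DNS server
    }
}
```

Clients then point at the VIP, and keepalived moves it to a backup Pi-hole host if the master goes down.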
 
Hey Johannes, thanks for your reply!

I went ahead and set up a new LXC container with Pi-hole and Unbound installed natively, but I’m running into the exact same issue.

Both services seem to run just fine — Pi-hole even syncs properly via the Dockerized Nebula-sync on my old container, and DNS requests are showing up in Pi-hole’s query log. The problem is that Pi-hole isn’t getting any responses from Unbound (or Unbound isn’t even seeing the requests).

Starting to think something might be off on the Proxmox host itself, even though it’s a clean install.

I’m going to try installing everything on a VM next — maybe that’ll sort it out.
If it does, I’m seriously starting to lose faith in Proxmox LXC containers…
Pi-hole + Unbound in an LXD container is a total no-brainer in comparison.
 
But Unbound shouldn't listen on port 53. Did you move its port to another port, as explained in the tutorial? Pi-hole and Unbound are both DNS servers (although for different use cases), so by default both listen on port 53. That won't work, of course. But moving Unbound to another port and configuring that custom port as the upstream in Pi-hole should work.
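As a concrete example, the commonly used Pi-hole + Unbound setup puts Unbound on port 5335. A minimal config sketch (file path and port follow the usual tutorial, adjust to your layout):

```conf
# /etc/unbound/unbound.conf.d/pi-hole.conf
# Minimal sketch: run Unbound on a non-53 port so it doesn't clash with Pi-hole.
server:
    interface: 127.0.0.1
    port: 5335
    do-ip6: no
```

Then set Pi-hole's custom upstream DNS to 127.0.0.1#5335 and verify Unbound directly with `dig @127.0.0.1 -p 5335 google.com` before testing through Pi-hole.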
 
Pi-hole even syncs properly via the Dockerized Nebula-sync

This was the problem all along.
Turns out my old primary still had a setupVars.conf, which Nebula-Sync was copying over and breaking the secondary.

Fix:
  • Fresh pihole-unbound install on secondary
  • In Nebula-Sync, set FULL_SYNC=false and SYNC_CONFIG_DNS=false
Now everything syncs fine without killing DNS.
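For anyone landing here later: those two settings go into nebula-sync's environment. A docker-compose sketch follows; the PRIMARY/REPLICAS values and the CRON schedule are hypothetical placeholders, so check the nebula-sync README for the exact variable format before using it:

```yaml
services:
  nebula-sync:
    image: ghcr.io/lovelaze/nebula-sync:latest
    environment:
      PRIMARY: "https://192.168.50.150|<primary-password>"     # hypothetical
      REPLICAS: "https://192.168.50.160|<secondary-password>"  # hypothetical
      FULL_SYNC: "false"        # don't copy the primary's whole config over
      SYNC_CONFIG_DNS: "false"  # keep each node's own DNS settings
      CRON: "0 * * * *"         # assumed hourly schedule
```

With FULL_SYNC disabled, only the selectively enabled items are synced, which is what prevents the secondary's DNS config from being overwritten.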

So it all works in LXC + Docker, but I will probably take your advice and move all the Docker stuff to a VM.
 