[SOLVED] UDP Port 53 Unreachable for Docker Pi-hole in Unprivileged LXC

LexD

New Member
Oct 23, 2025

Hey guys,

I'm trying to migrate from LXD to Proxmox but have run into some problems.



Setup


  • Proxmox VE host
  • Ubuntu 24.04 unprivileged LXC container running Docker
  • Docker stack: Pi-hole + Unbound + Gravity Sync + Nebula
  • Docker network mode: host
  • LXC network: bridged (vmbr0)
  • Secondary Pi-hole IP: 192.168.50.160
  • Primary Pi-hole: runs natively inside an unprivileged LXD container on another host — works perfectly



Problem


DNS (UDP/53) traffic fails when the Pi-hole stack runs inside Docker in the unprivileged LXC.


From inside the Proxmox LXC (the Docker container host):
  • dig @192.168.50.160 google.com → times out
  • nslookup google.com 192.168.50.160 → times out
  • nslookup pi.hole 192.168.50.160 → works

From the Proxmox host:
  • nslookup pi.hole 192.168.50.160 → works
  • dig @192.168.50.160 pi.hole → works
  • dig @192.168.50.160 google.com → times out

From LAN clients:
  • nc -vuz 192.168.50.160 53 → succeeds (UDP port open)

But DNS queries from LAN clients still time out.
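One caveat with the nc test above: for UDP, nc -vuz can only report failure if an ICMP port-unreachable message comes back, so "succeeds" does not prove a listener is actually there. A real query with forced transport separates the UDP path from the TCP one; a sketch using the same addresses from this post:

```shell
# Force UDP (the default) with a short timeout
dig @192.168.50.160 google.com +notcp +time=2 +tries=1

# Same query over TCP; if this works while the UDP query times out,
# only the UDP path is broken
dig @192.168.50.160 google.com +tcp +time=2 +tries=1
```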

Observations


  • Pi-hole web UI is reachable from the LAN, so TCP is fine.
  • tcpdump shows DNS queries leaving the Proxmox host, but no UDP replies coming back from the container.
  • Inside the LXC, only local DNS requests are visible; external ones never reach it.
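The asymmetry above (queries leave the host, but no replies come back and external queries never reach the container) can be narrowed down by capturing on both sides at once; a sketch, assuming vmbr0 on the host and eth0 inside the container (interface names are guesses, adjust to your setup):

```shell
# On the Proxmox host: watch DNS traffic to/from the Pi-hole IP
tcpdump -ni vmbr0 'host 192.168.50.160 and port 53'

# Inside the LXC: check whether the same queries actually arrive
tcpdump -ni eth0 'port 53'
```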



Likely Cause


Inbound UDP traffic to Docker services inside unprivileged LXC containers does not fully pass through, even with --network=host.
It appears AppArmor or namespace restrictions block UDP reply handling for host-networked Docker containers, which affects DNS specifically.




Questions

  1. Is this a known limitation for Docker with --network=host inside unprivileged LXC containers on Proxmox?
  2. Is there a way to allow UDP (especially port 53) without making the LXC privileged — for example via AppArmor profile, lxc.apparmor.profile, or extra capabilities?
  3. Would switching Pi-hole to a macvlan or dedicated bridge network safely bypass this limitation?

 
Hello,

In general it's not recommended to run Docker inside LXC containers, since they tend to break after major upgrades (last time with the Proxmox VE 8 to 9 upgrade). The root cause is that LXC and Docker containers use some of the same kernel mechanisms under the hood.
This is also recommended in the documentation:

If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn’t possible with containers.

That being said, many people do it nonetheless, but be prepared to fix stuff from time to time. I myself used to run Pi-hole with Docker in an Alpine LXC for a long time without much trouble, but I never felt comfortable running a non-supported environment.
Thus I migrated that setup to a direct install of Pi-hole + keepalived in Debian LXCs, as described by @Dunuin:
https://forum.proxmox.com/threads/pi-hole-lxc-with-gravity-sync.109881/#post-645646

I changed some things though: my install is based on Debian 13, and I have not two but three Pi-hole hosts. I also use nebula-sync instead of gravity-sync. nebula-sync can also be installed without Docker, but then you need to update it manually. I didn't want to do that, so I used the supplied docker-compose file to set up the sync on my regular Docker VM, where I host everything I need Docker for.

All of my Pi-hole containers (the old Alpine + Docker setup and the new Debian one without Docker) always were and still are unprivileged containers.
 
Hey Johannes, thanks for your reply!

I went ahead and set up a new LXC container with Pi-hole and Unbound installed natively, but I’m running into the exact same issue.

Both services seem to run just fine — Pi-hole even syncs properly via the Dockerized Nebula-sync on my old container, and DNS requests are showing up in Pi-hole’s query log. The problem is that Pi-hole isn’t getting any responses from Unbound (or Unbound isn’t even seeing the requests).

Starting to think something might be off on the Proxmox host itself, even though it’s a clean install.

I’m going to try installing everything on a VM next — maybe that’ll sort it out.
If it does, I’m seriously starting to lose faith in Proxmox LXC containers…
Pi-hole + Unbound on a LXD container is a total no-brainer in comparison.
 
But Unbound shouldn't be listening on port 53. Did you move its port to another one, as explained in the tutorial? Because Pi-hole and Unbound are both DNS servers (although for different use cases), by default both listen on port 53, which of course won't work. Moving Unbound to another port and adding it with that custom port as the upstream in Pi-hole should work.
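For reference, the usual way to run both side by side is to move Unbound to a non-standard port and point Pi-hole at it. A sketch of the config fragment, assuming port 5335 (the conventional choice from the Pi-hole/Unbound guide; adjust if your tutorial uses a different port):

```
# /etc/unbound/unbound.conf.d/pi-hole.conf (fragment)
server:
    # Listen only locally, and not on port 53 (Pi-hole owns that)
    interface: 127.0.0.1
    port: 5335
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
```

Then set Pi-hole's custom upstream DNS server to 127.0.0.1#5335 and restart both services.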
 
Pi-hole even syncs properly via the Dockerized Nebula-sync

This was the problem all along.
Turns out my old primary still had a setupVars.conf, which Nebula-Sync was copying over and breaking the secondary.

Fix:
  • Fresh pihole-unbound install on secondary
  • In Nebula-Sync, set FULL_SYNC=false and SYNC_CONFIG_DNS=false
Now everything syncs fine without killing DNS.

So it all works in Docker-in-LXC after all, but I will probably take your advice and move all the Docker stuff to a VM.
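For anyone hitting the same thing: the relevant nebula-sync settings live in its environment. A minimal compose sketch; the FULL_SYNC/SYNC_CONFIG_DNS values are the fix described above, while the IPs and passwords are placeholders you'd replace with your own:

```yaml
# docker-compose.yml (sketch; hosts and passwords are placeholders)
services:
  nebula-sync:
    image: ghcr.io/lovelaze/nebula-sync:latest
    environment:
      PRIMARY: "http://192.168.50.150|primary-password"    # old primary Pi-hole
      REPLICAS: "http://192.168.50.160|secondary-password" # the secondary
      FULL_SYNC: "false"       # don't copy the whole config (incl. setupVars leftovers)
      SYNC_CONFIG_DNS: "false" # keep each Pi-hole's own DNS/upstream settings
```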
 
Hey Johannes, thanks for your reply!

I went ahead and set up a new LXC container with Pi-hole and Unbound installed natively, but I’m running into the exact same issue.

Both services seem to run just fine — Pi-hole even syncs properly via the Dockerized Nebula-sync on my old container, and DNS requests are showing up in Pi-hole’s query log. The problem is that Pi-hole isn’t getting any responses from Unbound (or Unbound isn’t even seeing the requests).

Starting to think something might be off on the Proxmox host itself, even though it’s a clean install.

I’m going to try installing everything on a VM next — maybe that’ll sort it out.
If it does, I’m seriously starting to lose faith in Proxmox LXC containers…
Pi-hole + Unbound on a LXD container is a total no-brainer in comparison.
Did you find a solution?
I have 2 Pi-holes in LXC on PVE with keepalived.
On initial startup it works, but if I kill the master, the backup takes over, yet port 53 is blocked (UDP & TCP) while ports 80 & 22 are no issue at all.
Restarting the master does bring it back in control, but often there is still no connection to port 53.
At the same time, I can DNS-query from master to backup and vice versa, or from any LAN client to either Pi-hole, but not via the virtual IP.
So it is something between the virtual IP and Pi-hole... still no clue.

I don't have Nebula sync...

Thanks
Hans
 
I have 2 Pi-holes in LXC on PVE with keepalived.
On initial startup it works, but if I kill the master, the backup takes over, yet port 53 is blocked (UDP & TCP) while ports 80 & 22 are no issue at all.
Restarting the master does bring it back in control, but often there is still no connection to port 53.

This sounds like a firewall is blocking port 53. Are you sure that your ISP, your router, or you yourself didn't block it somehow?
There is also a great (although a little older) tutorial from @Dunuin in the forum on how to set up Pi-hole together with Unbound in Debian LXCs without Docker:

Please note that not all of it is applicable anymore (some things changed with the latest Pi-hole versions; gravity-sync doesn't work anymore, but nebula-sync should do), but it should still be enough to get a working setup. It was at least for me (with a little tweaking to reflect changes to Pi-hole, Proxmox VE and Debian over the last years).
 
This sounds like a firewall is blocking port 53. Are you sure that your ISP, your router, or you yourself didn't block it somehow?
There is also a great (although a little older) tutorial from @Dunuin in the forum on how to set up Pi-hole together with Unbound in Debian LXCs without Docker:

Please note that not all of it is applicable anymore (some things changed with the latest Pi-hole versions; gravity-sync doesn't work anymore, but nebula-sync should do), but it should still be enough to get a working setup. It was at least for me (with a little tweaking to reflect changes to Pi-hole, Proxmox VE and Debian over the last years).
Thanks for reply.
It is not the ISP. I am testing from a PVE shell, a VM on PVE, and a laptop client, all on the same LAN. Querying the 2 Pi-hole instances works fine; querying the virtual IP from keepalived does not work (only port 53 fails; 22 & 80 keep working).
I do see a few others with similar issues, but no solution yet.
 
It is not the ISP. I am testing from a PVE shell, a VM on PVE, and a laptop client, all on the same LAN. Querying the 2 Pi-hole instances works fine; querying the virtual IP from keepalived does not work (only port 53 fails; 22 & 80 keep working).

Do you even get the virtual IP on one of the Pi-holes? If not, I would suspect something is odd with your keepalived configuration.
 
Do you even get the virtual IP on one of the Pi-holes? If not, I would suspect something is odd with your keepalived configuration.
I can ping the virtual IP; I can ssh to the virtual IP (and get the expected Pi-hole instance, master or backup); and I can browse to the Pi-hole admin page. The only thing that does not work is port 53. I also forced the DNS query over TCP (as the default is UDP); no luck either.
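A "connection refused" on the VIP only, while the real container IPs answer, usually means nothing is bound to the VIP address (or a firewall is actively rejecting it). Two quick checks on whichever node currently holds the VIP; a sketch assuming the addresses from this thread (192.168.1.251 as the VIP):

```shell
# Is the VIP actually assigned to an interface on this node?
# keepalived adds it on the MASTER; if it's missing, failover didn't happen.
ip -4 addr show | grep 192.168.1.251 || echo "VIP not assigned here"

# What addresses is the DNS server bound to?
# If it shows 192.168.1.253:53 instead of 0.0.0.0:53, it won't answer on the VIP.
ss -ulpn 'sport = :53'
ss -tlpn 'sport = :53'
```

If pihole-FTL turns out to be bound to a specific address, making it listen on all interfaces (via Pi-hole's DNS interface settings) is one way to cover the VIP as well; this is an assumption to verify, not a confirmed diagnosis.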
 
It would also be great if you could do the nslookup and dig tests LexD did and post the results here. You will need to replace the IP with your virtual IP and your Pi-hole container IPs (so we can see whether the problem is with the virtual IP, one container, or both).
 
It would also be great if you could do the nslookup and dig tests LexD did and post the results here. You will need to replace the IP with your virtual IP and your Pi-hole container IPs (so we can see whether the problem is with the virtual IP, one container, or both).
LXC pihole1 = 192.168.1.253
LXC pihole2 = 192.168.1.252
Virtual IP = 192.168.1.251

Tests from a Debian VM shell on PVE (and same results if done directly from PVE host shell)

dig @192.168.1.253 google.com
; <<>> DiG 9.18.41-1~deb12u1-Debian <<>> @192.168.1.253 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43901
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
<deleted stuff, successful>

dig @192.168.1.252 google.com
; <<>> DiG 9.18.41-1~deb12u1-Debian <<>> @192.168.1.252 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9107
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
<deleted stuff, successful>

dig @192.168.1.251 google.com
;; communications error to 192.168.1.251#53: connection refused
;; communications error to 192.168.1.251#53: connection refused
;; communications error to 192.168.1.251#53: connection refused
; <<>> DiG 9.18.41-1~deb12u1-Debian <<>> @192.168.1.251 google.com
; (1 server found)
;; global options: +cmd
;; no servers could be reached

ssh 192.168.1.251
Enter passphrase for key '/home/user/.ssh/id_ed25519':

telnet 192.168.1.251 80
Trying 192.168.1.251...
Connected to 192.168.1.251.
Escape character is '^]'.
HTTP/1.0 400 Bad Request
Cache-Control: no-cache, no-store, must-revalidate, private, max-age=0
Expires: 0
Pragma: no-cache
X-DNS-Prefetch-Control: off
Content-Security-Policy: default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:;
X-Frame-Options: DENY
<deleted>

nslookup google.com 192.168.1.251
;; communications error to 192.168.1.251#53: connection refused


Tests from a LXC pihole1 on PVE
dig @192.168.1.251 google.com
;; communications error to 192.168.1.251#53: connection refused
;; communications error to 192.168.1.251#53: connection refused
;; communications error to 192.168.1.251#53: connection refused

dig @192.168.1.252 google.com
; <<>> DiG 9.18.41-1~deb12u1-Debian <<>> @192.168.1.252 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 519
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

telnet 192.168.1.251 22
Trying 192.168.1.251...
Connected to 192.168.1.251.
Escape character is '^]'.
SSH-2.0-OpenSSH_9.2p1 Debian-2+deb12u7

nslookup google.com 192.168.1.251
;; communications error to 192.168.1.251#53: connection refused
 
Thanks, have you enabled the Proxmox internal firewall? Can you please post the output of these commands:

Code:
pve-firewall status
iptables-save
iptables -L

And please post the content of these files:
  • cat /etc/pve/firewall/cluster.fw
  • cat /etc/pve/nodes/node-name/host.fw

I suspect that the firewall blocks DNS, since your description seems to match the documentation of the default rules: https://pve-node2.private.jstarosta...-pve-firewall.html#pve_firewall_default_rules