[SOLVED] Need help implementing UNBOUND for local DNS resolver

cglmicro

Member
Oct 12, 2020
Hi.

I tried to follow the instructions in https://forum.proxmox.com/threads/uribl_blocked-however-uribl-com-shows-its-not.76825/#post-478336, but I'm not able to make it work.

What I need is to make unbound work in my PMG, so I don't overuse URIBL and get BLOCKED.

Code:
root@pmg14:~# dig a proxmox.com @127.0.0.1 +short

; <<>> DiG 9.16.27-Debian <<>> a proxmox.com @127.0.0.1 +short
;; global options: +cmd
;; connection timed out; no servers could be reached

Code:
root@pmg14:~# ss -tulnp | grep ":53.*unbound"
udp   UNCONN 0      0          127.0.0.1:53         0.0.0.0:*    users:(("unbound",pid=41443,fd=5))                                                                         
udp   UNCONN 0      0              [::1]:53            [::]:*    users:(("unbound",pid=41443,fd=3))                                                                         
tcp   LISTEN 0      256        127.0.0.1:53         0.0.0.0:*    users:(("unbound",pid=41443,fd=6))                                                                         
tcp   LISTEN 0      256            [::1]:53            [::]:*    users:(("unbound",pid=41443,fd=4))                                                                         

or the long version, with systemd-resolved still running until I get the dig working:
root@pmg14:~# ss -tulnp | grep ":53"
udp   UNCONN 0      0          127.0.0.1:53         0.0.0.0:*    users:(("unbound",pid=41443,fd=5))                                                                         
udp   UNCONN 0      0      127.0.0.53%lo:53         0.0.0.0:*    users:(("systemd-resolve",pid=76,fd=16))                                                                   
udp   UNCONN 0      0            0.0.0.0:5355       0.0.0.0:*    users:(("systemd-resolve",pid=76,fd=11))                                                                   
udp   UNCONN 0      0              [::1]:53            [::]:*    users:(("unbound",pid=41443,fd=3))                                                                         
udp   UNCONN 0      0               [::]:5355          [::]:*    users:(("systemd-resolve",pid=76,fd=13))                                                                   
tcp   LISTEN 0      256        127.0.0.1:53         0.0.0.0:*    users:(("unbound",pid=41443,fd=6))                                                                         
tcp   LISTEN 0      4096   127.0.0.53%lo:53         0.0.0.0:*    users:(("systemd-resolve",pid=76,fd=17))                                                                   
tcp   LISTEN 0      4096         0.0.0.0:5355       0.0.0.0:*    users:(("systemd-resolve",pid=76,fd=12))                                                                   
tcp   LISTEN 0      256            [::1]:53            [::]:*    users:(("unbound",pid=41443,fd=4))                                                                         
tcp   LISTEN 0      4096            [::]:5355          [::]:*    users:(("systemd-resolve",pid=76,fd=14))

I received a suggestion:
sounds odd - things I'd look into:
* do you have some iptables/nftable rules preventing communication on 127.0.0.1? - I'd suggest letting traffic on `lo` simply pass
** systemd-resolved might cause some issues with that setup - (although it shouldn't, since it's listening on 127.0.0.53..., but who knows)
*** unless you need it and know you need it - I'd suggest trying to remove it
**** also make sure that your /etc/resolv.conf is correct (should not matter for the test with `dig` though)

* : I tried to add this to nft, but I'm not sure it was the right syntax:
Code:
nft insert rule inet filter input tcp dport 53 counter accept
nft insert rule inet filter input udp dport 53 counter accept
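
Following the suggestion to simply let traffic on `lo` pass, I guess a rule like this would also do it (assuming a table `inet filter` with an `input` chain already exists - I haven't verified this is the right approach):
Code:
nft insert rule inet filter input iifname "lo" accept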

** : I tried to systemctl stop systemd-resolved, but no luck

*** : Sure, as soon as it works, I'll STOP and DISABLE systemd-resolved

**** : Also, I will configure this file once it works

Thank you for your help.
 
** : I tried to systemctl stop systemd-resolved, but no luck

*** : Sure, as soon as it works, I'll STOP and DISABLE systemd-resolved

**** : Also, I will configure this file once it works
I'm not sure I understand this - does everything work when you stop and disable systemd-resolved?
If yes - why not leave it stopped and disabled?
 
* : I tried to add this to nft, but I'm not sure it was the right syntax:
If you don't have any nftable rules - there is no need to add any - traffic is not blocked unless you block it...
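
You can check what's actually loaded with:
Code:
nft list ruleset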

What's the /etc/resolv.conf on the system?

what's in the logs of unbound? (journal)
enable debugging logs for unbound
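e.g. with a snippet under /etc/unbound/unbound.conf.d/ - just a sketch, see unbound.conf(5) (with the default use-syslog the output ends up in the journal):
Code:
server:
    verbosity: 3
    log-queries: yes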
 
** UPDATE
If I disable the firewall at the datacenter level (OVH), not the one in my PVE nor the one in my PMG, it works!! I'll try to find the right rule to add
**

Hi.
The only nftable rules I have are the ones created by Fail2Ban, or the ones delivered with PMG (if any rules are delivered with it).

The content of resolv.conf, even though "dig a proxmox.com @127.0.0.1 +short" bypasses it, is:
Code:
root@pmg14:~# cat /etc/resolv.conf
# --- BEGIN PVE ---
search cdns.ovh.net
nameserver 213.186.33.99
# --- END PVE ---

Here is the content of my unbound.conf, with some lines added to log queries:
Code:
# Unbound configuration file for Debian.
#
# See the unbound.conf(5) man page.
#
# See /usr/share/doc/unbound/examples/unbound.conf for a commented
# reference config file.
#
# The following line includes additional configuration files from the
# /etc/unbound/unbound.conf.d directory.
include-toplevel: "/etc/unbound/unbound.conf.d/*.conf"

server:
    chroot: ""
    logfile: /var/log/unbound.log
    verbosity: 1
    log-queries: yes

And here is what the log file is reporting when I do a query:
Code:
root@pmg14:/etc/unbound# dig a proxmox.com @127.0.0.1 +short

; <<>> DiG 9.16.27-Debian <<>> a proxmox.com @127.0.0.1 +short
;; global options: +cmd
;; connection timed out; no servers could be reached

root@pmg14:/etc/unbound# tail /var/log/unbound.log
[1655817679] unbound[214629:0] notice: init module 0: subnet
[1655817679] unbound[214629:0] notice: init module 1: validator
[1655817679] unbound[214629:0] notice: init module 2: iterator
[1655817679] unbound[214629:0] info: start of service (unbound 1.13.1).
[1655817687] unbound[214629:0] info: 127.0.0.1 proxmox.com. A IN
[1655817692] unbound[214629:0] info: 127.0.0.1 proxmox.com. A IN
[1655817697] unbound[214629:0] info: 127.0.0.1 proxmox.com. A IN

I might be missing something in the unbound.conf file; I'll keep looking while I wait for your answer.

Thank you.
 
[1655817679] unbound[214629:0] notice: init module 2: iterator
[1655817679] unbound[214629:0] info: start of service (unbound 1.13.1).
[1655817687] unbound[214629:0] info: 127.0.0.1 proxmox.com. A IN
[1655817692] unbound[214629:0] info: 127.0.0.1 proxmox.com. A IN
The request arrives at unbound...

So I could imagine that your OVH firewall is blocking port 53 requests to the outside world (which unbound needs to make ... )
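
You could verify that directly from the PMG by querying an external resolver (9.9.9.9 here is just an arbitrary public resolver for the test):
Code:
dig a proxmox.com @9.9.9.9 +short
If that times out as well, it's very likely the firewall in front of the machine.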

Code:
root@pmg14:~# cat /etc/resolv.conf
# --- BEGIN PVE ---
search cdns.ovh.net
nameserver 213.186.33.99
# --- END PVE ---
Once unbound is running, you need to change the nameserver line to `nameserver 127.0.0.1`
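
For example (keeping the OVH resolver as a fallback is optional):
Code:
search cdns.ovh.net
nameserver 127.0.0.1
nameserver 213.186.33.99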
 
Ok, almost there... I needed to add two rules to the OVH firewall: TCP and UDP from all addresses with source port 53.

Unbound is now working:
Code:
[1655820357] unbound[273131:0] info: 127.0.0.1 1.15.105.148.in-addr.arpa. PTR IN
[1655820358] unbound[273131:0] info: 127.0.0.1 mail1.sea91.rsgsv.net. A IN
[1655820360] unbound[273131:0] info: 127.0.0.1 c301.cloudmark.com. A IN
[1655820361] unbound[273131:0] info: 127.0.0.1 sm.hetrixtools.net. A IN
[1655820361] unbound[273131:0] info: 127.0.0.1 sm.hetrixtools.net. AAAA IN
[1655820370] unbound[273131:0] info: 127.0.0.1 128.180.56.149.in-addr.arpa. PTR IN
[1655820371] unbound[273131:0] info: 127.0.0.1 smart.legardeur.net. A IN
[1655820373] unbound[273131:0] info: 127.0.0.1 108.174.154.204.in-addr.arpa. PTR IN
[1655820374] unbound[273131:0] info: 127.0.0.1 n174-108.cyberimpact.com. A IN
[1655820374] unbound[273131:0] info: 127.0.0.1 iga-crevier.com. MX IN
[1655820375] unbound[273131:0] info: 127.0.0.1 c301.cloudmark.com. A IN
[1655820378] unbound[273131:0] info: 127.0.0.1 128.180.56.149.in-addr.arpa. PTR IN
[1655820378] unbound[273131:0] info: 127.0.0.1 smart.legardeur.net. A IN
[1655820382] unbound[273131:0] info: 127.0.0.1 222.195.42.100.in-addr.arpa. PTR IN
[1655820382] unbound[273131:0] info: 127.0.0.1 o222.mail.robly.com. A IN
[1655820382] unbound[273131:0] info: 127.0.0.1 sablageaujet.ca. MX IN
[1655820383] unbound[273131:0] info: 127.0.0.1 196.19.217.144.in-addr.arpa. PTR IN
[1655820383] unbound[273131:0] info: 127.0.0.1 ssd1.legardeur.net. A IN
[1655820383] unbound[273131:0] info: 127.0.0.1 c302.cloudmark.com. A IN
[1655820423] unbound[273131:0] info: 127.0.0.1 sm.hetrixtools.net. A IN
[1655820423] unbound[273131:0] info: 127.0.0.1 sm.hetrixtools.net. AAAA IN
[1655820438] unbound[273131:0] info: 127.0.0.1 123.236.135.159.in-addr.arpa. PTR IN
[1655820438] unbound[273131:0] info: 127.0.0.1 delivery21.soundest.email. A IN

But URIBL still blocks my queries with URIBL_BLOCKED(0.001):
Code:
root@pmg14:/etc/unbound# tail -f -n 100 /var/log/mail.log | grep -i --color="always" "uribl_"
Jun 21 10:06:25 pmg14 pmg-smtp-filter[316807]: 122C5662B1D05EEAA11: SA score=3/5 time=2.185 bayes=0.00 autolearn=no autolearn_force=no hits=AWL(-1.811),BAYES_00(-1.9),DKIM_SIGNED(0.1),DKIM_VALID(-0.1),ENA_SUBJ_LONG_WORD(2.2),HEADER_FROM_DIFFERENT_DOMAINS(0.25),HTML_MESSAGE(0.001),KAM_TRACKIMAGE(0.2),RAZOR2_CF_RANGE_51_100(2.89),RAZOR2_CHECK(1.92),RCVD_IN_MSPIKE_H2(-0.001),SPF_HELO_NONE(0.001),SPF_PASS(-0.001),T_SCC_BODY_TEXT_LINE(-0.01),URIBL_BLOCKED(0.001),URI_TRUNCATED(0.001)

Any idea why?
 
Problem solved!

I was still BLOCKED because my IP was already over quota with URIBL. Even though new requests were now being cached, I had to wait a few hours before I could query again.

To help others, here is a quick tutorial on how to enable and test unbound on a PMG running in PVE:
Code:
At the datacenter level (OVH for me), modify the firewall to allow UDP and TCP from any source IP with source port 53, to any destination IP and port.

Install:
apt install unbound dnsutils

Modify the file /etc/unbound/unbound.conf to add this at the end (for the test period):
server:
    chroot: ""
    logfile: /var/log/unbound.log
    verbosity: 1
    log-queries: yes
    log-replies: yes

Create the file:
touch /var/log/unbound.log

And assign group and owner:
chown unbound:unbound /var/log/unbound.log

Restart unbound:
systemctl restart unbound

Stop and disable systemd-resolved:
systemctl stop systemd-resolved
systemctl disable systemd-resolved

In my case, this PMG runs as a container on PVE, so it's useless to modify /etc/resolv.conf in the container since it's overwritten by PVE at each reboot. I needed to add the 127.0.0.1 DNS server in PVE > (MY VM) > DNS > DNS Servers, where you can enter "127.0.0.1,213.186.33.99" (127.0.0.1 is the primary and 213.186.33.99, the OVH DNS server, is the secondary). Restart your PMG for the change to take effect.
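
(The same thing can also be done from the PVE host shell - a sketch only, 101 is just an example container ID and I haven't double-checked the list separator, see man pct:)
pct set 101 --nameserver "127.0.0.1 213.186.33.99"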

If your PMG is a real VM (not a container), you can simply add this line to /etc/resolv.conf BEFORE the line of your actual name server:
nameserver 127.0.0.1 

Try a DNS request:
dig proxmox.com +short

If you don't get an IP address instantly, and you receive a timeout after a few seconds, you probably have to work on the firewall again.

And check whether the unbound server receives it:
tail -f /var/log/unbound.log
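
You should see your query show up in the same format as the log lines earlier in this thread, e.g. (timestamp and pid will differ):
[<timestamp>] unbound[<pid>:0] info: 127.0.0.1 proxmox.com. A IN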

If it works, edit /etc/unbound/unbound.conf again to disable the extra logging:
server:
    chroot: ""
    logfile: /var/log/unbound.log
    verbosity: 0
    log-queries: no
    log-replies: no

Restart unbound:
systemctl restart unbound

In a few hours you should see lines containing "uribl_grey" and "uribl_black" instead of "uribl_blocked" with the command:
tail -f -n 1000 /var/log/mail.log | grep -i --color="always" "uribl_"

If you want to check that the unbound cache works, run this command and watch the TTL, which should go down by 2 every 2 seconds:
watch dig proxmox.com

And if you want to see the content of the unbound cache:
unbound-control dump_cache
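
(If unbound-control complains that it can't connect, the remote-control interface may need to be enabled first - a sketch, see unbound.conf(5) and unbound-control(8) - e.g. add to the config:)
remote-control:
    control-enable: yes
(then run unbound-control-setup once to generate the keys and restart unbound)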

That's it, thanks again for your help Stoiko!!
 
Nice - thanks for the write-up - and I'm glad you found what's at the core of the issue :)
 
