Pi-hole LXC with gravity-sync?

Dunuin

Distinguished Member
Hi, right now I'm using two Debian 11 VMs running my two Pi-holes, but I would now like to replace the VMs with LXCs and add gravity-sync so I no longer need to do all administration tasks twice in both Pi-hole webUIs. I would also like to add unbound for recursive DNS resolution to each LXC, as described here.
If I remember correctly, I've seen threads here of people running Pi-hole in LXCs, so I guess that should work. But looking at the gravity-sync 4.x documentation I see these lines:
  • When running Pi-hole inside of a container, only the official Pi-hole Docker image is supported.
  • For both standard and container based Pi-hole deployments, Gravity Sync will run directly on the host OS and not inside of the container image.
  • Containers must use bind mounts to present the local Pi-hole configuration directories to the container, not Docker volumes.
  • You can mix/match standard and container based deployments of Pi-hole, or use different underlying Linux distributions, in the same Gravity Sync deployment.
Has someone maybe still got it working with LXCs? Does someone know why that script wouldn't work in an LXC?
 
I'm running Pi-hole inside an LXC container based on Rocky 8.6. However, I do not use gravity-sync. I created the LXC and then installed Pi-hole.
 
I now tried it and it looks like it is working.

I now have two unprivileged Pi-hole LXCs that are redundant/HA and that:
- sync the configs via gravity-sync
- have failover with a virtual IP using keepalived, so only one Pi-hole is actively used at a time and logs/statistics are all on one Pi-hole
- have unbound for recursive DNS resolution, so I don't need to trust some single upstream server
- auto-update apt packages, Pi-hole and gravity-sync
- are monitored using Graylog and Zabbix (there is a template for monitoring DNS statistics using the Pi-hole API)

And the best part: for that HA you don't need an expensive PVE HA cluster. All of that can be done with just a single PVE node + a bare-metal Raspberry Pi.

Someone interested in a tutorial?
 
Last edited:
Someone interested in a tutorial?
I definitely am.
I am re-creating my home lab from scratch and I'd like to have some fallback for the home network if something in PVE goes wrong, so that is exactly what I want: run Pi-hole in PVE as the main DNS/DHCP server and fall back to a Raspberry Pi in case the first one is not accessible.
 
Me Too!
Interested in gravity-sync in a CT but the other stuff looks interesting also.

- sync the configs via gravity-sync
- have failover with a virtual IP using keepalived, so only one Pi-hole is actively used at a time and logs/statistics are all on one Pi-hole

- have unbound for recursive DNS resolution, so I don't need to trust some single upstream server
- auto-update apt packages, Pi-hole and gravity-sync
- are monitored using Graylog and Zabbix (there is a template for monitoring DNS statistics using the Pi-hole API)
 
I didn't have time to write a proper tutorial, but I've set up two synced Pi-hole LXCs this week, so I thought I'd give a short, quick-and-dirty tutorial while I still remember everything.

I'm using the "debian-12-standard_12.2-1_amd64.tar.zst" as the template on two PVE nodes version 8.1.4. Put one LXC on each PVE node so one Pi-hole is always working even if a node goes down. You don't need a PVE cluster or high availability for this. If you don't got two PVE nodes you could also use one PVE node + 1 Raspi but then you might need to customize some stuff.

The basic idea is like this:
You have your router that acts as your single client-facing DNS server and DHCP server. Clients in your network will use this router as the gateway and single DNS server. I use two OPNsense VMs on two different PVE nodes in HA configuration using virtual IPs. So there is always a router running even if a node goes down. This isn't part of this tutorial but you can get an idea by looking at this: https://www.thomas-krenn.com/en/wiki/OPNsense_HA_Cluster_configuration
The router's DNS server will resolve the hostnames of your DHCP clients and forward DNS requests to the Pi-hole LXCs. There are two Pi-hole LXCs on two different PVE nodes. These are kept in sync using gravity-sync, so you can edit the config of either Pi-hole and it will be synced to the other one. What won't be synced are statistics and logs. So it is best if all clients only use a single master Pi-hole and only fall back to the second backup Pi-hole when the master isn't available. That way, 99.9% of the time only a single Pi-hole is used and it has nearly all of the logs/statistics. This is also useful because not all OSs prioritize the primary over the secondary DNS server; some just pick one of the two at random.
To accomplish this we use keepalived, which allows us to define a master LXC and a backup LXC. Once the master LXC fails, the backup LXC becomes the new master. When the old master becomes available again, this reverts back to normal. Each LXC has a static IP you can use to access the LXC and its Pi-hole webUI. But there is also a third IP, a virtual IP (VIP for short), that both LXCs share. This VIP always points to the Pi-hole that is currently the master. The router's DNS server only forwards requests to this VIP.
The Pi-hole of the master LXC then receives these requests, filters them and forwards them again. Usually you would point Pi-hole to some public DNS servers like 1.1.1.1, 8.8.8.8 and so on. I don't like that for privacy and anti-censorship reasons and I want to resolve domains on my own using recursive DNS lookups. For that, each LXC also runs an unbound DNS server that only listens on localhost. Pi-hole forwards the DNS requests that didn't get filtered to this local unbound DNS server, which then does the recursive DNS lookups.

Here are the steps:

1.) Create two LXCs
Create two new unprivileged LXCs using the latest “Debian 12 Standard” template. I will call one "PiholeMaster" and the other one "PiholeBackup".
1 core, 256 MiB RAM, 128 MiB swap and an 8 GiB (4 GiB should work too) root filesystem will be fine. Use a single NIC eth0 with an IP that is free, not part of your DHCP server's IP range and part of your router's subnet. I will refer to the IP of the "PiholeMaster" LXC as <MasterIP> and to that of the "PiholeBackup" LXC as <BackupIP>. Use the IP of your router as the gateway. For the DNS server you might want to choose a public one (like "1.1.1.1") so your Pi-hole LXCs can still go online even if something is broken and DNS resolution by Pi-hole/unbound isn't working. The virtual IP isn't defined in the PVE webUI; I will refer to it as <VirtualIP>. This virtual IP should also be a free IP in your router's subnet that isn't part of the DHCP range.

2.) Doing an initial upgrade
For both LXCs do:
  • connect via SSH
  • upgrade all packages: apt update && apt full-upgrade
The following steps are optional, but I do them for every Debian 12 LXC I set up.

3.) Optional: Disable IPv6 and reduce swappiness
I'm not using IPv6 anywhere and therefore don't want the LXC to use it, so I'm disabling it. And I want to minimize swapping to reduce wear on the SSDs. For that run:
Code:
echo "# disable IPv6" > /etc/sysctl.d/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.d/sysctl.conf
echo "# only swap to prevent OOM" >> /etc/sysctl.d/sysctl.conf
echo "vm.swappiness = 0" >> /etc/sysctl.d/sysctl.conf

4.) Optional: Enable unattended upgrades
In my opinion it is a good idea to install security patches as soon as possible. It's not likely that these will break anything as feature upgrades aren't automatically installed. But make sure to always have recent automated backups in case something goes wrong. For that run:
apt-get update && apt-get install unattended-upgrades apt-listchanges

5.) Optional: Add non-free repos
The Debian LXC template ships with repos that only allow installing open-source software. To be able to install proprietary software, add “non-free non-free-firmware” to all 3 default repos. For that run:
Code:
sed -i -E 's/deb http:\/\/deb.debian.org\/debian bookworm main contrib$/deb http:\/\/deb.debian.org\/debian bookworm main contrib non-free non-free-firmware/g' /etc/apt/sources.list
sed -i -E 's/deb http:\/\/deb.debian.org\/debian bookworm-updates main contrib$/deb http:\/\/deb.debian.org\/debian bookworm-updates main contrib non-free non-free-firmware/g' /etc/apt/sources.list
sed -i -E 's/deb http:\/\/security.debian.org bookworm-security main contrib$/deb http:\/\/security.debian.org bookworm-security main contrib non-free non-free-firmware/g' /etc/apt/sources.list

6.) Optional: Move /tmp to ramdisk
Move /tmp to ramdisk to reduce wear on SSDs:
Code:
cp /usr/share/systemd/tmp.mount /etc/systemd/system/
systemctl enable tmp.mount

7.) Optional: Set timezone
  • set timezone to Europe/Berlin: timedatectl set-timezone Europe/Berlin
  • if you want a UI for selecting your timezone, run this instead: dpkg-reconfigure tzdata
8.) Optional: Setup locales
  • set up your country's locales by running: dpkg-reconfigure locales
9.) Optional: Limit logging
I personally don't want to write logs to disk, as I use a log collector that forwards them to a centralized log server with a persistent database anyway. So I want journald to only store 20 MB of logs in volatile RAM to reduce IO and SSD wear. But keep in mind that if you do this, you will lose the journal once you shut down the LXC. To do that:
Code:
mkdir /etc/systemd/journald.conf.d
echo "[Journal]" > /etc/systemd/journald.conf.d/journald.conf
echo "# only write logs to volatile RAM and limit it to 20MB" >> /etc/systemd/journald.conf.d/journald.conf
echo "Storage=volatile" >> /etc/systemd/journald.conf.d/journald.conf
echo "SystemMaxUse=20M" >> /etc/systemd/journald.conf.d/journald.conf
echo "RuntimeMaxUse=20M" >> /etc/systemd/journald.conf.d/journald.conf

10.) Installing Pi-hole
For both LXCs do:
  • install Pi-hole via install script:
    Code:
    apt update && apt install curl
    curl -sSL https://install.pi-hole.net | PIHOLE_SKIP_OS_CHECK=true bash
    (PIHOLE_SKIP_OS_CHECK=true is needed when running Pi-hole in an LXC, as otherwise the installer fails the OS check because it doesn't recognize a Debian 12 LXC as a normal Debian 12 distro)
  • choose “eth0” as the interface
  • select any upstream DNS provider (I've chosen DNS.Watch)
  • continue with preselected blocklists
  • install web admin interface
  • install webserver
  • choose to log queries and show everything (you can disable that later, but it is useful for checking that everything works)
  • write down the webUI password at the bottom of the “Installation complete!” dialog!!!
  • open "http://<MasterIP>/admin" and "http://<BackupIP>/admin", login and check if everything is working

11.) Optional: Daily Pi-Hole Autoupdate
Do the following for both LXCs in case you want Pi-hole to be auto-updated every day:
  • create systemd service for it: nano /etc/systemd/system/update_pihole.service
    • Add there:
      Code:
      [Unit]
      Description=Update Pi-hole
      After=local-fs.target remote-fs.target network-online.target
      StartLimitIntervalSec=0
      
      [Service]
      Type=oneshot
      User=root
      ExecStart=/bin/bash -c "PIHOLE_SKIP_OS_CHECK=true sudo -E pihole -up"
      
      [Install]
      WantedBy=multi-user.target
  • create timer: nano /etc/systemd/system/update_pihole.timer
    • Add there:
      Code:
      [Unit]
      Description=Daily update of Pi-hole
      After=local-fs.target remote-fs.target network-online.target
      
      [Timer]
      Unit=update_pihole.service
      OnCalendar=daily
      Persistent=true
      
      [Install]
      WantedBy=timers.target
  • enable and start timer:
    Code:
    systemctl enable update_pihole.timer
    systemctl start update_pihole.timer
  • reload services/timers: systemctl daemon-reload
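To check that the timer is actually scheduled (and to trigger a test run right away), you can use:
Code:
systemctl list-timers update_pihole.timer   # shows the next scheduled run
systemctl start update_pihole.service       # runs the update once right now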
 
Last edited:
12.) Installing Gravity-Sync
Do the following for both LXCs:

12.1) Create gravity user​

  • create user: adduser gravity
  • add user to sudo group: usermod -a -G sudo gravity
    (unfortunately, gravity-sync can't be run without giving it full system admin privileges…)

12.2) Install Gravity-sync​

  • switch to gravity user: su -l gravity
Do this for "PiholeMaster" LXC only:
  • run install script: curl -sSL https://raw.githubusercontent.com/vmstan/gs-install/main/gs-install.sh | bash
    • for “Remote Pi-hole host address” choose “<BackupIP>”
    • for “Remote Pi-hole host username” choose “gravity”
    • ignore the warning that the other host is unknown
    • type in the password of the gravity user of the "PiholeBackup" LXC
    • if the connection fails, make sure you haven't forbidden SSH logins via password authentication yet
  • push the Pi-hole config from this LXC to the "PiholeBackup" LXC: gravity-sync push
Do this for "PiholeBackup" LXC only:
  • run install script: curl -sSL https://raw.githubusercontent.com/vmstan/gs-install/main/gs-install.sh | bash
    • for “Remote Pi-hole host address” choose “<MasterIP>”
    • for “Remote Pi-hole host username” choose “gravity”
    • ignore the warning that the other host is unknown
    • type in the password of the gravity user of the "PiholeMaster" LXC
    • if the connection fails, make sure you haven't forbidden SSH logins via password authentication yet
Do this for both LXCs:
  • tell gravity-sync to set up services for auto syncing of Pi-hole configs: gravity-sync auto
Edit:
Some people weren't asked for SSH credentials when setting up gravity-sync. In this case you might need to manually install the public keys on the other LXC first:
I solved this by running this command on the "<BackupIP>" LXC before running gravity-sync auto:

ssh-copy-id -i /etc/gravity-sync/gravity-sync.rsa.pub "<MasterIP>"

and "<MasterIP>" LXC:

ssh-copy-id -i /etc/gravity-sync/gravity-sync.rsa.pub "<BackupIP>"
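To verify the sync actually works in both directions you can, as the gravity user, compare the two Pi-holes (assuming your gravity-sync version still provides the compare command):
Code:
su -l gravity
gravity-sync compare   # should report that both Pi-holes match / nothing needs to be replicated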

13.) Optional: Forbid SSH using password
For the installation of gravity-sync it was required to allow logging in via SSH using passwords. I don't want that for security reasons, so it will be disabled with the following. But make sure to only do this after you have set up a public/private key pair for your LXC:
  • create new config file:
    Code:
    echo "# only allow authentification via public private keys" > /etc/ssh/sshd_config.d/sshd_config.conf
    echo "PasswordAuthentication no" >> /etc/ssh/sshd_config.d/sshd_config.conf
  • restart the sshd service, as just rebooting the LXC won't make sshd use the changed config:
    Code:
    systemctl disable ssh.socket
    systemctl enable ssh
    reboot
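After the reboot you can verify from the other LXC that key-based logins still work and that password logins are really rejected. A quick sanity check (assuming the gravity-sync key pair from step 12 is at /etc/gravity-sync/gravity-sync.rsa; adjust the path if yours differs):
Code:
# should log in without asking for a password
ssh -i /etc/gravity-sync/gravity-sync.rsa gravity@<MasterIP>
# should now fail with "Permission denied" because password logins are disabled
ssh -o PubkeyAuthentication=no gravity@<MasterIP>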
14.) Optional: Daily autoupdate of Gravity-sync
In case you want gravity-sync to be updated automatically each day:
  • create systemd service for it: nano /etc/systemd/system/update_gravity-sync.service
    • Add there:
      Code:
      [Unit]
      Description=Update Gravity-sync
      After=local-fs.target remote-fs.target network-online.target
      StartLimitIntervalSec=0
      
      [Service]
      Type=oneshot
      User=gravity
      ExecStart=/usr/local/bin/gravity-sync update
      
      [Install]
      WantedBy=multi-user.target
  • create timer: nano /etc/systemd/system/update_gravity-sync.timer
    • Add there:
      Code:
      [Unit]
      Description=Daily update of Gravity-sync
      After=local-fs.target remote-fs.target network-online.target
      
      [Timer]
      Unit=update_gravity-sync.service
      OnCalendar=daily
      Persistent=true
      
      [Install]
      WantedBy=timers.target
    • enable and start timer:
      Code:
      systemctl enable update_gravity-sync.timer
      systemctl start update_gravity-sync.timer
    • reload services/timers: systemctl daemon-reload

15.) Installing Keepalived for DNS failover
Gravity-sync only syncs configurations between the two Pi-hole LXCs, not the statistics/logs. So it would be great if only one of the two Pi-holes were actively used (the master) while the other one stays passive (the backup) until the master Pi-hole fails, at which point the backup Pi-hole becomes the new master. To accomplish this, the keepalived service is used. For that we need a third IP address, a virtual IP (VIP), which we tell the router to use as its only DNS server. This VIP will then always point to the master Pi-hole, no matter which of the two Pi-holes is currently the master.

15.1) Install keepalived​

For both LXCs do:
  • ssh in as root
  • install it: apt update && apt install keepalived

15.2) Configure keepalived​

Do this for the "PiholeMaster" LXC only:
  • edit config file: nano /etc/keepalived/keepalived.conf
    • add there:
      Code:
      ! Configuration File for keepalived
      
      vrrp_instance VI_1 {
          state MASTER
          interface eth0
          virtual_router_id 101
          priority 200
          advert_int 1
          authentication {
              auth_type AH
              auth_pass <ChooseAGoodPasswordUsedByBothLXCs>
          }
          virtual_ipaddress {
              <VirtualIP>
          }
      }
      Make sure "101" isn't used by other CARP/VRRP protocols like the HA of OPNsense and so on.
  • enable and start service:
    Code:
    systemctl enable keepalived
    systemctl start keepalived

Do this for the "PiholeBackup" LXC only:
  • edit config file: nano /etc/keepalived/keepalived.conf
    • add there:
      Code:
      ! Configuration File for keepalived
      
      vrrp_instance VI_1 {
          state BACKUP
          interface eth0
          virtual_router_id 101
          priority 100
          advert_int 1
          authentication {
              auth_type AH
              auth_pass <SamePasswordAsAbove>
          }
          virtual_ipaddress {
              <VirtualIP>
          }
      }

Do for both LXCs:
  • enable and start service:
    Code:
    systemctl enable keepalived
    systemctl start keepalived
  • check with “ip a” on both LXCs whether eth0 got a second IP assigned. Only the master should have the VIP. Shut down the master and see if the IP then moves to the other LXC.
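If you don't want to shut down the whole LXC for the failover test, stopping keepalived on the master has the same effect. A quick sketch of how such a test could look (<VirtualIP> being your virtual IP; the dig can be run from any machine in the subnet):
Code:
# on PiholeMaster: simulate a failure
systemctl stop keepalived
# on another machine: the VIP should still answer DNS (now served by PiholeBackup)
dig pi-hole.net @<VirtualIP>
# on PiholeMaster: bring it back, the VIP should move back to the master
systemctl start keepalived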
16.) Installing Unbound for recursive DNS lookups
Use unbound as a local DNS server that does full recursive DNS lookups instead of using an upstream DNS server that might not be trustworthy or might be filtering.

For both LXCs do:
  • install unbound: apt update && apt install unbound
  • create config file: nano /etc/unbound/unbound.conf.d/pi-hole.conf
    • add there:
      Code:
      server:
          # If no logfile is specified, syslog is used
          #logfile: "/var/log/unbound/unbound.log"
          verbosity: 0
      
          interface: 127.0.0.1
          port: 5335
          do-ip4: yes
          do-udp: yes
          do-tcp: yes
      
          # May be set to yes if you have IPv6 connectivity
          do-ip6: no
      
          # You want to leave this to no unless you have *native* IPv6. With 6to4 and
          # Teredo tunnels your web browser should favor IPv4 for the same reasons
          prefer-ip6: no
      
          # Use this only when you downloaded the list of primary root servers!
          # If you use the default dns-root-data package, unbound will find it automatically
          #root-hints: "/var/lib/unbound/root.hints"
      
          # Trust glue only if it is within the server's authority
          harden-glue: yes
      
          # Require DNSSEC data for trust-anchored zones, if such data is absent, the zone becomes BOGUS
          harden-dnssec-stripped: yes
      
          # Don't use capitalization randomization as it is known to cause DNSSEC issues sometimes
          # see https://discourse.pi-hole.net/t/unbound-stubby-or-dnscrypt-proxy/9378 for further details
          use-caps-for-id: no
      
          # Reduce EDNS reassembly buffer size.
          # IP fragmentation is unreliable on the Internet today, and can cause
          # transmission failures when large DNS messages are sent via UDP. Even
          # when fragmentation does work, it may not be secure; it is theoretically
          # possible to spoof parts of a fragmented DNS message, without easy
          # detection at the receiving end. Recently, there was an excellent study
          # >>> Defragmenting DNS - Determining the optimal maximum UDP response size for DNS <<<
          # by Axel Koolhaas, and Tjeerd Slokker (https://indico.dns-oarc.net/event/36/contributions/776/)
          # in collaboration with NLnet Labs explored DNS using real world data from the
          # RIPE Atlas probes and the researchers suggested different values for
          # IPv4 and IPv6 and in different scenarios. They advise that servers should
          # be configured to limit DNS messages sent over UDP to a size that will not
          # trigger fragmentation on typical network links. DNS servers can switch
          # from UDP to TCP when a DNS response is too big to fit in this limited
          # buffer size. This value has also been suggested in DNS Flag Day 2020.
          edns-buffer-size: 1232
      
          # Perform prefetching of close to expired message cache entries
          # This only applies to domains that have been frequently queried
          prefetch: yes
      
          # One thread should be sufficient, can be increased on beefy machines. In reality for most users running on small networks or on a single machine, it should be unnecessary to seek performance enhancement by increasing num-threads above 1.
          num-threads: 1
      
          # Ensure kernel buffer is large enough to not lose messages in traffic spikes
          so-rcvbuf: 1m
      
          # Ensure privacy of local IP ranges
          private-address: 192.168.0.0/16
          private-address: 169.254.0.0/16
          private-address: 172.16.0.0/12
          private-address: 10.0.0.0/8
          private-address: fd00::/8
          private-address: fe80::/10
  • restart unbound: systemctl restart unbound
  • test that unbound is working: dig pi-hole.net @127.0.0.1 -p 5335
  • tell Pi-hole about the EDNS limit: nano /etc/dnsmasq.d/99-edns.conf
    • add there:
      Code:
      edns-packet-max=1232
  • check if DNSSEC works by running these:
    Code:
    dig fail01.dnssec.works @127.0.0.1 -p 5335
    dig dnssec.works @127.0.0.1 -p 5335
    The first command should return a “SERVFAIL” and no IP address. The second “NOERROR” and an IP address.
  • open the Pi-hole webUI (at "http://<MasterIP>/admin" or "http://<BackupIP>/admin", depending on which LXC you are working on)
    • at “Settings → DNS”
      • remove existing upstream DNS servers and add “127.0.0.1#5335” as “Custom 1(IPv4)” upstream DNS server
      • select “Respond only on interface eth0”
      • check “Use DNSSEC”

Now log into your router's webUI, tell it to forward DNS requests to <VirtualIP>, and check in the Pi-hole webUI at "http://<VirtualIP>/admin" whether Pi-hole logs them.
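To confirm the whole chain (client → router → VIP → Pi-hole → unbound) works, a quick test from any client could look like this:
Code:
# goes through whatever DNS server the client got from DHCP (normally the router)
dig proxmox.com
# queries the virtual IP directly, bypassing the router
dig proxmox.com @<VirtualIP>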
DONE



Feel free to ask in case something isn't clear.
 
Last edited:
Thanks for such an elaborate tutorial.
I hope everybody appreciates the time you spent on documenting your experience, especially since you've already had it working for ages.
I think it's going to be used a lot - maybe edit/mark the post title tag [TUTORIAL]

THANKS AGAIN
 
Thanks for such an elaborate tutorial.
I hope everybody appreciates the time you spent on documenting your experience, especially since you've already had it working for ages.
I think it's going to be used a lot - maybe edit/mark the post title tag [TUTORIAL]

THANKS AGAIN
I usually document everything I do in my DokuWiki LXC for private use, as I otherwise can't remember it when I need to change/redo something after years. So most of the work was already done. I just needed to add some additional explanations and do the BBCode formatting.

The old Debian 11 Pi-hole LXCs have worked great for years, but instead of upgrading them to Debian 12 I thought it would be a good idea to redo everything from scratch and improve some things along the way (systemd services/timers instead of cronjobs, etc.).
So I used my old notes as a base to set it up again.
The new setup has only been running for a few days, but so far everything looks fine.
So following the commands above should result in a fully working Pi-hole HA setup.

Unfortunately I'm not able to edit my initial post anymore, as it is way too old (I took way too long to write the tutorial). So I can't link to the tutorial posts or set the thread to "[Tutorial]" anymore. Would be nice, in case one of the staff reads this, if the thread could be set to "tutorial" with a link to post #10 as the first line, something like "Read post #10 and #11 for a tutorial: https://forum.proxmox.com/threads/pi-hole-lxc-with-gravity-sync.109881/post-645646".
 
Last edited:
By the way... you could also use the Pi-hole LXCs to do other stuff that should be redundant without running a PVE cluster. I, for example, created a systemd timer that triggers a systemd service every 5 minutes. This service then runs a bash script that checks whether the LXC is currently the master. If it is not, it exits. If it is, it could run some other programs or scripts. I use this to update my DynDNS, so that this is redundant too and I can VPN into my LAN even if a node fails.
Maybe I will also add my certbot there.

With Debian 11 I was using free spdyn subdomains (they don't require logging in every month to keep the subdomains from being suspended) for DynDNS and Let's Encrypt. Now I use my own ".de" domain with a wildcard certificate/subdomains for DynDNS/Let's Encrypt (13 ct/month if you wait for a good deal at netcup, and there are scripts to use their API for DNS-01 challenges and DynDNS IP updates).

Example for both LXCs:

Create IP update script with failover​

  • create script: nano /root/update_ip_if_master.sh
    • add there:
      Code:
      #!/bin/bash
      HAIFACE="eth0"
      HAIP="<VirtualIP>"
      
      hacheck=$(ip address show dev "${HAIFACE}" | grep "${HAIP}")
      if [ -z "$hacheck" ]; then
          exit 0
      else
          # add your DynDNS update tool/script here
      fi
      Probably not the best way to check if the LXC is master or not, but I didn't find a way to directly poll the state of keepalived, and it has been working fine for over a year (see the keepalived notify sketch after this list for a possible alternative).
  • make it executable: chmod 750 /root/update_ip_if_master.sh
  • create a service for it: nano /etc/systemd/system/update_ip_if_master.service
    • add there:
    • Code:
      [Unit]
      Description=Update IP if LXC is master
      After=local-fs.target remote-fs.target network-online.target
      StartLimitIntervalSec=0
      
      [Service]
      Type=oneshot
      User=root
      ExecStart=/bin/bash /root/update_ip_if_master.sh
      
      [Install]
      WantedBy=multi-user.target
  • create a timer for service: nano /etc/systemd/system/update_ip_if_master.timer
    • add there:
      Code:
      [Unit]
      Description=Update DynDNS IP every 5 minutes
      After=local-fs.target remote-fs.target network-online.target
      
      [Timer]
      Unit=update_ip_if_master.service
      OnCalendar=*:0/5
      Persistent=true
      
      [Install]
      WantedBy=timers.target
  • enable and start timer:
    Code:
    systemctl enable update_ip_if_master.timer
    systemctl start update_ip_if_master.timer
  • reload services/timers: systemctl daemon-reload
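As an alternative to grepping for the VIP, keepalived can also call a script itself on every state change (notify_master/notify_backup/notify_fault). A sketch of how that could look, with a hypothetical helper at /usr/local/bin/keepalived-state.sh:
Code:
#!/bin/bash
# /usr/local/bin/keepalived-state.sh (make it executable with chmod 755)
# keepalived passes the new state (MASTER/BACKUP/FAULT) as the first argument
echo "$1" > /run/keepalived.state
and these lines added inside the vrrp_instance VI_1 { ... } block of /etc/keepalived/keepalived.conf:
Code:
notify_master "/usr/local/bin/keepalived-state.sh MASTER"
notify_backup "/usr/local/bin/keepalived-state.sh BACKUP"
notify_fault  "/usr/local/bin/keepalived-state.sh FAULT"
The check in update_ip_if_master.sh could then be replaced by something like grep -q MASTER /run/keepalived.state. Treat this as an untested sketch, though.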
 
Last edited:
DokuWiki LXC
Never used DokuWiki. I guess you chose it because it doesn't need a database. I must try it sometime in a Proxmox LXC; on top of Alpine? I love LXCs based on Alpine: fast & tiny! On top of what did you build your LXC?
 
Here, basically everything is Debian, if possible. I've been using it for 23 years, I like it, and it is rock solid. Not as stripped down and efficient as Alpine, but at least my KSM is benefiting from the Debian monoculture. ;)
Last time I tested Alpine it wasn't that convenient to use. But I guess that was more a case of "you can't teach an old dog new tricks".

And yes, the fact that it doesn't use a DB was the main reason. Even better, everything is stored as text files with a markup language that is still pretty readable when opened in any shell or text editor. It would be pretty bad to document everything I do regarding my homelab there if, when something breaks the whole homelab, I couldn't run that LXC anymore to read up on how to get things working again. This way I can always easily restore the text files from a backup and still get all the information I need.
 
This way I can always easily restore the text files from a backup and still get all the information I need.
Definitely a must, I totally agree. My documents are sprawled all over the place - keep coming across them when I don't need them, but never when I do!

As far as LXC architecture on Proxmox - yes I agree for best compatibility go Debian then Ubuntu. But you can't compete with the Alpine neatness.

Example: I have setup an Alpine LXC for Tailscale (access + subnet router) whose whole total Backup Size on PVE is 23.5mb.
Second example: I have an Alpine LXC with desktop & all, number of users, remoting etc.; Backup Size: 230mb

Look, I just looked at a Turnkey DokuWiki installation iso of 417mb. And that's the installation iso only! Have never tried Turnkey, what's your experience?
 
Look, I just looked at a Turnkey DokuWiki installation iso of 417mb. And that's the installation iso only! Have never tried Turnkey, what's your experience?
Not a fan of them. I personally want to understand and be in control of everything that I'm self-hosting. So for me that means setting up everything from scratch. But I get why people like these turnkey solutions, like TurnKey LXCs or Docker containers. It saves a lot of time you don't have to spend on actually learning, understanding or maintaining things.
 
Last edited:
Hey thanks for the detailed installation guide, it helped me a lot!

I am running into one issue: it keeps asking for a password for the user gravity.
I looked around and quite a few people have issues like this, but I have no idea what the exact config settings are that are required.

I tried Debian 11 and 12 but seem to have the same issue. I tried diagnosing with ssh -vvv, but it seems it doesn't pick up the keys.
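For reference, this is roughly what I'm checking on both LXCs (key paths as used in step 12 of the tutorial):
Code:
# on the LXC being connected TO, as the gravity user:
cat ~/.ssh/authorized_keys          # should contain the gravity-sync.rsa.pub key of the other LXC
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
# on the LXC connecting FROM, test the key explicitly:
ssh -i /etc/gravity-sync/gravity-sync.rsa gravity@<IP of the other LXC>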

 
