[SOLVED] Wireguard and DNS within LXC Container

dddsssaaa

New Member
Feb 4, 2026
Hey everyone,

I've been trying to figure this out for the better part of the day. I've got a commercial VPN subscription and I want my containers to send their traffic through WireGuard, which I've set up on the host. I get an error when running wg-quick up wg0:
/etc/resolvconf/update.d/libc: Warning: /etc/resolv.conf is not a symbolic link to /run/resolvconf/resolv.conf
Truth be told, I don't know how to fix the issue so I just manually set the DNS in the Proxmox web UI to match that of the config file I got from the commercial VPN:
[Interface]
PrivateKey = <private key>
Address = <address>
DNS = <dns address>

[Peer]
PublicKey = <public key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <endpoint>
If I do it that way, it works fine: curl ifconfig.io/all shows that I have the right IP, and dig +trace google.com shows that the host is using the DNS server specified in the config above.

The problem comes when I try to pass this VPN to the container. I have internet access (I can ping 8.8.8.8) but DNS fails to resolve anything (I can't ping google.com). If I manually set an external DNS server through the Proxmox web UI, the container shows the host's VPN address and I can access the internet as usual (pinging both 8.8.8.8 and google.com works). Manually setting the container's DNS to match the host's doesn't work (whether I type it in or leave the field blank), and running the wireguard binary within the container doesn't work either.
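For what it's worth, here is how I've been separating connectivity from resolution inside the container (the 10.2.0.1 address is just a stand-in for the DNS server from the provider's config):

```shell
# Raw connectivity: bypasses DNS entirely
ping -c 3 8.8.8.8

# Is the VPN's DNS server reachable and answering from in here?
# (replace 10.2.0.1 with the DNS address from the WireGuard config)
dig @10.2.0.1 google.com +short

# Compare against a public resolver to see whether only
# the VPN DNS path is broken, or all resolution is
dig @1.1.1.1 google.com +short
```

If the second dig times out but the third works, the container simply can't reach the provider's resolver through the host's tunnel.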

I added the following lines to the container's configuration:
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
and ran chown 100000:100000 /dev/net/tun as described on the wiki. I tried to follow a couple more threads but to no avail. Adding net.ipv4.ip_forward=1 to the guest's /etc/sysctl.conf also had no effect.
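To check whether the TUN passthrough from those config lines actually worked, I ran this inside the container (standard iproute2 commands; device numbers per the config above):

```shell
# The device node must exist with major 10, minor 200
ls -l /dev/net/tun        # expect something like: crw-rw-rw- ... 10, 200

# Confirm the container can actually create a TUN interface
# (requires iproute2; needs CAP_NET_ADMIN inside the container)
ip tuntap add dev test0 mode tun && ip tuntap del dev test0 mode tun && echo "tun OK"
```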

Any idea what the problem could be here?
Thanks
 
It seems like you are following tutorials for setting up WireGuard in the container itself, but you set up WireGuard on the host. Is there any requirement for the setup to be on the host? Otherwise I would deploy it in the container.

Regarding the warning on the host, it pretty much tells you what is wrong. wg-quick manages DNS resolvers via resolvconf, which you have installed, but for that to work /etc/resolv.conf must be a symlink to the file managed by resolvconf. Before you make any changes, back up /etc/resolv.conf in case stuff goes haywire: cp /etc/resolv.conf{,.bak}. Then create the link with ln -sf /run/resolvconf/resolv.conf /etc/resolv.conf (the -f is needed because the file already exists).
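The steps above as one sketch (assuming the standard Debian resolvconf layout; adjust paths if your distro differs):

```shell
# Back up the current resolv.conf first
cp /etc/resolv.conf{,.bak}

# Replace it with a symlink to the resolvconf-managed file,
# which is what wg-quick's DNS= handling expects to find
ln -sf /run/resolvconf/resolv.conf /etc/resolv.conf

# Verify the link points where it should
readlink /etc/resolv.conf   # should print /run/resolvconf/resolv.conf
```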

To set up the WireGuard connection on the host and use it in the container, you can create a bridge, NAT it to the WireGuard interface, and then use that bridge for your container:

* Create a bridge network via the PVE GUI and assign a private IP/CIDR, something like 10.0.0.1/24; leave the gateway option empty. Apply the configuration.
* Enable NAT for this interface by editing /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

iface nic0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.150/24
    gateway 192.168.0.1
    bridge-ports nic0
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 10.0.0.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o wg0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.0.0.0/24' -o wg0 -j MASQUERADE

source /etc/network/interfaces.d/*

* Set a static IP address for the container using the newly created bridge (vmbr1 in my case), e.g. 10.0.0.10/24, with the bridge address 10.0.0.1 as gateway.
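Assuming a container ID of 101 (hypothetical), the same assignment can be done from the host CLI with pct, roughly like this:

```shell
# Attach the container to vmbr1 with a static address;
# the gateway is the bridge IP on the host (10.0.0.1)
pct set 101 -net0 name=eth0,bridge=vmbr1,ip=10.0.0.10/24,gw=10.0.0.1

# Point the container at a resolver reachable through the tunnel
pct set 101 -nameserver 1.1.1.1
```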
 
You're right, it might be simpler to just set up WireGuard in each container that needs it. I thought the overhead would be smaller if I set up a single connection on the host, but long term it would be more annoying to maintain. Thank you for the thorough explanation!
 
I added the following lines to the container's configuration:

lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir

and ran chown 100000:100000 /dev/net/tun as described on the wiki. I tried to follow a couple more threads but to no avail. Adding net.ipv4.ip_forward=1 to the guest's /etc/sysctl.conf also had no effect.

BTW, all that business is outdated, I believe since, what, PVE 8.something? The PVE kernel has wg built in, so there's no need to install stuff on the host (sounds like you've got that going, but if you did install wireguard-dkms on the PVE host, maybe that's messing with your config?)

To deploy, for example, a WireGuard endpoint like ghcr.io/wg-easy/wg-easy nested in an LXC:

-Ensure the latest PVE 8.x or later kernel.
-Deploy a basic unprivileged Debian 13 LXC template
-Enable nesting, mknod, and keyctl for the LXC
-Add this to the end of the compose file:
YAML:
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=0
      - net.ipv6.conf.all.forwarding=1
      - net.ipv6.conf.default.forwarding=1
(you can ditch the IPv6 stuff if you have IPv6 disabled).
-Then fire it up.
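The "enable nesting, mknod, and keyctl" step above can also be done from the host CLI; the container ID 101 here is hypothetical:

```shell
# Enable the container features that nested Docker + WireGuard need
pct set 101 -features nesting=1,keyctl=1,mknod=1

# Restart the container so the features take effect
pct stop 101 && pct start 101
```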

For what you're trying to do (which, honestly, is still a little unclear), it sounds like you're trying to set up a WireGuard tunnel on the PVE host, which sounds insane to me, but whatever floats your boat. You can of course install wireguard-tools in every LXC and have each set up a tunnel, but that seems only slightly less insane.

If it were me, I'd just create the LXC like above, load the gluetun docker image on it, configure your VPN service in gluetun, then configure standard routes on the PVE host to funnel your LXCs' traffic over it.
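As a rough sketch of that idea (all addresses hypothetical: the gluetun LXC sits at 10.0.0.2 on a shared private bridge, and 102 is some container whose traffic should go through the VPN):

```shell
# Point the VPN-bound container's default route at the gluetun LXC
# instead of at the host; gluetun then NATs it into the tunnel
pct set 102 -net0 name=eth0,bridge=vmbr1,ip=10.0.0.20/24,gw=10.0.0.2
```

The nice part is that switching a container between direct internet and VPN internet is then just a gateway change, not a host-route change.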

Now that I think of it, this sounds like an ideal candidate for the new SDN features of PVE: all your must-be-connected-via-VPN CTs can sit on their own private CIDR routed out through the gluetun LXC, and you can use the SDN features to "switch" the entire network from direct internet to VPN internet at will. One nice thing about it is no mucking with the host routes, which can be a source of major "oopsies" sometimes, so not something to do on a production server that isn't scheduled for down maintenance.

Anyway, just wanted to chime in on the outdated WireGuard cruft (which may be causing issues). There's a lot of PVE knowledge amassed by us all over the years, and if you use any AI search assistant in your favorite browser, you're gonna get a lot of hallucinations from chatbots living in 2013. Protip: narrow search results by date. I usually start with "month", then "year", then accept that the old info is prob still current.

Oh, almost forgot: your VPN provider stipulated the DNS server, but do they block other DNS? You can often just use 1.1.1.1 and be better off. Regardless, the peer config dictates the DNS on the tunnel for the connecting peer (in my suggestion, that would be the gluetun CT). But that doesn't mean any other VMs/LXCs need to use that DNS server. For my money (and security) I'd load a Technitium container right alongside the gluetun container (or two of them, if you're paranoid about DNS uptime), and configure it to do local recursive lookup for all your hosts, which bypasses your VPN provider entirely with DNS-over-TLS directly to Cloudflare's servers. Plus you get the speed of consolidated local caching.

Good luck with whatever it is you're trying to do !

-=dave
 