No network in WireGuard container

Andreas S.

Hello Proxmoxers,

I want to set up a WireGuard VPN from my home office to our small office. But before I can even start, I have to get basic networking up. Currently I can't get any traffic out of the stock WireGuard container.

On the host in my home office there are 3 VMs and the newly created WireGuard container:
  • Node 100 - Ubuntu 18.04 - Was meant to be a secondary Samba4 AD DC, but it is not used and always off.
  • Node 101 - Ubuntu 18.04 - Originally planned for tinkering with Kubernetes, but now runs a Docker registry caching proxy and a PyPI caching proxy.
  • Node 104 - HassOS - Home Assistant testbed. Basically Debian running Docker containers, AFAIK.
  • Node 121 - WireGuard - network not working
With the other guests I don't have any network problems at all.

I basically followed the tutorial on creating containers and installed the debian-10-turnkey-wireguard_16.1-1_amd64.tar.gz container template.
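
For reference, a rough sketch of the commands involved (reconstructed from memory; the storage name and option values below are taken from my pct config further down, not a verbatim transcript):

Code:
# refresh the template index and download the TurnKey WireGuard template
root@node-101:~# pveam update
root@node-101:~# pveam download local debian-10-turnkey-wireguard_16.1-1_amd64.tar.gz

# create the container with the same settings that show up in "pct config 121"
root@node-101:~# pct create 121 local:vztmpl/debian-10-turnkey-wireguard_16.1-1_amd64.tar.gz \
    --hostname wireguard --cores 2 --memory 512 --swap 512 \
    --rootfs local-lvm:8 --unprivileged 1 \
    --net0 name=eth0,bridge=vmbr0,ip=192.168.1.10/24,gw=10.116.0.1,type=veth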

From inside the container I can't reach anything on the network so far:
Code:
root@node-101:~# pct enter 121
root@wireguard ~# less /etc/network/interfaces
# UNCONFIGURED INTERFACES
# remove the above line if you edit this file

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.1.10/24
# --- BEGIN PVE ---
        post-up ip route add 10.116.0.1 dev eth0
        post-up ip route add default via 10.116.0.1 dev eth0
        pre-down ip route del default via 10.116.0.1 dev eth0
        pre-down ip route del 10.116.0.1 dev eth0
# --- END PVE ---

allow-hotplug eth1
iface eth1 inet dhcp

# try to ping the gateway
root@wireguard ~# ping 10.116.0.1
PING 10.116.0.1 (10.116.0.1) 56(84) bytes of data.
^C
--- 10.116.0.1 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 52ms

# try to ping the host
root@wireguard ~# ping 10.116.0.10
PING 10.116.0.10 (10.116.0.10) 56(84) bytes of data.
^C
--- 10.116.0.10 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 13ms

root@wireguard ~# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6e:d3:35:e5:14:dd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6cd3:35ff:fee5:14dd/64 scope link
       valid_lft forever preferred_lft forever

Some general host information:

Code:
root@node-101:~# pct config 121
arch: amd64
cores: 2
hostname: wireguard
memory: 512
net0: name=eth0,bridge=vmbr0,gw=10.116.0.1,hwaddr=6E:D3:35:E5:14:DD,ip=192.168.1.10/24,type=veth
ostype: debian
rootfs: local-lvm:vm-121-disk-0,size=8G
swap: 512
unprivileged: 1

root@node-101:~# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 00:24:1d:84:5b:1a brd ff:ff:ff:ff:ff:ff
3: enp7s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:08:54:d1:63:d0 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:24:1d:84:5b:1a brd ff:ff:ff:ff:ff:ff
    inet 10.116.0.10/24 brd 10.116.0.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::224:1dff:fe84:5b1a/64 scope link
       valid_lft forever preferred_lft forever
5: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i0 state UNKNOWN group default qlen 1000
    link/ether ba:2b:29:9f:6f:1c brd ff:ff:ff:ff:ff:ff
6: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:d8:fb:32:73:a7 brd ff:ff:ff:ff:ff:ff
7: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether e6:28:2c:01:69:6c brd ff:ff:ff:ff:ff:ff
8: fwln101i0@fwpr101p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
    link/ether 4e:d8:fb:32:73:a7 brd ff:ff:ff:ff:ff:ff
13: veth121i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:88:5c:b4:03:24 brd ff:ff:ff:ff:ff:ff link-netnsid 0
14: tap104i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 8e:0d:cf:04:bc:77 brd ff:ff:ff:ff:ff:ff

root@node-101:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.114-1-pve)
pve-manager: 6.4-6 (running version: 6.4-6/be2fa32c)
pve-kernel-5.4: 6.4-2
pve-kernel-helper: 6.4-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-2
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.6-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-5
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-3
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

Any idea where I could look further to get it up and running?
Thanks for any input.

Best,
Andreas
 
Hi,

you might have to enable the nesting option (to get networking up inside the container, since systemd sometimes causes issues there) and pass /dev/net/tun into the container via a bind mount (for the VPN interface inside the container).
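
A minimal sketch of what that could look like, assuming VMID 121 (the feature flag and the LXC config entries below are the usual way to do this; adjust the ID and paths to your setup):

Code:
# enable nesting for the container
root@node-101:~# pct set 121 --features nesting=1

# /etc/pve/lxc/121.conf -- allow the TUN char device (major 10, minor 200)
# and bind-mount it into the container
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

Then restart the container (pct stop 121 && pct start 121) so the changes take effect.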
 
