[SOLVED] OpenVPN client issue in unprivileged container

vincent2

Member
Jan 4, 2019
Hi,

I have Proxmox 5.3-6 running an unprivileged LXC container with Ubuntu 18.04, fully upgraded, running OpenVPN 2.4.4. I'd like to initiate an OpenVPN connection from this container; however, it's not fully working.

I followed the steps described here to make tun0 available in the unprivileged container, which appear to work, since tun0 shows up in my container: https://forum.proxmox.com/threads/openvpn-in-unprivileged-container.38670/#post-222147
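For reference, the usual way to do this is a couple of raw LXC lines in the container config on the host; a sketch (assuming container ID 104 and the pre-cgroup2 key names):

Code:
# /etc/pve/lxc/104.conf (on the Proxmox host)
# allow the container to access the tun character device (major 10, minor 200)
lxc.cgroup.devices.allow: c 10:200 rwm
# bind-mount the host's /dev/net/tun into the container
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file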

I can (sort of) initiate a VPN connection, but after the last line (route 0.0.0.0/1 via 10.24.56.1):

Code:
Wed Jan  2 08:02:36 2019 TUN/TAP device tun0 opened
Wed Jan  2 08:02:36 2019 Note: Cannot set tx queue length on tun0: Operation not permitted (errno=1)
Wed Jan  2 08:02:36 2019 do_ifconfig, tt->did_ifconfig_ipv6_setup=0
Wed Jan  2 08:02:36 2019 /sbin/ip link set dev tun0 up mtu 1500
Wed Jan  2 08:02:36 2019 /sbin/ip addr add dev tun0 10.24.56.13/24 broadcast 10.24.56.255
Wed Jan  2 08:02:41 2019 /sbin/ip route add 213.152.162.68/32 via 10.0.42.1
Wed Jan  2 08:02:41 2019 /sbin/ip route add 0.0.0.0/1 via 10.24.56.1

my SSH connection drops and I cannot SSH into the container anymore. I can see that AirVPN (on its website) has registered an incoming client, so the connection itself appears to be successful. Connecting via the Proxmox console, I can see:

Code:
root@tm:~# ip route
0.0.0.0/1 via 10.24.56.1 dev tun0
default via 10.0.42.1 dev eth0 proto static
10.0.42.1 dev eth0 proto static scope link
10.24.56.0/24 dev tun0 proto kernel scope link src 10.24.56.13
128.0.0.0/1 via 10.24.56.1 dev tun0
213.152.162.68 via 10.0.42.1 dev eth0


root@tm:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
   inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
   link/none
   inet 10.24.56.13/24 brd 10.24.56.255 scope global tun0
      valid_lft forever preferred_lft forever
   inet6 fe80::1067:6fe8:fd60:ee86/64 scope link stable-privacy
      valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
   link/ether 8a:e4:3d:25:9b:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
   inet 10.0.42.46/32 scope global eth0
      valid_lft forever preferred_lft forever
   inet6 fe80::88e4:3dff:fe25:9ba7/64 scope link
      valid_lft forever preferred_lft forever

root@tm:~# curl ipinfo.io
curl: (6) Could not resolve host: ipinfo.io

root@tm:~# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=61 time=13.9 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=61 time=13.0 ms
^C
--- 1.1.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 13.055/13.482/13.909/0.427 ms

When I add these routes via a route-up script referenced in my ovpn.conf:
Code:
#!/bin/sh

echo "Adding default route to $route_vpn_gateway with /0 mask..."
ip route add default via $route_vpn_gateway

echo "Removing /1 routes..."
ip route del 0.0.0.0/1 via $route_vpn_gateway
ip route del 128.0.0.0/1 via $route_vpn_gateway
the SSH connection remains, but my traffic is not routed over the VPN connection.
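For completeness, the script above is referenced from the .ovpn config roughly like this (the path is just an example):

Code:
# in the client .ovpn config; the script path is an example
script-security 2
route-up /etc/openvpn/route-up.sh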

Also, my /etc/resolv.conf still points to my local DNS (on my local network), so that is also probably why the curl doesn't work.

I have also tried this: hungred.com/how-to/setup-openvpn-on-proxmox-lxc/
But adding the
Code:
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"
line results in my container not being able to start:
Code:
-- Unit pve-container@104.service has begun starting up.
Jan 04 08:56:11 pve kernel: EXT4-fs (dm-10): mounted filesystem with ordered data mode. Opts: (null)
Jan 04 08:56:11 pve audit[57077]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-104_</var/lib/lxc>" pid=57077 comm="apparmor_parser"
Jan 04 08:56:11 pve kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 04 08:56:11 pve kernel: audit: type=1400 audit(1546588571.870:48): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-104_</var/lib/lxc>" pid=57077 comm="apparmor_
Jan 04 08:56:11 pve kernel: IPv6: ADDRCONF(NETDEV_UP): veth104i0: link is not ready
Jan 04 08:56:11 pve systemd-udevd[57078]: Could not generate persistent MAC address for vethK1R386: No such file or directory
Jan 04 08:56:12 pve kernel: vmbr0: port 6(veth104i0) entered blocking state
Jan 04 08:56:12 pve kernel: vmbr0: port 6(veth104i0) entered disabled state
Jan 04 08:56:12 pve kernel: device veth104i0 entered promiscuous mode
Jan 04 08:56:12 pve kernel: eth0: renamed from vethK1R386
Jan 04 08:56:12 pve kernel: vmbr0: port 6(veth104i0) entered disabled state
Jan 04 08:56:12 pve kernel: device veth104i0 left promiscuous mode
Jan 04 08:56:12 pve kernel: vmbr0: port 6(veth104i0) entered disabled state
Jan 04 08:56:12 pve systemd[1]: pve-container@104.service: Control process exited, code=exited status=1
Jan 04 08:56:12 pve systemd[1]: pve-container@104.service: Killing process 57065 (lxc-start) with signal SIGKILL.
Jan 04 08:56:12 pve systemd[1]: pve-container@104.service: Killing process 57203 (apparmor_parser) with signal SIGKILL.
Jan 04 08:56:12 pve pvestatd[2029]: unable to get PID for CT 104 (not running?)
Jan 04 08:56:12 pve systemd[1]: Failed to start PVE LXC Container: 104.
-- Subject: Unit pve-container@104.service has failed
-- Defined-By: systemd
-- Support:
--
-- Unit pve-container@104.service has failed.
--
-- The result is failed.
Jan 04 08:56:12 pve systemd[1]: pve-container@104.service: Unit entered failed state.
Jan 04 08:56:12 pve systemd[1]: pve-container@104.service: Failed with result 'exit-code'.
Jan 04 08:56:12 pve pvestatd[2029]: modified cpu set for lxc/100: 3-4,6-7
Jan 04 08:56:12 pve pct[57059]: command 'systemctl start pve-container@104' failed: exit code 1
Jan 04 08:56:12 pve pct[57056]: <root@pam> end task UPID:pve:0000DEE3:000A6E1B:5C2F119B:vzstart:104:root@pam: command 'systemctl start pve-container@104' failed: exit code 1

Doing so within the container itself gives the following errors:
Code:
root@tm2:~# modprobe tun; cd /dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun
modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.15.18-9-pve/modules.dep.bin'
modprobe: FATAL: Module tun not found in directory /lib/modules/4.15.18-9-pve
mknod: net/tun: Operation not permitted
chmod: cannot access 'net/tun': No such file or directory
root@tm2:/dev#
root@tm2:/dev# nano /etc/rc.local
root@tm2:/dev# chmod +x /etc/rc.local
root@tm2:/dev# cd /etc/
root@tm2:/etc# ./rc.local
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
mknod: /dev/net/tun: Operation not permitted

I think I just need to add a route to ensure that local connections to my internal container IP are not routed through the VPN. If I do the exact same configuration on the Proxmox host itself, the VPN works just fine. I've also tried other OpenVPN configurations with another OpenVPN server/provider, but that results in the same issues.

Any help is greatly appreciated! :)
 
root@tm:~# ip route
0.0.0.0/1 via 10.24.56.1 dev tun0
default via 10.0.42.1 dev eth0 proto static
10.0.42.1 dev eth0 proto static scope link
10.24.56.0/24 dev tun0 proto kernel scope link src 10.24.56.13
128.0.0.0/1 via 10.24.56.1 dev tun0
213.152.162.68 via 10.0.42.1 dev eth0

This means that every single packet received by your container will go through the VPN.

You seem to have configured eth0's address with a /32 netmask (ip a confirms this) - could you please post your `/etc/network/interfaces` from within the container and the container config?

Depending on where you connect to the container from you could set a more specific route for this network.
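For example, assuming you SSH in from a (hypothetical) 192.168.1.0/24 network, something along these lines inside the container would keep that traffic off the tunnel:

Code:
# 192.168.1.0/24 is a placeholder for the network you connect from
ip route add 192.168.1.0/24 via 10.0.42.1 dev eth0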

* From which IP would you like to connect to the container?
* Any particular reason not to use an actual network (e.g. 10.0.42.46/24) instead of a single IP with a /32?
 
Wow, never thought that would be the issue. Thanks so much Stoiko! I changed it to /24 and I can now initiate the VPN connection and it is functional.
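For anyone hitting the same thing: the corrected network entry in the container config now looks roughly like this (MAC address and other options omitted):

Code:
# /etc/pve/lxc/104.conf - hwaddr and other options omitted
net0: name=eth0,bridge=vmbr0,ip=10.0.42.46/24,gw=10.0.42.1,type=veth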
The only issue that still remains is that my DNS servers are not changed to use the ones the VPN server pushes.

When connected to the VPN:
Code:
root@tm:/etc/openvpn# cat /etc/resolv.conf
# --- BEGIN PVE ---
search something.isp.tld
nameserver 10.0.42.40 (<======== my internal DNS)
# --- END PVE ---
root@tm:/etc/openvpn# nslookup ipinfo.io
Server:       10.0.42.40
Address:   10.0.42.40#53

Non-authoritative answer:
Name:   ipinfo.io
Address: 216.239.38.21
Name:   ipinfo.io
Address: 216.239.32.21
Name:   ipinfo.io
Address: 216.239.36.21
Name:   ipinfo.io
Address: 216.239.34.21

I also notice the following "inactivity time-outs" in the VPN log after the connection has been successfully initiated:

Code:
Fri Jan  4 14:25:39 2019 [server] Inactivity timeout (--ping-restart), restarting. [<============================]
Fri Jan  4 14:25:39 2019 SIGUSR1[soft,ping-restart] received, process restarting  
Fri Jan  4 14:25:39 2019 Restart pause, 5 second(s)
Fri Jan  4 14:25:44 2019 TCP/UDP: Preserving recently used remote address: [AF_INET]213.152.xxx.xx:443
Fri Jan  4 14:25:44 2019 Socket Buffers: R=[212992->212992] S=[212992->212992]
Fri Jan  4 14:25:44 2019 UDP link local: (not bound)
Fri Jan  4 14:25:44 2019 UDP link remote: [AF_INET]213.152.xxx.xx:443
Fri Jan  4 14:25:44 2019 TLS: Initial packet from [AF_INET]213.152.xxx.xx, sid=ef36b1b4 e254c708
Fri Jan  4 14:25:44 2019 VERIFY OK: depth=1, 
Fri Jan  4 14:25:44 2019 VERIFY KU OK
Fri Jan  4 14:25:44 2019 Validating certificate extended key usage
Fri Jan  4 14:25:44 2019 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
Fri Jan  4 14:25:44 2019 VERIFY EKU OK
Fri Jan  4 14:25:44 2019 VERIFY OK: depth=0, 
Fri Jan  4 14:25:44 2019 Control Channel: TLSv1.2, cipher TLSv1.2 DHE-RSA-AES256-GCM-SHA384, 4096 bit RSA
Fri Jan  4 14:25:44 2019 [server] Peer Connection Initiated with [AF_INET]213.152.xxx.xx:443
Fri Jan  4 14:25:46 2019 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
Fri Jan  4 14:25:46 2019 PUSH: Received control message: 'PUSH_REPLY,comp-lzo no,redirect-gateway  def1 bypass-dhcp,dhcp-option DNS 10.24.56.1,route-gateway 10.24.56.1,topology subnet,ping 10,ping-restart 60,ifconfig 10.24.56.13 255.255.255.0,peer-id 14,cipher AES-256-GCM'
Fri Jan  4 14:25:46 2019 OPTIONS IMPORT: timers and/or timeouts modified
Fri Jan  4 14:25:46 2019 OPTIONS IMPORT: compression parms modified
Fri Jan  4 14:25:46 2019 OPTIONS IMPORT: --ifconfig/up options modified
Fri Jan  4 14:25:46 2019 OPTIONS IMPORT: route options modified
Fri Jan  4 14:25:46 2019 OPTIONS IMPORT: route-related options modified
Fri Jan  4 14:25:46 2019 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
Fri Jan  4 14:25:46 2019 OPTIONS IMPORT: peer-id set
Fri Jan  4 14:25:46 2019 OPTIONS IMPORT: adjusting link_mtu to 1625
Fri Jan  4 14:25:46 2019 OPTIONS IMPORT: data channel crypto options modified
Fri Jan  4 14:25:46 2019 Data Channel: using negotiated cipher 'AES-256-GCM'
Fri Jan  4 14:25:46 2019 Outgoing Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
Fri Jan  4 14:25:46 2019 Incoming Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
Fri Jan  4 14:25:46 2019 Preserving previous TUN/TAP instance: tun0
Fri Jan  4 14:25:46 2019 Initialization Sequence Completed

I think it has something to do with the nameservers not being set. I have the container DNS currently configured to "use host settings". What can I change there to make sure the container accepts the new nameservers?

Thanks a lot for your help!! :D
 
I think it has something to do with the nameservers not being set. I have the container DNS currently configured to "use host settings". What can I change there to make sure the container accepts the new nameservers?

IIRC this needs to be set somehow in the OpenVPN Server and maybe in the .ovpn config file.
 
Nice - glad the resolution worked!

Regarding the DNS-IPs:
* PVE sets /etc/resolv.conf when you boot the container (see our admin-guide https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_guest_operating_system_configuration or `man pct`)
* you can prevent this by creating an appropriate .pve-ignore file (man pct) - example below
* you could set the dns-servers from your VPN for the container statically (just edit it in the GUI and enter the IPs from them)
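
Regarding the second point, the ignore file is created inside the container, e.g.:

Code:
# inside the container: tell PVE not to regenerate resolv.conf on container start
touch /etc/.pve-ignore.resolv.conf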

Probably the cleanest solution would be to install resolvconf in the container and configure openvpn to update the nameservers when connecting (that way you don't need to manually change the configuration, should your VPN provider change their DNS).

The Debian wiki has a description of what needs to be configured for server and client (the up and down scripts): https://wiki.debian.org/openvpn
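
On Debian/Ubuntu the openvpn package ships an update-resolv-conf helper script; with resolvconf installed, hooking it into the client config looks roughly like this:

Code:
# in the client .ovpn config, with the resolvconf package installed
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf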

Hope that helps!
 
Never mind about the DNS issue, I was doing some other stupid thing on my side (on the Internet no one knows you're a dog, except when you ask questions :) ).

All resolved; the issue was my IP definition, as pointed out by Stoiko. Thanks a lot for your fast support, Stoiko and Oguz. When I have a bit more cash again, I will definitely buy that Proxmox VE Community subscription to support you guys!
 
Glad it worked! - And thanks! :)
 
