Mullvad VPN Daemon issues

Operating system: Host: Proxmox 7, Debian 11 | Containers: Debian 10, 11; Ubuntu 18.04, 20.04

App version: 2021.01-04

I can't get the daemon to start, and I've already emailed the support team. Their suggestion was to use plain WireGuard or OpenVPN instead, but each has its own problems (WireGuard blocks LAN access, OpenVPN resets my tracker connection daily), and I'm still relatively new to Linux. The CLI app was working with zero issues before I upgraded to Proxmox 7.

Error message

Code:
[mullvad_daemon][ERROR] Error: Unable to initialize daemon
Caused by: Unable to initialize split tunneling
Caused by: Unable to initialize net_cls cgroup instance
Caused by: EPERM: Operation not permitted
[mullvad_daemon][DEBUG] Process exiting with code 1
 
hi,

Curious: if I've created a new container after installing Proxmox 7, shouldn't it automatically use cgroup2 instead of the older version? At least that's how I read the documentation.
Yes, PVE 7 automatically chooses cgroupv2 (which is what you should use); however, it won't automatically change the lines in your container configuration to reflect that change :)

So I followed the documentation and changed the line in GRUB, and I'm still having the same issues. Whether it's v1 or v2, the daemon won't start.
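(For context, the host-side GRUB change referred to here is presumably the one from the Proxmox 7 upgrade notes: forcing the legacy cgroup hierarchy via the kernel command line, roughly like this.)

Code:
# /etc/default/grub -- assumption: this is the line the documentation means
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
# then apply and reboot
update-grub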
You don't have to change the cgroup version of the host at all. You can revert to cgroupv2 and change the container configuration instead (it is located in /etc/pve/lxc/CTID.conf, where CTID is the ID of your container).

Change the cgroup key entries in that file to cgroup2 (when your host is also set to cgroupv2); then the VPN daemon inside the container should hopefully work.
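For example, an existing device rule would move from the old v1 key to the v2 key; a minimal sketch, assuming the container config already contains a tun device rule:

Code:
# /etc/pve/lxc/CTID.conf -- old cgroup v1 key
lxc.cgroup.devices.allow: c 10:200 rwm
# becomes the cgroup v2 key when the host runs pure cgroupv2
lxc.cgroup2.devices.allow: c 10:200 rwm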
 
Has anybody found a solution to this issue? I am just trying out Mullvad in an LXC and having the same problem.

Code:
# cat /etc/pve/lxc/101.conf
arch: amd64
cores: 2
features: nesting=1
hostname: qbit.home.lan
memory: 6144
mp0: /mnt/store/downloads/,mp=/mnt/downloads
mp1: /mnt/media/,mp=/mnt/media
nameserver: 192.168.50.200
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.50.1,hwaddr=26:47:E0:74:42:9D,ip=192.168.50.120/24,type=veth
onboot: 0
ostype: debian
rootfs: ssdzfs:subvol-101-disk-0,size=8G
searchdomain: home.lan
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir

I was using another service earlier that used OpenVPN, and that above config worked fine.

Running on PVE 7.1-10
 
Have you checked the permissions and owner of /dev/net/tun on the PVE host?

Try comparing the outputs from the PVE host and from inside the container:
Code:
ls -nal /dev/net/tun

You might need to set the owner with chown or give the device read/write permissions via chmod.
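A minimal sketch of what that could look like on the PVE host, assuming the default unprivileged-container ID mapping where container root maps to host UID/GID 100000:

Code:
# on the PVE host: hand the tun device to the container's mapped root user
chown 100000:100000 /dev/net/tun
# and make sure it is readable and writable
chmod 666 /dev/net/tun
ls -nal /dev/net/tun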
 
I use Mullvad, but I've got pfSense as a VM managing my connections; works very well for me.
 
LXC:

Code:
# ls -nal /dev/net/tun
crw-rw-rw- 1 65534 65534 10, 200 Jan  7 19:18 /dev/net/tun

PVE:

Code:
# ls -nal /dev/net/tun
crw-rw-rw- 1 0 0 10, 200 Jan  7 19:18 /dev/net/tun

What perms are required exactly?
 
Ok, I think I understand what you're saying now. I fixed the perms.

LXC:

Code:
# ls -nal /dev/net/tun
crw-rw-rw- 1 0 0 10, 200 Jan  7 19:18 /dev/net/tun

PVE:

Code:
# ls -nal /dev/net/tun
crw-rw-rw- 1 100000 100000 10, 200 Jan  7 19:18 /dev/net/tun

Still having the same issue.

Code:
# mullvad  connect
Error: Management interface error
Caused by: Management RPC server or client error
Caused by: transport error
Caused by: error trying to connect: No such file or directory (os error 2)
Caused by: No such file or directory (os error 2)
# systemctl status mullvad-daemon
* mullvad-daemon.service - Mullvad VPN daemon
     Loaded: loaded (/opt/Mullvad VPN/resources/mullvad-daemon.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Mon 2022-02-07 16:42:29 CST; 2min 35s ago
    Process: 384 ExecStart=/opt/Mullvad VPN/resources/mullvad-daemon -v --disable-stdout-timestamps (code=exited, status=1/FAILURE)
   Main PID: 384 (code=exited, status=1/FAILURE)
        CPU: 34ms

Feb 07 16:42:28 qbit systemd[1]: mullvad-daemon.service: Main process exited, code=exited, status=1/FAILURE
Feb 07 16:42:28 qbit systemd[1]: mullvad-daemon.service: Failed with result 'exit-code'.
Feb 07 16:42:29 qbit systemd[1]: mullvad-daemon.service: Scheduled restart job, restart counter is at 5.
Feb 07 16:42:29 qbit systemd[1]: Stopped Mullvad VPN daemon.
Feb 07 16:42:29 qbit systemd[1]: mullvad-daemon.service: Start request repeated too quickly.
Feb 07 16:42:29 qbit systemd[1]: mullvad-daemon.service: Failed with result 'exit-code'.
Feb 07 16:42:29 qbit systemd[1]: Failed to start Mullvad VPN daemon.
 
I'm having exactly the same issue with a Debian 10 LXC running the Mullvad client after migrating from PVE 6.4-13 to 7.1-10. The Mullvad client uses WireGuard; I've also tried running WireGuard directly and get similar results.

Code:
arch: amd64
cores: 1
hostname: vpn-gateway
memory: 128
nameserver: 193.138.218.74 10.8.0.1
net0: name=eth0,bridge=vmbr0,gw=192.168.2.1,hwaddr=9E:A8:0C:70:D6:7C,ip=192.168.2.26/24,type=veth
net1: name=eth1,bridge=vmbr2,hwaddr=F6:9C:08:66:E8:6A,ip=192.168.3.1/24,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-108-disk-0,size=8G
swap: 0
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.hook.autodev: sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"
 
Same here :(
My configuration looks like this:

Code:
arch: amd64
cores: 1
hostname: mullvad
memory: 512
mp0: /storage/media,mp=/shared
net0: name=eth0,bridge=vmbr0,gw=10.10.10.1,hwaddr=9E:18:AF:A0:A2:9B,ip=10.10.105.12/24,type=veth
net1: name=eth1,bridge=vmbr1,hwaddr=76:10:0A:80:9A:81,ip=10.0.0.99/24,type=veth
ostype: ubuntu
rootfs: disks:vm-105-disk-0,size=250G
swap: 512
unprivileged: 1
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

I get the following error:
Code:
Jun 18 17:29:52 mullvad mullvad-daemon[442]: [talpid_core::tunnel_state_machine][DEBUG] Exiting tunnel state machine loop
Jun 18 17:29:52 mullvad mullvad-daemon[442]: [mullvad_daemon::management_interface][INFO] Management interface shut down
Jun 18 17:29:52 mullvad mullvad-daemon[442]: [mullvad_daemon][ERROR] Error: Unable to initialize daemon
Jun 18 17:29:52 mullvad mullvad-daemon[442]: Caused by: Unable to initialize split tunneling
Jun 18 17:29:52 mullvad mullvad-daemon[442]: Caused by: Unable to initialize net_cls cgroup instance
Jun 18 17:29:52 mullvad mullvad-daemon[442]: Caused by: EPERM: Operation not permitted
Jun 18 17:29:52 mullvad mullvad-daemon[442]: [mullvad_daemon][DEBUG] Process exiting with code 1
 
The Mullvad app creates some cgroup v1 controllers. This tricks Proxmox into thinking that it's in hybrid mode (see src/PVE/CGroup.pm in pve-common.git), so it sets the cgroupv2 path to `/sys/fs/cgroup/unified`, which does not exist because Proxmox VE 7.x is cgroup2-only. Not sure if this hybrid-mode logic can be excised now that Proxmox is cgroup2-only.
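A quick, hedged way to check what the host actually has mounted (standard tooling, not taken from the Proxmox code): `cgroup2fs` on /sys/fs/cgroup means a pure v2 hierarchy, and any stray v1 controllers show up as extra cgroup mounts.

Code:
stat -fc %T /sys/fs/cgroup   # prints cgroup2fs on a pure cgroupv2 host
grep cgroup /proc/mounts     # any additional v1 controller mounts (e.g. net_cls) show up here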
 
I'm not 100% sure, but looking over a Stack Overflow answer, I think that can be mounted like this:

Code:
$ sudo mkdir /sys/fs/net-cls-v1
$ sudo mount -t cgroup -o net_cls none /sys/fs/net-cls-v1

And then tell Mullvad where to find it with that `TALPID_NETCLS_MOUNT_DIR` environment variable:

Code:
$ export TALPID_NETCLS_MOUNT_DIR="/sys/fs/net-cls-v1"

For the container, I think this will work when added to `/etc/pve/lxc/[id].conf`:

Code:
lxc.mount.entry: /sys/fs/net-cls-v1 sys/fs/net-cls-v1 none bind,optional 0 0
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

And then set the TALPID_NETCLS_MOUNT_DIR in the container.

Untested, but that's my guess. That first mount entry might need `create=dir` at the end, since the target is a directory.

Edit: And I think Mullvad will need to be installed on both the Proxmox host and the container.
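One more assumption on my part (not something I've verified with Mullvad): since the daemon is started by systemd, an exported shell variable won't reach it, so the environment variable would probably have to go into a drop-in override for the unit, roughly like this:

Code:
# systemctl edit mullvad-daemon   -> creates an override drop-in containing:
[Service]
Environment=TALPID_NETCLS_MOUNT_DIR=/sys/fs/net-cls-v1
# then: systemctl daemon-reload && systemctl restart mullvad-daemon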
 
New Error:

Code:
[2022-07-26 11:29:22.555][mullvad_daemon][ERROR] Error: Unable to initialize daemon
Caused by: Unable to initialize split tunneling
Caused by: Unable to create cgroup for excluded processes
Caused by: Permission denied (os error 13)
So I guess there are still some v1 cgroups missing.

By the way, I couldn't mount net_cls under /sys/fs/, so I used /tmp instead; even root doesn't have permission to create directories in /sys/fs/.
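For reference, the same mount as suggested above, just with a /tmp mount point (same untested assumptions about the net_cls controller):

Code:
mkdir /tmp/net-cls-v1
mount -t cgroup -o net_cls none /tmp/net-cls-v1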

We're now stuck here: https://github.com/mullvad/mullvadv...055/talpid-core/src/split_tunnel/linux.rs#L27

Could this be a permission error?
It is a permission error!
After:
Code:
sudo chmod -R 777 /tmp/net-cls-v1/

Code:
ls -aln /tmp/net-cls-v1/
total 4
drwxrwxrwx  2 0 0    0 Jul 26 14:18 .
drwxrwxrwt 10 0 0 4096 Jul 26 13:13 ..
-rwxrwxrwx  1 0 0    0 Jul 26 14:18 cgroup.clone_children
-rwxrwxrwx  1 0 0    0 Jul 26 14:18 cgroup.procs
-rwxrwxrwx  1 0 0    0 Jul 26 14:18 cgroup.sane_behavior
-rwxrwxrwx  1 0 0    0 Jul 26 14:18 net_cls.classid
-rwxrwxrwx  1 0 0    0 Jul 26 14:18 notify_on_release
-rwxrwxrwx  1 0 0    0 Jul 26 14:18 release_agent
-rwxrwxrwx  1 0 0    0 Jul 26 14:18 tasks

Mullvad is running!

Code:
sudo /opt/Mullvad\ VPN/resources/mullvad-daemon -v
[2022-07-26 12:19:08.562][mullvad_daemon::version][INFO] Starting mullvad-daemon - 2022.3-beta2 2022-06-29
[2022-07-26 12:19:08.572][mullvad_daemon][INFO] Logging to /var/log/mullvad-vpn
[2022-07-26 12:19:08.600][mullvad_daemon::rpc_uniqueness_check][DEBUG] Failed to locate/connect to another daemon instance, assuming there isn't one
[2022-07-26 12:19:08.609][mullvad_daemon][INFO] Management interface listening on /var/run/mullvad-vpn
[2022-07-26 12:19:08.615][mullvad_api::address_cache][DEBUG] Loading API addresses from /var/cache/mullvad-vpn/api-ip-address.txt
[2022-07-26 12:19:08.618][mullvad_api::address_cache][DEBUG] Using API address: 45.83.222.100:443
[2022-07-26 12:19:08.619][mullvad_api::availability][DEBUG] Suspending API requests
[2022-07-26 12:19:08.631][mullvad_daemon::settings][INFO] Loading settings from /etc/mullvad-vpn/settings.json
[2022-07-26 12:19:08.638][mullvad_relay_selector][DEBUG] Reading relays from /var/cache/mullvad-vpn/relays.json
[2022-07-26 12:19:08.639][mullvad_relay_selector][DEBUG] Reading relays from /opt/Mullvad VPN/resources/relays.json
[2022-07-26 12:19:08.768][mullvad_relay_selector][INFO] Initialized with 857 cached relays from 2022-06-29 14:03:15.000
[2022-07-26 12:19:08.780][mullvad_api::availability][DEBUG] Pausing background API requests
[2022-07-26 12:19:08.781][mullvad_daemon::account_history][INFO] Opening account history file in /etc/mullvad-vpn/account-history.json
[2022-07-26 12:19:08.793][mullvad_daemon::target_state][DEBUG] No cached target state to load
[2022-07-26 12:19:08.811][talpid_core::firewall][INFO] Resetting firewall policy
[2022-07-26 12:19:08.813][talpid_core::firewall::imp][DEBUG] Removing table and chain from netfilter
[2022-07-26 12:19:08.817][mullvad_daemon::version_check][DEBUG] Loading version check cache from /var/cache/mullvad-vpn/version-info.json
[2022-07-26 12:19:08.818][mullvad_daemon::version_check][WARN] Error: Unable to load cached version info
Caused by: Failed to open app version cache file for reading
Caused by: No such file or directory (os error 2)
[2022-07-26 12:19:08.821][mullvad_api::availability][DEBUG] Unsuspending API requests
^C[2022-07-26 12:19:20.550][mullvad_daemon::shutdown::platform][DEBUG] Process received signal: [Int]
[2022-07-26 12:19:20.553][mullvad_daemon::device][DEBUG] Account manager has stopped
[2022-07-26 12:19:20.554][talpid_core::dns][INFO] Resetting DNS
[2022-07-26 12:19:20.557][talpid_core::tunnel_state_machine][DEBUG] Exiting tunnel state machine loop
[2022-07-26 12:19:20.560][talpid_core::tunnel_state_machine][INFO] Tunnel state machine shut down
[2022-07-26 12:19:20.561][mullvad_daemon::management_interface][INFO] Management interface shut down
[2022-07-26 12:19:20.562][mullvad_daemon][INFO] Mullvad daemon is quitting
[2022-07-26 12:19:21.064][mullvad_daemon][DEBUG] Process exiting with code 0

Any ideas how to fix the permissions of the mount?
 