Thanks for the help.

Great, I was wondering why it didn't work for you, since I have a couple of containers with this same configuration. Glad your problem is solved!
Sorry to revive an old (but very useful) thread. I was using the technique described in it to enable VPN usage in an LXC container. However, I just updated to Proxmox 7 yesterday, after which it no longer seems to work. I read elsewhere that enabling nesting (Container, Options, Features) might help, and did so, but nothing changed. Has anyone else encountered the same thing, and if so, were you able to solve it?
For what it's worth, I'm running the Mullvad VPN client in a container running Ubuntu 20.04. After the upgrade, I noticed that the mullvad-daemon service was no longer running. I looked through the various logs and ran strace on the daemon; I think the following is indicative of the cause, but I'm not quite sure how to fix it:
Code:
statx(AT_FDCWD, "/sys/fs/cgroup/net_cls", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_BASIC_STATS|0x1000, stx_attributes=0, stx_mode=S_IFDIR|0755, stx_size=0, ...}) = 0
mount("net_cls", "/sys/fs/cgroup/net_cls", "cgroup", 0, "net_cls") = -1 EPERM (Operation not permitted)
futex(0x7fcc00000ce8, FUTEX_WAKE_PRIVATE, 1) = 1
close(9) = 0
write(1, "[2021-07-08 15:47:57.063][mullva"..., 197[2021-07-08 15:47:57.063][mullvad_daemon][ERROR] Error: Unable to initialize daemon
Caused by: Unable to initialize split tunneling
Caused by: Unable to initialize net_cls cgroup instance
) = 197
write(1, "Caused by: EPERM: Operation not "..., 42Caused by: EPERM: Operation not permitted
) = 42
Any suggestions would be most appreciated.
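(For anyone debugging the same symptom: Proxmox 7 switched the default to the unified cgroup v2 hierarchy, and on a pure cgroup2 host an unprivileged container can no longer mount the legacy "cgroup" filesystem type, which matches the EPERM in the trace above. A quick diagnostic sketch, run on the PVE host:)

```shell
# Print the filesystem type mounted at /sys/fs/cgroup.
# "cgroup2fs" means the unified v2 hierarchy (the Proxmox 7 default);
# "tmpfs" means the legacy/hybrid v1 layout used by Proxmox 6.
fstype=$(stat -fc %T /sys/fs/cgroup/)
echo "cgroup mount type: $fstype"
```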
can you show us the container config?

Happy to. Here it is:
arch: amd64
cores: 2
hostname: test3
memory: 8192
mp0: /data/test3,mp=/test3
mp1: /data/temp/test3,mp=/temp
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=CE:A7:99:00:A8:0F,ip=dhcp,ip6=dhcp,tag=20,type=veth
ostype: ubuntu
rootfs: rpool2:subvol-102-disk-0,size=64G
swap: 8192
unprivileged: 1
lxc.mount.entry: /devcontainer/net dev/net none bind,create=dir
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.idmap: u 0 100000 2000
lxc.idmap: g 0 100000 2000
lxc.idmap: u 2000 2000 1
lxc.idmap: g 2000 2000 1
lxc.idmap: u 2001 102001 63535
lxc.idmap: g 2001 102001 63535
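(Side note, in case the idmap lines confuse anyone: they split the standard 65536-id range into three pieces so that uid/gid 2000 is passed straight through to the host, presumably so the bind-mounted /data paths keep their ownership. A sketch of what the uid mapping above does; the gid lines are analogous:)

```
lxc.idmap: u 0    100000 2000   # container uid 0..1999     -> host uid 100000..101999
lxc.idmap: u 2000 2000   1      # container uid 2000        -> host uid 2000 (pass-through)
lxc.idmap: u 2001 102001 63535  # container uid 2001..65535 -> host uid 102001..165535
```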
change this part:
lxc.cgroup.devices.allow: c 10:200 rwm
to:
lxc.cgroup2.devices.allow: c 10:200 rwm
and restart the container
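(To spell the suggestion out: after the edit, the extra lines in the container config would look something like this; this is just a sketch based on the config posted above, with only the devices.allow key changed from cgroup to cgroup2:)

```
lxc.mount.entry: /devcontainer/net dev/net none bind,create=dir
lxc.cgroup2.devices.allow: c 10:200 rwm
```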
Thanks very much for the suggestion Oguz - most appreciated. I made the change and restarted the container, but unfortunately it seems to have made no difference at all. I continue to see the exact same error.
did you change the permissions of /dev/net/tun?
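(In case it helps others following along: a sketch of what checking those permissions inside the container might look like. The 666 mode is a common suggestion in LXC VPN guides, not something prescribed in this thread:)

```shell
# Inside the container: check that the TUN device node exists and what its mode is.
if [ -c /dev/net/tun ]; then
    ls -l /dev/net/tun
else
    echo "/dev/net/tun is missing - check the lxc.mount.entry bind mount"
fi
# A commonly suggested (if permissive) fix when the daemon cannot open it:
# chmod 666 /dev/net/tun
```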
No, after I continued to get the error without finding a solution, I tried a different approach altogether using WireGuard and its documentation for implementation in a container. It was a bit of a pain, but I eventually got it to work.
using cgroup2 solved my issues (also after migrating to a PVE 7.1 node)