run nftables in debian 12 lxc-container

lethargos

Well-Known Member
Jun 10, 2017
Hello,

I'm trying to run nftables to do some routing inside an lxc-container, but I keep getting this error:
Code:
audit: type=1400 audit(1711923842.917:224): apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxc-2000_</var/lib/lxc>" name="/run/systemd/unit-root/" pid=429132 comm="(nft)" srcname="/" flags="rw, rbind"
nftables seems to be installed by default in the Debian 12 LXC image, so I'm not sure why this doesn't work out of the box. In any case, what would be the most sensible way to solve this?

Running nftables commands directly does work: I was able to add rules, and I've tested that they behave as expected inside the container.
 
Is there a specific command that is triggering this error?
 
I should have mentioned this from the beginning; I'm not sure how it slipped my mind. When I start the nftables service ("systemctl start nftables"), I get the above-mentioned error on the host. Inside the container I get:
Code:
nftables.service: Failed to set up mount namespacing: Permission denied
nftables.service: Failed at step NAMESPACE spawning /usr/sbin/nft: Permission denied
I was able to circumvent this by enabling nesting on the container, but I think that grants too many permissions, especially since nftables otherwise works; it's only starting the service that is problematic. The service could theoretically be replaced with a dumb shell script that loads the nftables rules, and I wouldn't have to change anything else.
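For what it's worth, the NAMESPACE error points at systemd's per-service sandboxing rather than at nft itself: Debian's stock nftables.service ships (as far as I know) with hardening options such as ProtectSystem=full, which make systemd set up a mount namespace for the service, and that namespace mount is exactly what AppArmor denies. Assuming that's the trigger, a drop-in override that disables those options might be a narrower workaround than nesting or a replacement script. A sketch, untested in this exact setup (directive names taken from what I believe the stock unit contains; verify with "systemctl cat nftables.service"):

Code:
# Inside the container: run `systemctl edit nftables`, which creates
# /etc/systemd/system/nftables.service.d/override.conf
[Service]
# Disable the sandboxing that forces a mount namespace
ProtectSystem=no
ProtectHome=no

Then "systemctl daemon-reload && systemctl restart nftables". This keeps the stock unit otherwise intact and avoids granting the container nesting.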

This is the container config:
arch: amd64
cores: 2
features: nesting=1
hostname: nebula-vpn0
memory: 1024
nameserver: 9.9.9.9 149.112.112.112
net0: name=eth0,bridge=vmbr1,gw=10.99.0.1,hwaddr=BC:24:11:B3:14:A8,ip=10.99.0.10/24,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-2000-disk-0,size=20G
swap: 512
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
I'm using the lxc mount and cgroup directives so that my VPN (Nebula) can create its tunnel interface via /dev/net/tun.

Any ideas how I could find a more elegant or sensible solution?
 
I have multiple deb12 LXCs currently running nftables with no issues, both privileged and unprivileged. The nesting feature (so far, for me) is just what helps reduce the lag when SSHing into the LXC.
 
Because nesting is not needed to make nftables work in a Debian 12 LXC, at least not in my LXCs. I only use nesting (for the most part) to reduce the lag in ssh/cli/terminal sessions.

[Attached screenshots: ss1.png, ss2.png, ss3.png]
 
I see. That's interesting, yes. I would then like to understand where the problem might be.

Might it be related to these entries in your container config?

Code:
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir

Could you try without them?
 
Yes, I've actually already tried that but forgot to mention it. Removing these lines and disabling nesting results in the same AppArmor error in the host syslog and the same permission-denied/NAMESPACE error inside the container.
 
Your error suggests AppArmor is the culprit. Where exactly could be anyone's guess. Are you using AppArmor profiles somewhere that might be associated with this particular LXC? Are you able to get nftables started by just creating a vanilla LXC without any custom config?
 
Let me then offer a little bit of context. This is a newly installed Proxmox instance. I started with 8.1.4 (if I remember correctly) then upgraded to 8.1.10.
I did play a little with AppArmor profiles in order to get that access to the network interface for Nebula (as mentioned in post #3), but what I did was pretty straightforward (and easily reproducible). I copied "/etc/apparmor.d/lxc/lxc-default-cgns" to "/etc/apparmor.d/lxc/2000-lxc", added a mount directive (mount options=(rw, rbind) /->/run/systemd/unit-root/,) to that file, then referenced it in 2000.conf (lxc.apparmor.profile: 2000-lxc). It turned out to be a silly idea; it didn't work. I've since deleted the reference and the newly created file ("/etc/apparmor.d/lxc/2000-lxc") and ended up with the nesting workaround mentioned above. Actually, nesting on its own seems to work even if I delete the last two lines:
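For completeness, the deleted profile looked roughly like this (reconstructed from memory: it was a copy of the stock lxc-default-cgns with one extra mount rule, so the exact base contents may differ on your system):

Code:
# /etc/apparmor.d/lxc/2000-lxc (since deleted)
#include <tunables/global>

profile lxc-2000-lxc flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # stock cgns rules inherited from lxc-default-cgns
  deny mount fstype=devpts,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
  mount fstype=cgroup2 -> /sys/fs/cgroup/**,

  # the rule I added, matching the denied operation in the audit log
  mount options=(rw, rbind) / -> /run/systemd/unit-root/,
}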

Code:
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir

In any case, the apparmor profiles are untouched, as far as I can tell (and I clearly haven't directly changed any files that already existed).

Just to make sure that I'm not missing something, I've created a completely new container with Debian 12, it isn't unprivileged, but it also doesn't use nesting. I get the exact same error (this time the ID is 200):

Code:
[Fri Apr  5 15:26:25 2024] audit: type=1400 audit(1712320015.105:478): apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxc-100_</var/lib/lxc>" name="/run/systemd/unit-root/" pid=2039918 comm="(nft)" srcname="/" flags="rw, rbind"

I've also just tested it on a completely different Proxmox instance, which also runs 8.1.10. Again, the container is privileged ('unprivileged' unchecked) and there's no nesting; on that host I get:
Code:
[Fri Apr  5 15:37:10 2024] audit: type=1400 audit(1712320645.061:35): apparmor="DENIED" operation="mount" class="mount" info="failed perms check" error=-13 profile="lxc-106_</var/lib/lxc>" name="/run/systemd/unit-root/" pid=2181174 comm="(d-logind)" srcname="/" flags="rw, rbind"

On yet another instance (this time version 8.1.4) I've tested it with the same settings, same result.

So "anyone's guess" doesn't quite hold for me, because this doesn't seem to be specific to anything I've done. Maybe something changed in the latest Proxmox version(s); I'm not sure.

[Later edit:]
I've tested another container image on the Proxmox 8.1.4 instance, i.e. Ubuntu 22.04-1 (available in the template list), and this works without nesting. It also works as an unprivileged container without nesting. So the image seems to be the most important factor here.
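One way to pin down the image difference would be to compare the shipped units with "systemctl cat nftables.service" in each container. On Debian 12 I believe the unit contains hardening directives along these lines (quoted from memory, please verify locally), and it is precisely this sandboxing that requires the mount namespace AppArmor denies:

Code:
[Service]
Type=oneshot
ProtectSystem=full
ProtectHome=true
ExecStart=/usr/sbin/nft -f /etc/nftables.conf

If the Ubuntu 22.04 image ships the unit without these directives, or with the service disabled by default, that would explain why it starts there without nesting.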
 
