Apparmor in privileged container

It must be that this was simply never considered an error.
Do you really need to load AppArmor profiles inside your container?

Administering AppArmor profiles requires CAP_MAC_ADMIN, which is dropped by the default common LXC configuration.

You can add a config snippet to enable this:

Code:
# /usr/share/lxc/config/common.conf.d/02-stacked-apparmor.conf
# Clear this (the main common.conf fills it with the capabilities below plus mac_admin and mac_override)
lxc.cap.drop =

# Drop some harmful capabilities
lxc.cap.drop = sys_time sys_module sys_rawio

Since we do have apparmor stacking/nesting available now, this should be mostly safe (but then since you're using privileged containers, safety isn't really a thing anyway).

We could probably automate this based on the availability of stacking though. (/sys/kernel/security/apparmor/features/domain/stack must contain yes)
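That availability check could be scripted like this (a sketch; `has_apparmor_stacking` is a hypothetical helper, not part of LXC):

```shell
# Returns success (0) if the kernel reports AppArmor profile stacking.
# Defaults to the securityfs path mentioned above; a path argument is
# accepted mainly for testing.
has_apparmor_stacking() {
    local f="${1:-/sys/kernel/security/apparmor/features/domain/stack}"
    [ -r "$f" ] && grep -qx 'yes' "$f"
}
```

On a host with stacking support the file contains the word `yes`; on older kernels the file is absent and the helper simply returns failure.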
Thanks for the explanation.

Also, I had the same issue with AppArmor on privileged containers,
so I simply removed AppArmor from the container.

And yes, like you said, it's a privileged container, so I think AppArmor isn't really needed there anyway, since we don't use privileged containers for security reasons :-)

Basically we can do almost everything in unprivileged containers, and I have never had issues with AppArmor in unprivileged containers.

Though I mostly remove AppArmor anyway, because I've had bad experiences with it in the past: apps not working when they need access to directories outside their default paths.
Sure, you can add exceptions to AppArmor, but for what? In my eyes it brings only a minimal security benefit, not worth the work.

However, may I ask why AppArmor works without issues in unprivileged containers?
After all, even more LXC capabilities are dropped there.

I'm just wondering, because it makes no sense in my head why it always works in unprivileged containers while sometimes struggling in privileged ones.

Just an informational question, nothing I want fixed or anything.
I'm extremely happy with Proxmox as it is and very thankful for all the work you guys do.

Cheers
 
I'll throw my hat in the ring here with AppArmor issues trying to run a Docker container in an Ubuntu 20.04 LXC.

I'm trying to get Frigate running and not having much luck.

Oddly, I have my standard Ubuntu template with Docker installed and working, but to get Frigate working I need to update to the latest Docker version, and since then it won't load.

Code:
level=error msg="AppArmor enabled on system but the docker-default profile could not be loaded: running `/sbin/apparmor_parser apparmor_parser -Kr /var/lib/docker/tmp/docker-default1046792531` failed with output: apparmor_parser: Unable to replace \"docker-default\".  Permission denied; attempted to load a profile while confined?\n\nerror: exit status 243"
 
Seems so, but this thread is actually only related to privileged containers, where the AppArmor service even fails to start.

However, can't you simply uninstall AppArmor?
 
I don't know, possibly.

I thought it was best not to, even though I'm using a priv. container (I think I need a privileged container to make Docker + NFS mounting work, so I can mount the NFS share into the Docker container).

I thought it was best to keep AppArmor installed and there, but I may be wrong?
 
You basically need to run Docker?

Why not inside an unprivileged container?
You don't need anything special, only nesting, but nesting is activated by default anyway.

If your backend storage is ZFS, 90% of Docker containers will work and 10% won't.

Those 10% fail basically because those Docker images have too many subdirectories or too-long filenames inside.

For example:
- an OnlyOffice Docker image has maybe a 50% chance of not working, since it's almost a full-fledged Ubuntu/Debian-based container
- Speedtest-Tracker won't work at all, same reason as above
- Plex / Jellyfin / Paperless-ngx / Heimdall / Portainer / Traefik / Nginx Proxy Manager / and many more will work without any issues.

But you can make everything work with either LVM as backend storage, or simply by creating an ext4 dataset.
I once wrote a small how-to about ext4 datasets.
 
I can't remember exactly, but there was some reason I needed to go with a priv. LXC to get it to work.

My setup is:

PVE 7.1.7
- ZFS main PVE array (I found a command that fixed the Docker/ZFS issues while looking around)
- A bunch of LXCs (some running dockerised apps, some not)
- TrueNAS Core
- - - A bunch of drives over multiple datasets and pools exposing a variety of SMB and NFS shares
- - - TrueNAS doing its TrueNAS thing


To get the LXCs to mount the NFS share directly, or to get the ones running Docker that also needed the NFS shares mounted inside, I needed to go privileged, else it wouldn't work. I can't remember exactly, but I'm happy to take guidance and change things if I'm going about it all wrong.

I remember thinking it shouldn't be as hard as it is to do what I want to do.

I do remember I had no end of trouble with Docker filling up its allotted storage until I found that command to put into the LXC .conf, and it's been great ever since.
 

Bind mounts, yes, that's the reason you use privileged containers.
But bind mounts work with an unprivileged container too, just slightly more finicky, since you need to map users, because unprivileged containers start with UIDs/GIDs above 100000.
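That user mapping could look like this in the container config (a sketch; CTID 100 and host uid/gid 1000 are assumptions, adjust to your setup):

```
# /etc/pve/lxc/100.conf -- pass host uid/gid 1000 through to the container,
# shift everything else into the usual 100000+ range
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

Additionally, /etc/subuid and /etc/subgid on the host must allow root to map that id (a line like `root:1000:1` in each file).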

However that's another topic.

1. I strongly suggest you update to PVE 7.4; I think since PVE 7.4 almost all my Docker issues went away.
2. The easiest method: simply uninstall AppArmor.
The reason is simple: fixing AppArmor is a pain in the ass, plus you run Docker, and the Docker images probably run as root in your privileged LXC container anyway.
So AppArmor won't provide any additional security.

3. You could still run into issues with some Docker images that won't start.
Everything that has worked fine until now will keep working, so no need for headaches.
But as you test more and more Docker images, you will probably come across one or another that doesn't work with ZFS as backend storage.

But there is a really easy solution: create a ZFS volume with a fixed size like 50 or 100 GB and format it with ext4.
Mount it somewhere, like /mnt/pve/docker.
Then add it as a directory storage for containers in the datacenter view, and don't forget to add the mount to fstab to make it persistent...
And simply move the LXC container storage onto that new ext4 storage.

The beauty of this approach is that the ext4-formatted volume still behaves like a normal dynamic dataset, meaning it consumes only the space it's actually filled with.
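The steps above could be sketched like this, run on the PVE host (the pool name `rpool`, the storage name `docker-ext4`, and the 100 GB size are assumptions):

```
# Create a fixed-size, sparse ZFS volume and format it with ext4
zfs create -s -V 100G rpool/docker-ext4
mkfs.ext4 /dev/zvol/rpool/docker-ext4

# Mount it and make the mount persistent
mkdir -p /mnt/pve/docker
echo '/dev/zvol/rpool/docker-ext4 /mnt/pve/docker ext4 defaults 0 2' >> /etc/fstab
mount /mnt/pve/docker

# Register it as a directory storage for container disks
pvesm add dir docker-ext4 --path /mnt/pve/docker --content rootdir
```

The `-s` flag makes the zvol sparse, which is what gives the "consumes only what it's filled with" behaviour; without it, the full 100 GB is reserved up front.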

I also wrote a how-to in some Docker thread here in the forums, but hell, I don't want to search for it, forgive me :-)
Everything is there step by step with commands.

However, I wouldn't bother with that as long as you haven't come across Docker images that don't work...

Just remove AppArmor and you're fine, I hope!

Cheers
 
I spent about 10 frustrating hours on this over 2 days. I have found a solution:

Go into the shell on your host.
In the individual LXC conf file, e.g. /etc/pve/lxc/100.conf,

add the following lines:

Code:
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

Then reboot your LXC, or just stop and start it after editing.
You don't even need to remove or mess with AppArmor; this basically disables it for the container.
Yes, there are security implications, but you are using a privileged LXC anyway.

Docker and all containers are working 100% now, and adding new ones is not affected by any issues.
 
You have saved me hours of troubleshooting; if I could, I'd buy you a coffee. Thanks!

And just to be clear for anyone in the future who's running multiple containers like me:

`/etc/pve/lxc/100.conf` is the config file of the container you want to fix; in my case it was `105.conf`.
 
@AshenVerdict I have exactly the same problem, but I can't find the /etc/pve/lxc/100.conf file. I'm using Ubuntu and I installed Docker via the command line:

Code:
apt-get install -y docker-ce docker-ce-cli containerd.io
curl -L "https://github.com/docker/compose/releases/download/2.24.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Apart from that, when I type "docker-compose up -d" the following errors pop up. I have no idea how to solve this; I am using the latest version of Ubuntu, Docker, and AppArmor.

[screenshot of the docker-compose errors attached]

I reinstalled AppArmor because that's what I read on other forums, but it still doesn't do anything...
I have never worked in a VPS environment; I am completely unfamiliar with it. I followed the guide from this link on GitHub:
https://github.com/mawburn/portaler-core/blob/main/docs/selfhosting.md

Have a nice day :).
 
Hello there,
I am sorry, I need to be enlightened.
Actually what I have is this:

Code:
#PBS:/usr/share/lxc/config/common.conf.d# ls
00-lxcfs.conf 01-pve.conf README

Should I create 02-stacked-apparmor.conf? If so, do I just paste: lxc.cap.drop = sys_time sys_module sys_rawio
Right?
Thanks
I actually just changed the line in the common.conf file using this command on the Proxmox host:

Bash:
sudo sed -i 's|mac_admin\ mac_override\ sys_time\ sys_module\ sys_rawio|sys_time\ sys_module\ sys_rawio|g' /usr/share/lxc/config/common.conf

Basically, as he said, you need to remove the mac_admin and mac_override params from the lxc.cap.drop line.

After that I just restarted my container and it worked like a charm.
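If you want to see what that sed does before touching the live file, you can run the same expression against a sample line (a sketch; the mktemp file stands in for common.conf):

```shell
# Demonstrate the substitution on a throwaway copy instead of the live file
sample=$(mktemp)
echo 'lxc.cap.drop = mac_admin mac_override sys_time sys_module sys_rawio' > "$sample"
sed -i 's|mac_admin\ mac_override\ sys_time\ sys_module\ sys_rawio|sys_time\ sys_module\ sys_rawio|g' "$sample"
cat "$sample"   # -> lxc.cap.drop = sys_time sys_module sys_rawio
rm -f "$sample"
```

Note the substitution only matches if the capabilities appear in exactly that order, which is how the stock common.conf ships; if the file has been edited before, check the line manually.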
 
