LXC Container Upgrade to Bullseye - Slow Login and AppArmor Errors

nintendo424

New Member
Jul 21, 2021
Hi, I apologize if this is not the correct forum for this.

I recently upgraded my Proxmox server to 7.0, and the upgrade was very smooth. I have one Debian LXC container (which was running Buster, built from the Debian 10 template) that I upgraded to Bullseye a little pre-emptively, and that upgrade also went smoothly. However, logging into the LXC container (via both SSH and console) now takes 25 seconds due to a systemd-logind error. I added nesting to the container's configuration, which took care of the AppArmor errors on the host, but logins in the guest still take forever.

This is the journalctl -e output:
Code:
dbus-daemon[96]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1.service' requested by ':1.1' (uid=0 pid=162 comm="/bin/login -p --      " label="unconfined")
systemd[1]: Starting Load Kernel Module drm...
systemd[1]: modprobe@drm.service: Succeeded.
systemd[1]: Finished Load Kernel Module drm.
systemd[1]: systemd-logind.service: Attaching egress BPF program to cgroup /sys/fs/cgroup/system.slice/systemd-logind.service failed: Invalid argument
systemd[1]: Starting User Login Management...
systemd[7310]: systemd-logind.service: Failed to set up mount namespacing: /run/systemd/unit-root/proc: Operation not permitted
systemd[7310]: systemd-logind.service: Failed at step NAMESPACE spawning /lib/systemd/systemd-logind: Operation not permitted
systemd[1]: systemd-logind.service: Main process exited, code=exited, status=226/NAMESPACE
systemd[1]: systemd-logind.service: Failed with result 'exit-code'.
systemd[1]: Failed to start User Login Management.
systemd[1]: systemd-logind.service: Scheduled restart job, restart counter is at 1.
systemd[1]: Stopped User Login Management.
[... the same sequence repeats with the restart counter going from 2 to 5 ...]
systemd[1]: modprobe@drm.service: Start request repeated too quickly.
systemd[1]: modprobe@drm.service: Failed with result 'start-limit-hit'.
systemd[1]: Failed to start Load Kernel Module drm.
systemd[1]: systemd-logind.service: Start request repeated too quickly.
systemd[1]: systemd-logind.service: Failed with result 'exit-code'.
systemd[1]: Failed to start User Login Management.
dbus-daemon[96]: [system] Failed to activate service 'org.freedesktop.login1': timed out (service_start_timeout=25000ms)
login[162]: pam_systemd(login:session): Failed to create session: Failed to activate service 'org.freedesktop.login1': timed out (service_start_timeout=25000ms)
login[7331]: ROOT LOGIN  on '/dev/tty1'
 
hi,

can't reproduce the issue here with a nesting-enabled unprivileged container. could you post the container config from pct config CTID?

I have one Debian LXC container (which was running Buster, built from the Debian 10 template) and upgraded it to Bullseye a little pre-emptively, and that upgrade went smoothly
what do you mean by pre-emptively? how exactly did you perform the upgrade of the container?
can you show the output of apt update inside the container?
 
Sure, here is the config:

Code:
pct config 111
arch: amd64
cores: 1
features: nesting=1
hostname: Debian
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=ae:eb:50:62:7c:b1,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: VMs:111/vm-111-disk-0.raw,mountoptions=noatime,size=20G
startup: order=1,up=30
swap: 2048
unprivileged: 1

I meant pre-emptively because Bullseye hasn't officially been released yet.

I performed the upgrade by:
1) changing sources.list to the correct URLs for Bullseye, including the bullseye-security change,
2) running apt update and validating the new sources, and
3) running apt dist-upgrade to perform the upgrade (see the sketch below).
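
For reference, a rough sketch of what the sources.list changes and commands looked like (the mirror hostnames match the apt output below, but exact URLs and components will vary per setup):
Code:
# /etc/apt/sources.list (Buster -> Bullseye)
deb http://ftp.debian.org/debian bullseye main
deb http://ftp.debian.org/debian bullseye-updates main
# the security suite is now named "bullseye-security" instead of "buster/updates"
deb http://security.debian.org bullseye-security main

# then:
apt update
apt dist-upgrade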

output of apt update:
Code:
root@Debian:~# apt update
Hit:1 http://ftp.debian.org/debian bullseye InRelease
Hit:2 http://security.debian.org bullseye-security InRelease
Hit:3 http://ftp.debian.org/debian bullseye-updates InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
 
Another update: I built a Debian 11 LXC container from https://git.proxmox.com/?p=dab-pve-...56f65bd15e48dfb54fa0dedd;hb=refs/heads/master and I get the same issues. I believe something may be misconfigured on my Proxmox VE server after the upgrade... I'm just not sure what or where to check, since most of the upgrade is automated. pve6to7 --full reported no errors.

I believe this may come down to kernel modules not being available? It looks like it errors when trying to load the drm module via modprobe@drm.service.
Running modprobe drm in the LXC container gives:
Code:
root@Debian:~# modprobe drm
modprobe: FATAL: Module drm not found in directory /lib/modules/5.11.22-2-pve

I ran an ls and sure enough, the folder doesn't exist.
Code:
root@Debian:~# ls -la /lib/modules
ls: cannot access '/lib/modules': No such file or directory

However, on the host, the drm module is loaded and the folder does exist:
Code:
root@Proxmox:~# ls /lib/modules/
5.11.22-1-pve  5.11.22-2-pve  5.4.124-1-pve  5.4.34-1-pve
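
(Whether the module itself is loaded on the host can be double-checked with lsmod, for example:)
Code:
root@Proxmox:~# lsmod | grep drm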


I masked systemd-logind in the LXC container, which fixed the login speed. I just wonder if that's safe.
 
Try setting the option nesting=1.
This solved the problem for me.
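
If you prefer doing that from the host CLI instead of editing the config file directly, something like this should work (using container ID 111 from the config posted above):
Code:
pct set 111 --features nesting=1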

This is where the issue seems to come from:

Code:
systemd[579]: systemd-logind.service: Failed to set up mount namespacing: /run/systemd/unit-root/proc: Permission denied
systemd[579]: systemd-logind.service: Failed at step NAMESPACE spawning /lib/systemd/systemd-logind: Permission denied
 
I actually already had that set for that container, as I had read it in a separate comment thread. The host stopped logging the AppArmor denials, but the container still had issues. Disabling systemd-logind and the pam module for systemd took care of the issues in the container.
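
For reference, this is roughly what that amounted to inside the container (assuming the default Debian PAM stack, where pam_systemd is pulled in via /etc/pam.d/common-session):
Code:
systemctl mask systemd-logind.service
# and comment out this line in /etc/pam.d/common-session:
# session optional        pam_systemd.so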
 
Code:
systemctl mask systemd-logind
did the trick for me.
I don't know if the Proxmox AppArmor profiles should be updated to fix this.
 
Same thing here with a Proxmox Backup Server 1.1.12 (up-to-date Debian 10 container) that I upgraded in-place to 2.0 (with Debian 11) following the instructions. The systemd-logind.service and modprobe@drm.service units are automatically enabled and are failing. Masking both seems to work fine. I also removed all libdrm packages (about 100 MB of mostly mesa, radeon and vulkan packages, as the container has no GPU), but that did not remove modprobe@drm.service.
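
For anyone doing the same, both units can be masked in one go (a sketch, using the unit names from the journal output above):
Code:
systemctl mask systemd-logind.service modprobe@drm.service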
 
you don't need to mask any services inside the container - simply enabling the "nesting" feature for the container (if it is unprivileged) will allow systemd to make use of its namespacing features.
 
you don't need to mask any services inside the container - simply enabling the "nesting" feature for the container (if it is unprivileged) will allow systemd to make use of its namespacing features.
Should we run all Debian 11 (and others?) unprivileged containers with Nesting enabled? Or should we enable as few features as possible (for security)? Please advise.
EDIT: The manual states: "Best used with unprivileged containers with additional id mapping. Note that this will expose procfs and sysfs contents of the host to the guest."
 
yes, enabling nesting for unprivileged containers is the way to go if the systemd version in the container requires it for namespacing purposes. the latest pve-container and pve-manager versions (not yet rolled out) will allow setting the nesting feature for users with VM.Allocate privileges, and will default to enabling it on all new unprivileged containers created via the GUI.
 
yes, enabling nesting for unprivileged containers is the way to go if the systemd version in the container requires it for namespacing purposes. the latest pve-container and pve-manager versions (not yet rolled out) will allow setting the nesting feature for users with VM.Allocate privileges, and will default to enabling it on all new unprivileged containers created via the GUI.
Thanks, that worked here as well (after hours of investigation...).
 
yes, enabling nesting for unprivileged containers is the way to go if the systemd version in the container requires it for namespacing purposes.
How did you do it? I tried to enable the feature in the config of the container, but when I wanted to start the container, I got this error message:
Code:
format error nesting : property is not defined in schema and the schema does not allow additional properties
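
For comparison, the working config earlier in the thread sets nesting as a value of the features key (in /etc/pve/lxc/<CTID>.conf) rather than as a standalone nesting key, i.e. something like:
Code:
features: nesting=1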
 
I hit the long login delay with Debian 11 in an LXC container too.
The new container was created from the downloaded template as unprivileged.

systemctl mask systemd-logind
solved the issue without editing the container config.

Thank you very much.
 
