[SOLVED] Problems using a mount point and lxc.idmap

landei

New Member
Jan 7, 2020
Big Edit:

Hi,

I found this tutorial on how to mount directories to a container: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers

Consider the following case:
One container has two users named "a" and "b".
The same users exist on the host system.

They use the following ids:
LXC:
a: uid=1000; gid=1000
b: uid=109; gid=115
Host:
a: uid=1010; gid=1010
b: uid=1011; gid=1011

My config file looks like this:
Code:
arch: amd64
cores: 2
hostname: dummy
memory: 1024
mp0: /megaraid/sb/dummy,mp=/mnt/dummy
net0: name=eth0,bridge=vmbr0,firewall=1,gw=141.241.31.33,hwaddr=0A:F1:5E:B1:70:$
ostype: ubuntu
rootfs: local-zfs:subvol-101-disk-0,size=30G
swap: 1024
unprivileged: 1
lxc.idmap: u 0 100000 109
lxc.idmap: g 0 100000 115
lxc.idmap: u 109 1011 1
lxc.idmap: g 115 1011 1
lxc.idmap: u 110 100110 65426
lxc.idmap: g 116 100116 65420

/etc/subuid:
Code:
root:1011:1
root:1010:1
root:100000:65536
a:362144:65536
b:493216:65536

/etc/subgid:
Code:
root:1011:1
root:1010:1
root:100000:65536
a:362144:65536
b:493216:65536
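For context: every host-side range referenced by lxc.idmap must be delegated to root in /etc/subuid and /etc/subgid, which is what the entries above are for. This can be checked with a small sketch (assuming Python; `parse_sub` and `delegated` are hypothetical helper names, the `user:start:count` format is the one documented in subuid(5)):

```python
# Check that the host ranges used by an idmap are delegated to root in
# /etc/subuid-style entries. A sketch; entry format is "user:start:count".

def parse_sub(lines, user="root"):
    """Collect the half-open [start, end) ranges delegated to `user`."""
    ranges = []
    for line in lines:
        name, start, count = line.strip().split(":")
        if name == user:
            ranges.append((int(start), int(start) + int(count)))
    return ranges

def delegated(ranges, host_start, count):
    """True if [host_start, host_start+count) lies inside one delegated range."""
    return any(s <= host_start and host_start + count <= e for s, e in ranges)

# The root entries from the files quoted above:
subuid = ["root:1011:1", "root:1010:1", "root:100000:65536"]
ranges = parse_sub(subuid)

print(delegated(ranges, 100000, 109))  # True: the shifted base range
print(delegated(ranges, 1011, 1))      # True: user b's passthrough ID
print(delegated(ranges, 1010, 1))      # True: ready for user a as well
```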

This mapping reflects only user b and does not work correctly.
The container starts, but afterwards inside the LXC all folders belonging to user b are owned by nobody. After removing the lxc.idmap lines this is fixed, but then the mapping between host and LXC is missing again.

What is the problem here, and what would a correct mapping for users a and b look like?
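One way to see what is going on is to walk the ranges mechanically. A sketch, assuming Python (`check_idmap` and `map_id` are hypothetical helper names; the tuples mirror the `u/g <container-start> <host-start> <count>` lxc.idmap syntax):

```python
# Sanity-check an lxc.idmap and translate container IDs to host IDs.

def check_idmap(entries, total=65536):
    """Verify the ranges for each type cover [0, total) contiguously."""
    for kind in ("u", "g"):
        ranges = sorted((c, c + n) for t, c, h, n in entries if t == kind)
        pos = 0
        for start, end in ranges:
            if start != pos:
                return False  # gap or overlap before `start`
            pos = end
        if pos != total:
            return False  # ranges stop short of `total`
    return True

def map_id(entries, kind, container_id):
    """Host ID a container ID appears as (None = unmapped -> nobody)."""
    for t, c, h, n in entries:
        if t == kind and c <= container_id < c + n:
            return h + (container_id - c)
    return None

# The mapping from the config file quoted above:
idmap = [
    ("u", 0, 100000, 109),     ("g", 0, 100000, 115),
    ("u", 109, 1011, 1),       ("g", 115, 1011, 1),
    ("u", 110, 100110, 65426), ("g", 116, 100116, 65420),
]

print(check_idmap(idmap))        # True: the ranges themselves are contiguous
print(map_id(idmap, "u", 109))   # 1011: user b's uid is passed through
print(map_id(idmap, "u", 1000))  # 101000: user a lands in the shifted range,
                                 # not at the intended host uid 1010
```

Note also the reverse direction: a host ID that no entry maps back into the container (for example files previously stored under a different host ID) shows up inside the container as nobody.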

Thank you very much!
 
Hi,
I think by creating the mapping, the user will have the new ID from the container's perspective, so the files that previously belonged to that user won't have a matching ID anymore. What you can try is changing the owner and group to the host IDs. You can't (easily) do that from within the container, but on the host:
Code:
pct mount <ID>
cd /var/lib/lxc/<ID>/rootfs
chown <UID on host>:<GID on host> -R path/to/dir other/file
cd
pct unmount <ID>
and then starting with the mapping defined, the files should show up as belonging to the correct user.
 
Thank you very much!

We tried it and something went wrong and broke the container.
We could not start it anymore and got "unable to parse value of 'net0' - format error".
This issue has been solved by restoring a backup.

Now I did something different.
In the nice Proxmox documentation the mapping example is for a 1:1 mapping.

So I changed the UID and GID in the LXC so that the same user has the same UID and GID on both systems.
My new LXC mapping is now:

Code:
lxc.idmap: u 0 100000 1011
lxc.idmap: g 0 100000 1011
lxc.idmap: u 1011 1011 1
lxc.idmap: g 1011 1011 1
lxc.idmap: u 1012 101012 64524
lxc.idmap: g 1012 101012 64524
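This 1:1 mapping follows a mechanical pattern: a shifted range up to the passthrough ID, a 1:1 entry for the ID itself, then the shifted remainder. A sketch that generates such entries, assuming Python (`make_idmap` is a hypothetical helper; 100000 is Proxmox's default shift and 65536 the default range size):

```python
# Generate 1:1-passthrough lxc.idmap entries for a list of IDs that should
# be identical on host and container; everything else is shifted by `base`.

def make_idmap(passthrough, base=100000, total=65536):
    lines, pos = [], 0
    for pid in sorted(passthrough):
        if pid > pos:  # shifted range up to the passthrough ID
            lines += [f"lxc.idmap: {t} {pos} {base + pos} {pid - pos}" for t in "ug"]
        lines += [f"lxc.idmap: {t} {pid} {pid} 1" for t in "ug"]  # 1:1 entry
        pos = pid + 1
    if pos < total:    # remainder of the shifted range
        lines += [f"lxc.idmap: {t} {pos} {base + pos} {total - pos}" for t in "ug"]
    return lines

for line in make_idmap([1011]):
    print(line)  # reproduces the six entries shown above
```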

Now I ran into a new, strange problem.

Without the mapping present, the container starts and works fine.
With the mapping present, the container starts and the login console is shown, asking for username and password.
However, nobody can log in as long as the mapping is present.

Why could this be?

This is the log of "lxc-start -F -n 101" (shortened because of the thread character limit):

Code:
lxc-start: 101: cgroups/cgfsng.c: mkdir_eexist_on_last: 1287 File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/101"
lxc-start: 101: cgroups/cgfsng.c: container_create_path_for_hierarchy: 1336 Failed to create cgroup "/sys/fs/cgroup/unified//lxc/101"
lxc-start: 101: cgroups/cgfsng.c: cgfsng_payload_create: 1496 Failed to create cgroup "/sys/fs/cgroup/unified//lxc/101"
lxc-start: 101: cgroups/cgfsng.c: mkdir_eexist_on_last: 1287 File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/101-1"
lxc-start: 101: cgroups/cgfsng.c: container_create_path_for_hierarchy: 1336 Failed to create cgroup "/sys/fs/cgroup/unified//lxc/101-1"
lxc-start: 101: cgroups/cgfsng.c: cgfsng_payload_create: 1496 Failed to create cgroup "/sys/fs/cgroup/unified//lxc/101-1"
lxc-start: 101: cgroups/cgfsng.c: mkdir_eexist_on_last: 1287 File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/101-2"
lxc-start: 101: cgroups/cgfsng.c: container_create_path_for_hierarchy: 1336 Failed to create cgroup "/sys/fs/cgroup/unified//lxc/101-2"
lxc-start: 101: cgroups/cgfsng.c: cgfsng_payload_create: 1496 Failed to create cgroup "/sys/fs/cgroup/unified//lxc/101-2"
lxc-start: 101: cgroups/cgfsng.c: mkdir_eexist_on_last: 1287 File exists - Failed to create directory "/sys/fs/cgroup/unified//lxc/101-3"
lxc-start: 101: cgroups/cgfsng.c: container_create_path_for_hierarchy: 1336 Failed to create cgroup "/sys/fs/cgroup/unified//lxc/101-3"
lxc-start: 101: cgroups/cgfsng.c: cgfsng_payload_create: 1496 Failed to create cgroup "/sys/fs/cgroup/unified//lxc/101-3"
lxc-start: 101: conf.c: lxc_setup_boot_id: 3527 Permission denied - Failed to mount /dev/.lxc-boot-id to /proc/sys/kernel/random/boot_id
systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Detected virtualization lxc.
Detected architecture x86-64.

Welcome to Ubuntu 18.04.5 LTS!

Set hostname to <ucf>.
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
File /lib/systemd/system/systemd-journald.service:36 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  OK  ] Reached target Remote File Systems.
system.slice: Failed to reset devices.list: Operation not permitted
[  OK  ] Created slice System Slice.
system-postfix.slice: Failed to reset devices.list: Operation not permitted
[  OK  ] Created slice system-postfix.slice.
[  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
[  OK  ] Listening on udev Kernel Socket.
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Listening on Syslog Socket.
[  OK  ] Reached target Swap.
system-container\x2dgetty.slice: Failed to reset devices.list: Operation not permitted
[  OK  ] Created slice system-container\x2dgetty.slice.
[  OK  ] Reached target User and Group Name Lookups.
system-postgresql.slice: Failed to reset devices.list: Operation not permitted
[  OK  ] Created slice system-postgresql.slice.
[  OK  ] Listening on Journal Socket.
keyboard-setup.service: Failed to reset devices.list: Operation not permitted
         Starting Set the console keyboard layout...
systemd-modules-load.service: Failed to reset devices.list: Operation not permitted
         Starting Load Kernel Modules...
systemd-journald.service: Failed to reset devices.list: Operation not permitted
         Starting Journal Service...
systemd-tmpfiles-setup-dev.service: Failed to reset devices.list: Operation not permitted
         Starting Create Static Device Nodes in /dev...
ufw.service: Failed to reset devices.list: Operation not permitted
         Starting Uncomplicated firewall...
[  OK  ] Started Forward Password Requests to Wall Directory Watch.
user.slice: Failed to reset devices.list: Operation not permitted
[  OK  ] Created slice User and Session Slice.
[  OK  ] Reached target Slices.
[  OK  ] Listening on udev Control Socket.
systemd-udev-trigger.service: Failed to reset devices.list: Operation not permitted
         Starting udev Coldplug all Devices...
sys-fs-fuse-connections.mount: Failed to reset devices.list: Operation not permitted
[  OK  ] Started Create Static Device Nodes in /dev.
systemd-udevd.service: Failed to reset devices.list: Operation not permitted
         Starting udev Kernel Device Manager...
[  OK  ] Started Load Kernel Modules.
sys-kernel-config.mount: Failed to reset devices.list: Operation not permitted
         Mounting Kernel Configuration File System...
systemd-sysctl.service: Failed to reset devices.list: Operation not permitted
         Starting Apply Kernel Variables...
[  OK  ] Started udev Kernel Device Manager.
[  OK  ] Started Journal Service.
         Starting Flush Journal to Persistent Storage...
[FAILED] Failed to mount Kernel Configuration File System.
See 'systemctl status sys-kernel-config.mount' for details.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
         Starting Network Name Resolution...
[  OK  ] Reached target System Time Synchronized.
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Tell Plymouth To Write Out Runtime Data.
[  OK  ] Started Set console font and keymap.
[  OK  ] Started Update UTMP about System Boot/Shutdown.
[  OK  ] Started Network Name Resolution.
[  OK  ] Reached target Host and Network Name Lookups.
[FAILED] Failed to start Uncomplicated firewall.
See 'systemctl status ufw.service' for details.
[  OK  ] Reached target Network.
[  OK  ] Reached target Network is Online.
[FAILED] Failed to start AppArmor initialization.
See 'systemctl status apparmor.service' for details.
         Starting Load AppArmor profiles managed internally by snapd...
[FAILED] Failed to start Snap Daemon.
See 'systemctl status snapd.service' for details.
         Starting Wait until snapd is fully seeded...
[FAILED] Failed to start Postfix Mail Transport Agent (instance -).
See 'systemctl status postfix@-.service' for details.
[  OK  ] Stopped Snap Daemon.
         Starting Snap Daemon...
         Starting Postfix Mail Transport Agent...
[  OK  ] Started Postfix Mail Transport Agent.
[  OK  ] Started Dispatcher daemon for systemd-networkd.
[FAILED] Failed to start Snap Daemon.
See 'systemctl status snapd.service' for details.
[FAILED] Failed to start PostgreSQL Cluster 13-main.
[  OK  ] Stopped Snap Daemon.
         Starting Snap Daemon...
[FAILED] Failed to start Snap Daemon.
See 'systemctl status snapd.service' for details.
[  OK  ] Stopped Snap Daemon.
         Starting Snap Daemon...
[FAILED] Failed to start Snap Daemon.
See 'systemctl status snapd.service' for details.
[  OK  ] Stopped Snap Daemon.
         Starting Snap Daemon...
[FAILED] Failed to start Snap Daemon.
See 'systemctl status snapd.service' for details.
[  OK  ] Stopped Snap Daemon.
[FAILED] Failed to start Snap Daemon.
See 'systemctl status snapd.service' for details.
         Starting Failure handling of the snapd snap...
[  OK  ] Started Failure handling of the snapd snap.

Ubuntu 18.04.5 LTS ucf console
 
Thank you very much!

We tried it and something went wrong and broke the container.
We could not start it anymore and got "unable to parse value of 'net0' - format error".
This issue has been solved by restoring a backup.
Sounds like the configuration file was corrupted. Did you edit/open it while the container was running? If this ever happens again, you can first check the parameters for net0 (maybe it can be fixed without needing a restore).

Now I did something different.
In the nice Proxmox documentation the mapping example is for a 1:1 mapping.

So I changed the UID and GID in the LXC so that the same user has the same UID and GID on both systems.
My new LXC mapping is now:

Code:
lxc.idmap: u 0 100000 1011
lxc.idmap: g 0 100000 1011
lxc.idmap: u 1011 1011 1
lxc.idmap: g 1011 1011 1
lxc.idmap: u 1012 101012 64524
lxc.idmap: g 1012 101012 64524

Now I ran into a new, strange problem.

Without the mapping present, the container starts and works fine.
With the mapping present, the container starts and the login console is shown, asking for username and password.
However, nobody can log in as long as the mapping is present.

Why could this be?
Can you also not log in as root?

If so, please start the container with lxc-start -F -n 101 once with and once without the mapping, and compare the two logs. You can also use the attach files buttons to upload such logs.

If you can log in as root, did you create the user in the container without the mapping? Then the home folder and files still have the old ID.
 
Hi,

thank you very much for your reply!

You are right, I think the container was not fully stopped. My fingers were too quick ;-)

Regarding the real problem, even as root there is no login possible.
I attached the requested files.

I can see differences; however, my knowledge is not sufficient to identify the real problem.
What I noticed is that with the mapping, an SSH connection does not work either.

What I did in the LXC system to change the UID and GID was the following:
Code:
usermod -u <NEWUID> <LOGIN>                            # change the user's UID
groupmod -g <NEWGID> <GROUP>                           # change the group's GID
find / -user <OLDUID> -exec chown -h <NEWUID> {} \;    # re-own files that still have the old UID
find / -group <OLDGID> -exec chgrp -h <NEWGID> {} \;   # re-own files that still have the old GID
usermod -g <NEWGID> <LOGIN>                            # make the new group the user's primary group

This has been done before adding the lxc mapping.

Do you have any idea what the problem could be?

Thank you very much.
 

Attachments

Could you also run the container with the mapping with lxc-start -F -n 101 -l DEBUG -o /tmp/lxc-101.log and share the file /tmp/lxc-101.log? This hopefully gives more information about the problem.

You cannot change the ID to the correct one from within the container. This is because the default mapping is present, so the ID from the host's perspective will be 100000 + UID instead of the intended UID. You can check this after stopping the container with
Code:
pct mount 101
cd /var/lib/lxc/101/rootfs
ls -ln path/to/a/file/you/chowned/from/within/the/container
On the host, you can change the owner to the intended UID. But be aware that the owner from the container's perspective will then be nobody without the mapping. So you might want to hold off on that until we know why you cannot log in with the mapping. I don't see yet why root login should be affected by re-mapping some other ID...

To be able to start the container again, you first need to pct unmount 101.
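The "100000 + UID" shift described above can be made concrete with a tiny sketch (assuming Python; 100000 is the base of Proxmox's default unprivileged mapping, and the passthrough set mimics a custom "lxc.idmap: u 1011 1011 1" entry):

```python
# What a chown done *inside* the container looks like from the host.

DEFAULT_BASE = 100000  # default unprivileged map: container 0..65535 -> host 100000..165535

def host_uid_default(container_uid):
    """Host UID under the default map: everything is shifted by the base."""
    return DEFAULT_BASE + container_uid

def host_uid_custom(container_uid, passthrough=frozenset({1011})):
    """Host UID with a 1:1 passthrough entry for selected IDs."""
    if container_uid in passthrough:
        return container_uid
    return DEFAULT_BASE + container_uid

# chown 1011 inside the container, seen from the host with `ls -ln`:
print(host_uid_default(1011))  # 101011 under the default mapping
print(host_uid_custom(1011))   # 1011 once the idmap entry is in place
```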
 
Thank you very much.

Attached you find the requested file.
Here I noted a difference.
Using the command lxc-start -F -n 101 -l DEBUG -o /tmp/lxc-101.log instead of lxc-start -F -n 101 leads to a console I can log in to, even with the mapping present!

I also checked the file system in the way you described, and the files show the correct UID and GID of 1011.
 

Attachments

I don't see any mention of the ID map in this start log, not even the default one. Is your container maybe privileged now that you restored it?

It might also be that lxc-start has not picked up the changed configuration yet. I think you need to start it at least once with pct start <ID> or via the GUI, after editing the configuration. Only then will lxc-start see the changes too.
 
You are right.
Now the attached log looks different! :)

The config file has not changed.
However, the web interface says that this container is privileged.
In the beginning it was definitely set up as an unprivileged container.

What can I do?
 

Attachments

When you define a mapping for a privileged container, the root user is also remapped, so that's why you couldn't log in.

The easiest option is to just use a privileged container if the appliance is not security-critical. Otherwise, remove the mapping, back up, and then restore as unprivileged again (there is a checkbox when restoring). Then create the mapping again and change the owner of the relevant files on the host (with pct mount etc.), and finally start the container with the GUI or pct start.
 
Fabian, you made my day :D

I restored the container via the command line and missed the "unprivileged" argument.

Now I restored it another time and everything works fine!
 
