Problems with unprivileged LXC and lxc.idmap

initB10r

Hello all,

Somehow I do not understand lxc.idmap.

On the Proxmox host I have created the user lxcdocker with uid 1000 and the user dockeruser with uid 1001.

My goal is to have root (uid 0) of my LXC Docker container run as uid 1000 on the Proxmox host, and my container user 1001 mapped to host uid 1001.

Now I have done the following mapping:

Code:
unprivileged: 1
lxc.idmap: u 0 1000 1
lxc.idmap: g 0 1000 1
lxc.idmap: u 1000 101000 1
lxc.idmap: g 1000 101000 1
lxc.idmap: u 1001 1001 1
lxc.idmap: g 1001 1001 1
lxc.idmap: u 1002 101002 64535
lxc.idmap: g 1002 101002 64535
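For reference, each lxc.idmap line follows the pattern <u|g> <first container id> <first host id> <count>, i.e. it maps <count> consecutive ids starting at the given container id onto consecutive host ids starting at the given host id. Annotated:

Code:
# lxc.idmap: <u|g> <first container id> <first host id> <count>
lxc.idmap: u 0 1000 1     # container uid 0 (root) -> host uid 1000
lxc.idmap: u 1001 1001 1  # container uid 1001 -> host uid 1001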

With these settings the container will not start, but with the following it will:

Code:
unprivileged: 1
lxc.idmap: u 0 0 1
lxc.idmap: g 0 0 1
lxc.idmap: u 1 100001 1000
lxc.idmap: g 1 100001 1000
lxc.idmap: u 1001 1001 1
lxc.idmap: g 1001 1001 1
lxc.idmap: u 1002 101002 64535
lxc.idmap: g 1002 101002 64535

If I understand it correctly, root in the container is now mapped to root on the host, and not to uid 1000.

I have tried "everything", but whenever I want to map root to another ID on the Proxmox host, the container doesn't start.

Does anyone have any idea?

Best regards
Marc
 
What error message do you get when you use this?
Code:
lxc.idmap: u 0 1000 1
lxc.idmap: g 0 1000 1
lxc.idmap: u 1 100001 1000
lxc.idmap: g 1 100001 1000
lxc.idmap: u 1001 1001 1
lxc.idmap: g 1001 1001 1
lxc.idmap: u 1002 101002 64535
lxc.idmap: g 1002 101002 64535
You might have to edit /etc/subuid and /etc/subgid as well, but I'm not sure.
 

I only see the following message in Proxmox:
Bash:
lxc-console: 902: ../src/lxc/tools/lxc_console.c: main: 129 902 is not running

Is there a place in Proxmox where I can get more information about why a container is not starting?
 
Hi, you might also need to specify a mapping for some of the system accounts in the container with uids between 1 and 999 (I am not exactly sure which ones are needed, though). Your first mapping does not specify any mappings for container uids 1 to 999. Maybe you could also add something like
Code:
lxc.idmap: u 1 100001 999
lxc.idmap: g 1 100001 999
to map container uids/gids 1 to 999 to host uids/gids 100001 to 100999.
 
I only see the following message in Proxmox:
Bash:
lxc-console: 902: ../src/lxc/tools/lxc_console.c: main: 129 902 is not running
You probably need to add this to /etc/subuid and /etc/subgid to allow mapping (user and group) 1000 and 1001:
Code:
root:1000:1
root:1001:1
Is there a place in Proxmox where I can get more information about why a container is not starting?
Check journalctl from around the time of trying to start the container.
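For example (the exact unit name and time window are up to you; the pve-container@902.service unit appears in the log further down):

Bash:
# entries for the container's service unit
journalctl -u pve-container@902.service
# or everything from the last ten minutes
journalctl --since "10 minutes ago"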
 
I adjusted the mapping and added the suggested range for uids/gids 1 to 999. Here is my current mapping:
Code:
lxc.idmap: u 0 1000 1
lxc.idmap: g 0 1000 1
lxc.idmap: u 1 100001 999
lxc.idmap: g 1 100001 999
lxc.idmap: u 1000 101000 1
lxc.idmap: g 1000 101000 1
lxc.idmap: u 1001 1001 1
lxc.idmap: g 1001 1001 1
lxc.idmap: u 1002 101002 64535
lxc.idmap: g 1002 101002 64535

Here is my /etc/subgid:
Code:
root:100000:65536
root:1000:1
root:1001:1

And finally my /etc/subuid:
Code:
root:100000:65536
root:1000:1
root:1001:1

journalctl shows the following lines during startup:
Code:
Jan 23 11:59:33 proxmox pvedaemon[1202848]: <root@pam> starting task UPID:proxmox:0012C58E:07249539:63CE6895:vzstart:902:root@pam:
Jan 23 11:59:33 proxmox pvedaemon[1230222]: starting CT 902: UPID:proxmox:0012C58E:07249539:63CE6895:vzstart:902:root@pam:
Jan 23 11:59:34 proxmox systemd[1]: Started PVE LXC Container: 902.
Jan 23 11:59:34 proxmox kernel: EXT4-fs (dm-8): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jan 23 11:59:34 proxmox audit[1230245]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-902_</var/lib/lxc>" pid=1230245 comm="apparmor_parser"
Jan 23 11:59:34 proxmox kernel: audit: type=1400 audit(1674471574.822:386): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-902_</var/lib/lxc>" pid=1230245 comm="apparmor_parser"
Jan 23 11:59:34 proxmox pvedaemon[1230222]: startup for container '902' failed
Jan 23 11:59:34 proxmox pvedaemon[1202848]: <root@pam> end task UPID:proxmox:0012C58E:07249539:63CE6895:vzstart:902:root@pam: startup for container '902' failed
Jan 23 11:59:35 proxmox audit[1230251]: AVC apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-902_</var/lib/lxc>" pid=1230251 comm="apparmor_parser"
Jan 23 11:59:35 proxmox kernel: audit: type=1400 audit(1674471575.074:387): apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-902_</var/lib/lxc>" pid=1230251 comm="apparmor_parser"
Jan 23 11:59:35 proxmox pvedaemon[1226300]: unable to get PID for CT 902 (not running?)
Jan 23 11:59:35 proxmox pvedaemon[1230255]: starting lxc termproxy UPID:proxmox:0012C5AF:072495CA:63CE6897:vncproxy:902:root@pam:
Jan 23 11:59:35 proxmox pvedaemon[1202848]: <root@pam> starting task UPID:proxmox:0012C5AF:072495CA:63CE6897:vncproxy:902:root@pam:
Jan 23 11:59:35 proxmox pvedaemon[1217649]: <root@pam> successful auth for user 'root@pam'
Jan 23 11:59:35 proxmox pvedaemon[1202848]: <root@pam> end task UPID:proxmox:0012C5AF:072495CA:63CE6897:vncproxy:902:root@pam: OK
Jan 23 11:59:36 proxmox systemd[1]: pve-container@902.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 11:59:36 proxmox systemd[1]: pve-container@902.service: Failed with result 'exit-code'.

Can someone see the error in the log?
 
Here is the debug log:
Code:
 pct start 902 --debug
lxc_map_ids: 3672 newuidmap failed to write mapping "newuidmap: uid range [1002-65537) -> [101002-165537) not allowed": newuidmap 1238485 0 1000 1 1 100001 999 1000 101000 1 1001 1001 1 1002 101002 64535
lxc_spawn: 1791 Failed to set up id mapping.
__lxc_start: 2074 Failed to spawn container "902"
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type u nsid 1 hostid 100001 range 999
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type g nsid 1 hostid 100001 range 999
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type u nsid 1000 hostid 101000 range 1
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type g nsid 1000 hostid 101000 range 1
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type u nsid 1001 hostid 1001 range 1
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type g nsid 1001 hostid 1001 range 1
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type u nsid 1002 hostid 101002 range 64535
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type g nsid 1002 hostid 101002 range 64535
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     conf - ../src/lxc/conf.c:run_script_argv:337 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "902", config section "lxc"
DEBUG    seccomp - ../src/lxc/seccomp.c:parse_config_v2:656 - Host native arch is [3221225534]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "kexec_load errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[246:kexec_load] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "open_by_handle_at errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[304:open_by_handle_at] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "init_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[175:init_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "finit_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[313:finit_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "delete_module errno 1"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[176:delete_module] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "ioctl errno 1 [1,0x9400,SCMP_CMP_MASKED_EQ,0xff00]"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[16:ioctl] action[327681:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "keyctl errno 38"
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[250:keyctl] action[327718:errno] arch[0]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[250:keyctl] action[327718:errno] arch[1073741827]
INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[250:keyctl] action[327718:errno] arch[1073741886]
INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:1017 - Merging compat seccomp contexts into main context
INFO     start - ../src/lxc/start.c:lxc_init:884 - Container "902" is initialized
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_create:1029 - The monitor process uses "lxc.monitor/902" as cgroup
DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_payload_create:1137 - The container process uses "lxc/902/ns" as inner and "lxc/902" as limit cgroup
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWUSER
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWNS
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWPID
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWUTS
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWIPC
INFO     start - ../src/lxc/start.c:lxc_spawn:1765 - Cloned CLONE_NEWCGROUP
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved user namespace via fd 17 and stashed path as user:/proc/1238466/fd/17
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved mnt namespace via fd 18 and stashed path as mnt:/proc/1238466/fd/18
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved pid namespace via fd 19 and stashed path as pid:/proc/1238466/fd/19
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved uts namespace via fd 20 and stashed path as uts:/proc/1238466/fd/20
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved ipc namespace via fd 21 and stashed path as ipc:/proc/1238466/fd/21
DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved cgroup namespace via fd 22 and stashed path as cgroup:/proc/1238466/fd/22
DEBUG    conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3520 - The binary "/usr/bin/newuidmap" does have the setuid bit set
DEBUG    conf - ../src/lxc/conf.c:idmaptool_on_path_and_privileged:3520 - The binary "/usr/bin/newgidmap" does have the setuid bit set
DEBUG    conf - ../src/lxc/conf.c:lxc_map_ids:3605 - Functional newuidmap and newgidmap binary found
ERROR    conf - ../src/lxc/conf.c:lxc_map_ids:3672 - newuidmap failed to write mapping "newuidmap: uid range [1002-65537) -> [101002-165537) not allowed": newuidmap 1238485 0 1000 1 1 100001 999 1000 101000 1 1001 1001 1 1002 101002 64535
ERROR    start - ../src/lxc/start.c:lxc_spawn:1791 - Failed to set up id mapping.
DEBUG    network - ../src/lxc/network.c:lxc_delete_network:4173 - Deleted network devices
ERROR    start - ../src/lxc/start.c:__lxc_start:2074 - Failed to spawn container "902"
WARN     start - ../src/lxc/start.c:lxc_abort:1039 - No such process - Failed to send SIGKILL via pidfd 16 for process 1238485
startup for container '902' failed

Unfortunately, I do not understand the following line:
lxc_map_ids: 3672 newuidmap failed to write mapping "newuidmap: uid range [1002-65537) -> [101002-165537) not allowed": newuidmap 1238485 0 1000 1 1 100001 999 1000 101000 1 1001 1001 1 1002 101002 64535
lxc_spawn: 1791 Failed to set up id mapping.
 
You are mapping container uids up to 65536 to host uids up to 165536, which is not allowed according to your subuid file: root:100000:65536 only covers host uids 100000 to 165535.
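Spelled out, the arithmetic behind the rejected range:

Code:
# failing line:   lxc.idmap: u 1002 101002 64535
# container uids:  1002 ..   1002 + 64535 - 1 =  65536
# host uids:     101002 .. 101002 + 64535 - 1 = 165536
# /etc/subuid grants root:100000:65536, i.e. host uids 100000 .. 165535
# host uid 165536 lies outside that grant, so newuidmap rejects the mapping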
 
Now it works. I changed the last two lines from 64535 to 64534.

Here is the complete mapping:
Code:
lxc.idmap: u 0 1000 1
lxc.idmap: g 0 1000 1
lxc.idmap: u 1 100001 999
lxc.idmap: g 1 100001 999
lxc.idmap: u 1000 101000 1
lxc.idmap: g 1000 101000 1
lxc.idmap: u 1001 1001 1
lxc.idmap: g 1001 1001 1
lxc.idmap: u 1002 101002 64534
lxc.idmap: g 1002 101002 64534

1002 + 64535 = 65537, so the last line would have covered container uids up to 65536 and host uids up to 165536, which is one past the end of the root:100000:65536 range (host uids 100000 to 165535). With 64534 the range ends exactly at host uid 165535.

But then why did this configuration work?

Code:
lxc.idmap: u 0 0 1
lxc.idmap: g 0 0 1
lxc.idmap: u 1 100001 1000
lxc.idmap: g 1 100001 1000
lxc.idmap: u 1001 1001 1
lxc.idmap: g 1001 1001 1
lxc.idmap: u 1002 101002 64535
lxc.idmap: g 1002 101002 64535
 
It's crazy: now that the problem from above is fixed, I can't log in to the container as root anymore. If I remove the mapping, everything works normally again.

Does anyone have any idea what this could be caused by?
 
The container's root user is normally mapped to host uid 100000, but you remapped it to 1000. How do you log in? Does pct enter still work?
Maybe the files of the original root (100000) are not accessible to the new root (1000). I'm not sure if the templates can deal with that. You should be able to remap the files from the host, but I don't have experience with how to do that easily for all files (rough sketch below).

EDIT: For example, maybe the 1000 user cannot read root's .ssh directory and cannot log in via SSH.
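For the file remapping mentioned above, a rough, untested sketch (it assumes pct mount exposes the rootfs under /var/lib/lxc/902/rootfs and that only the old root's ids need shifting):

Bash:
# mount the container's filesystem on the host
pct mount 902
# hand files owned by the old container root (host uid/gid 100000)
# over to the new container root (host uid/gid 1000)
find /var/lib/lxc/902/rootfs -uid 100000 -exec chown -h 1000 {} +
find /var/lib/lxc/902/rootfs -gid 100000 -exec chgrp -h 1000 {} +
pct unmount 902

Every other remapped uid/gid would need the same treatment, which is why doing this for all files is tedious.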
 
It seems I haven't understood the mapping correctly yet. I had always thought that the container users are mapped in the direction container -> Proxmox host.

So root in the container always has uid 0, and through the mapping root's writes show up on the Proxmox host as uid 1000, for example, given the following mapping:

Code:
lxc.idmap: u 0 1000 1
lxc.idmap: g 0 1000 1

But if I log in with "pct enter 102" with the mapping above, an "ls -n" looks like this:
Code:
root@dockerTest3:/# ls -n
total 57
lrwxrwxrwx   1 65534 65534   7 Apr 24  2022 bin -> usr/bin
drwxr-xr-x   2 65534 65534   2 Apr 18  2022 boot
drwxr-xr-x   6     0     0 480 Jan 24 10:36 dev
drwxr-xr-x  69 65534 65534 149 Jan 24 10:36 etc
-rw-r--r--   1     0     0   0 Jan 24 10:36 fastboot
drwxr-xr-x   2 65534 65534   2 Apr 18  2022 home
lrwxrwxrwx   1 65534 65534   7 Apr 24  2022 lib -> usr/lib
lrwxrwxrwx   1 65534 65534   9 Apr 24  2022 lib32 -> usr/lib32
lrwxrwxrwx   1 65534 65534   9 Apr 24  2022 lib64 -> usr/lib64
lrwxrwxrwx   1 65534 65534  10 Apr 24  2022 libx32 -> usr/libx32
drwxr-xr-x   2 65534 65534   2 Apr 24  2022 media
drwxr-xr-x   2 65534 65534   2 Apr 24  2022 mnt
drwxr-xr-x   2 65534 65534   2 Apr 24  2022 opt
dr-xr-xr-x 421 65534 65534   0 Jan 24 10:36 proc
drwx------   2 65534 65534   4 Apr 24  2022 root
drwxr-xr-x   9     0     0 280 Jan 24 10:36 run
lrwxrwxrwx   1 65534 65534   8 Apr 24  2022 sbin -> usr/sbin
drwxr-xr-x   2 65534 65534   2 Apr 24  2022 srv
dr-xr-xr-x  13 65534 65534   0 Jan 24 10:36 sys
drwxrwxrwt   8 65534 65534   8 Jan 24 10:36 tmp
drwxr-xr-x  14 65534 65534  14 Apr 24  2022 usr
drwxr-xr-x  11 65534 65534  13 Apr 24  2022 var


If I remove the mapping, the "ls -n" looks like this:
Code:
root@dockerTest3:/# ls -n
total 57
lrwxrwxrwx   1     0     0   7 Apr 24  2022 bin -> usr/bin
drwxr-xr-x   2     0     0   2 Apr 18  2022 boot
drwxr-xr-x   6     0     0 480 Jan 24 10:38 dev
drwxr-xr-x  69     0     0 150 Jan 24 10:38 etc
drwxr-xr-x   2     0     0   2 Apr 18  2022 home
lrwxrwxrwx   1     0     0   7 Apr 24  2022 lib -> usr/lib
lrwxrwxrwx   1     0     0   9 Apr 24  2022 lib32 -> usr/lib32
lrwxrwxrwx   1     0     0   9 Apr 24  2022 lib64 -> usr/lib64
lrwxrwxrwx   1     0     0  10 Apr 24  2022 libx32 -> usr/libx32
drwxr-xr-x   2     0     0   2 Apr 24  2022 media
drwxr-xr-x   2     0     0   2 Apr 24  2022 mnt
drwxr-xr-x   2     0     0   2 Apr 24  2022 opt
dr-xr-xr-x 419 65534 65534   0 Jan 24 10:38 proc
drwx------   2     0     0   4 Apr 24  2022 root
drwxr-xr-x  12     0     0 380 Jan 24 10:38 run
lrwxrwxrwx   1     0     0   8 Apr 24  2022 sbin -> usr/sbin
drwxr-xr-x   2     0     0   2 Apr 24  2022 srv
dr-xr-xr-x  13 65534 65534   0 Jan 24 10:38 sys
drwxrwxrwt  10     0     0  10 Jan 24 10:38 tmp
drwxr-xr-x  14     0     0  14 Apr 24  2022 usr
drwxr-xr-x  11     0     0  13 Apr 24  2022 var

I do not understand this behavior. It looks as if the mapping somehow changes the permissions inside the container.

Is this correct or is it still completely different?
Sorry for the possibly stupid questions, but I'm still totally new to LXC containers and am still experimenting with how best to set up an LXC Docker container that stores all the data of Docker's own containers on my NAS.
 
The mapping does not change the ownership on the filesystem (it just changes the "view" or context of the container's processes). So what you are seeing is files/dirs owned by non-mapped users being translated to nobody/nogroup (65534).

I would recommend not messing around with manual idmapping if you don't understand the underlying mechanisms (yet).
 
I don't want to mess around with idmapping for its own sake, but if I use LXC containers I want to understand it; unfortunately I haven't found an explanation that works for me yet.
I have now created a user with uid 1000 both on the Proxmox host and in the LXC container, since my container maps root (0) to 1000:
Code:
lxc.idmap: u 0 1000 1
lxc.idmap: g 0 1000 1
lxc.idmap: u 1 100000 999
lxc.idmap: g 1 100000 999
lxc.idmap: u 1000 101000 1
lxc.idmap: g 1000 101000 1
lxc.idmap: u 1001 1001 2
lxc.idmap: g 1001 1001 2
lxc.idmap: u 1003 101002 64534
lxc.idmap: g 1003 101002 64534
Now when I do a "ls -n" in the container, I get the following output:
Code:
lrwxrwxrwx   1     1     1   7 Apr 24  2022 bin -> usr/bin
drwxr-xr-x   2     1     1   2 Apr 18  2022 boot
drwxr-xr-x   6     0     0 480 Feb  5 14:18 dev
drwxr-xr-x  69     1     1 152 Feb  5 14:18 etc
drwxr-xr-x   2     1     1   2 Apr 18  2022 home
lrwxrwxrwx   1     1     1   7 Apr 24  2022 lib -> usr/lib
lrwxrwxrwx   1     1     1   9 Apr 24  2022 lib32 -> usr/lib32
lrwxrwxrwx   1     1     1   9 Apr 24  2022 lib64 -> usr/lib64
lrwxrwxrwx   1     1     1  10 Apr 24  2022 libx32 -> usr/libx32
drwxr-xr-x   2     1     1   2 Apr 24  2022 media
drwxr-xr-x   2     1     1   2 Apr 24  2022 mnt
drwxr-xr-x   2     1     1   2 Apr 24  2022 opt
dr-xr-xr-x 338 65534 65534   0 Feb  5 14:18 proc
drwx------   3     1     1   6 Jan 24 11:40 root
drwxr-xr-x   9     0     0 280 Feb  5 14:19 run
lrwxrwxrwx   1     1     1   8 Apr 24  2022 sbin -> usr/sbin
drwxr-xr-x   2     1     1   2 Apr 24  2022 srv
dr-xr-xr-x  13 65534 65534   0 Feb  5 14:18 sys
drwxrwxrwt   4     0     0   4 Feb  5 14:18 tmp
drwxr-xr-x  14     1     1  14 Apr 24  2022 usr
drwxr-xr-x  11     1     1  13 Apr 24  2022 var
Now why have all the root-owned folders been mapped to daemon (1)?
I would really appreciate it if someone could explain this to me, as I would like to understand it.
 
Here's my theory: As fabian explained, the mapping does not change the ownership on the filesystem. The filesystem was created under the assumption that the container root is mapped to host uid 100000, so e.g. /boot is owned by uid 100000 on the filesystem. Now, the third idmap line maps container uid 1 to host uid 100000. /boot is still owned by uid 100000 on the filesystem, but from the container's view, /boot is now owned by uid 1.
You can check ownerships on the filesystem level (without the mappings getting in the way) using pct mount.
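For example (assuming container 902 and the default mount point /var/lib/lxc/902/rootfs):

Bash:
pct mount 902
# numeric uids/gids exactly as stored on disk, idmap not applied
ls -n /var/lib/lxc/902/rootfs
pct unmount 902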
 
Thanks fweber.

If I understand it correctly, the container's files are created in the HOST filesystem owned by uid 100000.
When the container is started, the mapping tells the container that all files owned by host uid 100000 are to be seen as uid 0 inside the container.

If my mapping instead assigns host uid 100000 to container uid 1, those files are then correctly shown as daemon inside the container.

Can I also look at the container's files on the HOST itself via "ls", to make this easier to understand?
 
