newuidmap: uid range [1100-1101) -> [1100-1101) not allowed

michaelj

Hi Community,

I'm trying to configure my unprivileged container with id mapping (PVE 6.2).

Following the documentation at https://pve.proxmox.com/wiki/Unprivileged_LXC_containers, my container fails to start with the error below.

Is there a range that we cannot map?

Inside the container, myuser has the same uid, 1100.

lxc-start: 523: conf.c: lxc_map_ids: 2779 newuidmap failed to write mapping "newuidmap: uid range [1100-1101) -> [1100-1101) not allowed": newuidmap 31321 0 100000 1100 1100 1100 1 1101 101101 64530
lxc-start: 523: start.c: lxc_spawn: 1690 Failed to set up id mapping.
lxc-start: 523: start.c: __lxc_start: 1957 Failed to spawn container "523"
lxc-start: 523: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: 523: tools/lxc_start.c: main: 314 Additional information can be obtained by setting the --logfile and --logpriority options

lxc.idmap: u 0 100000 1100
lxc.idmap: g 0 100000 1100
lxc.idmap: u 1100 1100 1
lxc.idmap: g 1100 1100 1
lxc.idmap: u 1101 101101 64530
lxc.idmap: g 1101 101101 64530
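
To spell out the syntax (lxc.idmap: <u|g> <container id> <host id> <count>), these lines are meant to do the following:

Code:
# u 0    100000 1100  -> container uids 0-1099     => host uids 100000-101099
# u 1100 1100   1     -> container uid  1100       => host uid  1100
# u 1101 101101 64530 -> container uids 1101-65630 => host uids 101101-165630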

/etc/subuid
myuser:1100:1

/etc/subgid
redmine:1100:1

id myuser
uid=1100(myuser) gid=1100(myuser) groups=1100(myuser)

Regards.
 
Hi Moayad,

Here is the full container config:
arch: amd64
cpulimit: 4
cpuunits: 1024
features: nesting=1
hostname: host1
memory: 3096
mp0: /apps/scripts,mp=/apps/scripts
mp1: /share,mp=/share
nameserver: xxxx
net0: name=eth3,bridge=vmbr2,hwaddr=A2:A9:02:9E:B7:65,ip=172.25.2.7/16,type=veth
onboot: 1
ostype: debian
rootfs: zfs-storage:subvol-523-disk-1,size=23G
searchdomain: vrack
swap: 256
unprivileged: 1
lxc.prlimit.nofile: 65536
lxc.idmap: u 0 100000 1100
lxc.idmap: g 0 100000 1100
lxc.idmap: u 1100 1100 1
lxc.idmap: g 1100 1100 1
lxc.idmap: u 1101 101101 64530
lxc.idmap: g 1101 101101 64530
 
Hello,
UP please.
 
Bumping ("up") is generally considered bad style here and will most likely not get you a faster answer. Staff checks unanswered threads regularly in case the community could not provide an answer.

If you want a guaranteed response in less than one day, I suggest you take a look at our Basic or Standard subscriptions [1].


Have you tried a different mapping, either for a different user or the same one, or changed the mapping in general? If you have and it still does not work, please post the output of the following command:

lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
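
Once it has failed, the interesting lines can be filtered out of the log, for example (using container id 523 from this thread):

Code:
lxc-start -n 523 -F -l DEBUG -o /tmp/lxc-523.log
grep -e ERROR -e WARN /tmp/lxc-523.log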



[1] https://www.proxmox.com/en/proxmox-ve/pricing
 
Hi :)

I am facing the same issue as michaelj and would like to follow up here.

I am trying to give a normal user inside an LXC container read permissions on a bind mount.

My PVE config:
Code:
arch: amd64
cores: 4
hostname: xx
memory: 2048
mp0: /ssd/xx,mp=/xx
net0: xx
net1: xx
ostype: ubuntu
rootfs: zfs_crypt_vmstore:subvol-120-disk-0,size=50G
swap: 2048
unprivileged: 1
lxc.idmap: u 0 100000 1009
lxc.idmap: g 0 100000 1009
lxc.idmap: u 1009 1009 1
lxc.idmap: g 1009 1009 1
lxc.idmap: u 1010 101010 64526
lxc.idmap: g 1010 101010 64526

Debug Output:
Code:
root@pve:/ssd# grep -e ERROR -e WARN /tmp/lxc-ID.log
lxc-start 120 20210316225208.493 ERROR    conf - conf.c:lxc_map_ids:2878 - newuidmap failed to write mapping "newuidmap: uid range [1009-1010) -> [1009-1010) not allowed": newuidmap 29910 0 100000 1009 1009 1009 1 1010 101010 64526
lxc-start 120 20210316225208.493 ERROR    start - start.c:lxc_spawn:1726 - Failed to set up id mapping.
lxc-start 120 20210316225208.493 ERROR    start - start.c:__lxc_start:1999 - Failed to spawn container "120"
lxc-start 120 20210316225208.493 WARN     start - start.c:lxc_abort:1013 - No such process - Failed to send SIGKILL via pidfd 54 for process 29910
lxc-start 120 20210316225208.682 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/unified//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.682 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/unified//lxc.monitor/120"
lxc-start 120 20210316225208.682 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/unified//lxc.monitor/120"
lxc-start 120 20210316225208.682 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/systemd//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.682 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/systemd//lxc.monitor/120"
lxc-start 120 20210316225208.682 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/systemd//lxc.monitor/120"
lxc-start 120 20210316225208.683 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/rdma//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.683 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/rdma//lxc.monitor/120"
lxc-start 120 20210316225208.683 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/rdma//lxc.monitor/120"
lxc-start 120 20210316225208.713 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/cpuset//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.713 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/cpuset//lxc.monitor/120"
lxc-start 120 20210316225208.713 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/cpuset//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/hugetlb//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/hugetlb//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/hugetlb//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/cpu,cpuacct//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/cpu,cpuacct//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/cpu,cpuacct//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/devices//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/devices//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/devices//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/blkio//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/blkio//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/blkio//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/freezer//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/freezer//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/freezer//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/net_cls,net_prio//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/net_cls,net_prio//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/net_cls,net_prio//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/perf_event//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/perf_event//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/perf_event//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/pids//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/pids//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/pids//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/memory//lxc.monitor/120/lxc.pivot"
lxc-start 120 20210316225208.714 WARN     utils - utils.c:lxc_rm_rf:1895 - Device or resource busy - Failed to delete "/sys/fs/cgroup/memory//lxc.monitor/120"
lxc-start 120 20210316225208.714 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1103 - Failed to destroy "/sys/fs/cgroup/memory//lxc.monitor/120"
lxc-start 120 20210316225209.650 ERROR    lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start 120 20210316225209.650 ERROR    lxc_start - tools/lxc_start.c:main:314 - Additional information can be obtained by setting the --logfile and --logpriority options


I also tried the lxc config without the additional ~64k uid range, like this, and got the same error.
Code:
lxc.idmap = u 0 100000 1009
lxc.idmap = g 0 100000 1009
lxc.idmap = u 1009 1009 1
lxc.idmap = g 1009 1009 1
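
As far as I understand, newuidmap checks the requested ranges against what the host grants to root (since PVE starts the container as root), so I assume these are the relevant entries to look at:

Code:
grep ^root /etc/subuid /etc/subgid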
 
Did you edit /etc/subuid and /etc/subgid on the host to include the mapped user id you want to use? By default, only user ids 100000-165535 can be mapped. I think this is what the wiki indicates to do, but it wasn't clear to me at first. I thought it was saying to edit subuid & subgid *within* the container, not on the host.
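
For example, with the mapping from the config above (container uid/gid 1009 passed straight through), the host's /etc/subuid and /etc/subgid would each need entries along these lines:

Code:
root:100000:65536
root:1009:1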
 
Okay, the documentation on the wiki could be better ;-)

The two main points to solve this problem are:
  1. `/etc/subuid` and `/etc/subgid` have to include all mapped ids for the user `root`, since root is the user that starts the LXC container.
    These are the files on the Proxmox host, not the ones in the container!

  2. In the lxc config you must not map any source id twice; the ranges have to be unique and non-overlapping (see the example below).
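
As an illustration of point 2, a hypothetical pair like this would be rejected, because source uid 1100 is covered by both ranges:

Code:
lxc.idmap: u 0 100000 1101   # covers container uids 0-1100
lxc.idmap: u 1100 1100 1     # covers container uid 1100 a second time -> invalid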


So my final working solution for a common case, where you want to forward
  • "normal" users (1000-2999),
  • user groups (1000-2999), and
  • maybe a single special group, "users" (100),
is the following pair of configs:

/etc/pve/nodes/[your-node]/lxc/[id].conf:
Code:
# map the first 1000 (0-999) system users to container-specific ones (100000-100999)
lxc.idmap: u 0 100000 1000

# map normal users (1000-2999) to the container (1000-2999)
lxc.idmap: u 1000 1000 2000

# map special "nobody" system user to container-specific one (165534)
lxc.idmap: u 65534 165534 1


# map the first 100 (0-99) system groups to container-specific ones (100000-100099)
lxc.idmap: g 0 100000 100

# map special group "users" (100) to the container (100)
lxc.idmap: g 100 100 1

# map the remaining 899 system groups (101-999) to container-specific ones (100100-100998)
lxc.idmap: g 101 100100 899

# map user-groups (1000-2999) to the container (1000-2999)
lxc.idmap: g 1000 1000 2000

# map special group "nogroup" to the container-specific one (165534)
lxc.idmap: g 65534 165534 1


/etc/subgid:
Code:
# container-specific groups
root:100000:65536

# custom user groups 1000-2999
root:1000:2000

# default "users" group 100
root:100:1

/etc/subuid:
Code:
# container users
root:100000:65536

# custom users from 1000-2999
root:1000:2000
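
To double-check that the mappings were applied, you can inspect the uid map of the container's init process from the host once it is running, for example (523 being a container id; lxc-info prints the init pid):

Code:
lxc-info -n 523 -p          # PID: <pid>
cat /proc/<pid>/uid_map     # one line per range: <container id> <host id> <count>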
 
Many thanks sir!
This indeed solved the problem of accessing files mapped from the host inside the guest.

I suggest marking this thread as [SOLVED].
 
