proxmox container uid/gid mapping

RolandK

Renowned Member
Mar 5, 2019
hello,

i'm new to proxmox containers.

i want to virtualise pbs into a container and map a host directory to the container.

i successfully did that mapping, but i have problems with directory permissions / uid mapping.

it seems it's a bit complex/complicated, and while reading into it, i came across this:

https://linuxcontainers.org/lxd/docs/latest/userns-idmap/

it seems raw.idmap is what i need, but apparently this is not available for proxmox containers?

is there an easy/elegant way to tell a specific proxmox container to map container uid/gid 0-2000 (i.e. the ids used within the container) to uid/gid 200000-202000 on the host?

i tried the approach described at https://pve.proxmox.com/wiki/Unprivileged_LXC_containers , but it does not work; after setting the appropriate options, i cannot log into the container anymore
 
By default the users 0-2000 are mapped to 100000-102000. Sounds like you successfully changed their mapping to 200000-202000 (do you really need to remap the virtual root user and 1999 other users?). However, files that were owned by users 0-2000 are still owned by 100000-102000 on the filesystem(s) as seen from the Proxmox host. Unfortunately, those users are no longer mapped, so their files now probably all show up as nobody:nogroup. You probably need to chown all those files (via the Proxmox host) to the new users (as seen from the Proxmox host), but I have limited experience with this, as I usually only manually map a few groups rather than thousands of users.
 
Usually remapping UID 34 to 34 instead of 34 to 100034 should be enough for the datastore. So:
1.) install a new unprivileged Debian 11 container
2.) edit the LXC's config file (/etc/pve/lxc/YourVMID.conf) and add the lines:
Code:
# map uid/gid 0-33 to 100000-100033
lxc.idmap = u 0 100000 34
lxc.idmap = g 0 100000 34
# map uid/gid 34 to 34 (that's the "backup" user that needs full access to the PBS datastore)
lxc.idmap = u 34 34 1
lxc.idmap = g 34 34 1
# map uid/gid 35-65535 to 100035-165535
lxc.idmap = u 35 100035 65501
lxc.idmap = g 35 100035 65501
# bind-mount folder from host to lxc
mp0: /path/on/host,mp=/path/inside/lxc
3.) edit the host's /etc/subuid and add the line: root:34:1
4.) edit the host's /etc/subgid and add the line: root:34:1
5.) change the owner of the folder you want to bind-mount to UID 34: chown -R 34:34 /path/to/your/folder
6.) make sure the LXC isn't started and mount the root filesystem of the LXC on the PVE host (see the "pct mount" command)
7.) find all files and folders of that mounted filesystem that are owned by UID/GID 100034 and chown them to UID 34 (this is a bit annoying... you might want to google for a script that can do that for you, maybe something similar to https://forum.proxmox.com/threads/c...emapping-to-privileged-lxc.120461/post-553966)
8.) unmount it ("pct unmount" command) and reboot the PVE server (not sure which services would need a restart for the config changes to take effect)
9.) start the LXC and install the PBS like described here: https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-server-on-debian
10.) make sure you don't forget to enable the LXC's "nesting" feature
11.) ls -la should now show the bind-mounted folder on the host as owned by UID 34 and inside the LXC owned by user "backup". If it is owned by user "nobody", something is wrong.
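The chown in step 7 can be sketched roughly like this (a sketch only; the rootfs path and VMID are assumptions, use whatever path "pct mount <vmid>" prints on your system):

```shell
# Sketch: re-own everything that belongs to the old mapped UID/GID
# (here 100034) to the new host-side UID/GID (34) in a mounted rootfs.
# chown/chgrp -h also fix symlinks instead of following them.
shift_ids() {
  rootfs=$1; old_uid=$2; new_uid=$3; old_gid=$4; new_gid=$5
  find "$rootfs" -uid "$old_uid" -exec chown -h "$new_uid" {} +
  find "$rootfs" -gid "$old_gid" -exec chgrp -h "$new_gid" {} +
}

# usage (on the PVE host, container stopped and mounted via "pct mount"):
#   shift_ids /var/lib/lxc/101/rootfs 100034 34 100034 34
```

Note that the UID and GID passes run separately, so files where only the group matches are handled too.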

Edit:
Forgot the bind-mount...added it to step 2
 
I run a PBS in a container and I didn't need to do any mapping. Just make sure the container user (100034) has the necessary permissions on the directory used for the datastore, which I mounted on the host and bind-mounted into the container using lxc.mount.entry: /srv/pbs srv/pbs none bind 0 0.
 
I run a PBS in a container and I didn't need to do any mapping. Just make sure the container user (100034) has the necessary permissions on the directory used for the datastore, which I mounted on the host and bind-mounted into the container using lxc.mount.entry: /srv/pbs srv/pbs none bind 0 0.
Right... simply chowning the folder to UID 100034 should work too... didn't think of that... too simple ^^
 
thank you very much for your help.

but why is this so complicated?

why can't i just tell a specific container that i want to map uids 0-2000 to 200000-202000 on the host? (and another container to map 0-2000 to 300000-302000)?

lxc does seem to have support for this!?
 
You can map that. But this is just mapping; it won't change the owner of the existing files/folders. Let's say with the default 0-65535 to 100000-165535 remapping, everything owned by UID 0-2000 inside the LXC was actually owned by UID 100000-102000 on the root filesystem. Now you change the remapping to 0-2000 -> 200000-202000 plus 2001-65535 -> 102001-165535. All the tens of thousands of existing files and folders on the root filesystem are still owned by UID 100000 to 102000. But UID 100000-102000 isn't mapped anymore, so those files and folders are now owned by "nobody". To fix things you would need to chown everything owned by UID 100000-102000 to UID 200000-202000, so that it is owned again by the mapped UID 0-2000 inside the LXC.
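As a concrete sketch, such a split mapping could look like this in the container config (assuming the usual /etc/pve/lxc/<vmid>.conf; the two ranges must add up to the full 65536 IDs, and the host's /etc/subuid and /etc/subgid must grant root both host ranges, e.g. root:200000:2001 in addition to the default root:100000:65536):

```
# container 0-2000 -> host 200000-202000 (2001 IDs)
lxc.idmap: u 0 200000 2001
lxc.idmap: g 0 200000 2001
# container 2001-65535 -> host 102001-165535 (63535 IDs)
lxc.idmap: u 2001 102001 63535
lxc.idmap: g 2001 102001 63535
```

Again, this only changes the mapping, not the ownership of files already on disk.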

You can shuffle the name signs next to the door bells as much as you like; this won't change the residents. The flats are still occupied by the same people, until you force them to switch flats to match the signs again. ;)

And yes...unprivileged LXCs are really that annoying...
 
i wasn't aware that the container image would contain the host's view of the uids/gids in the fs and not the container's view.

that explains why my container didn't work anymore after remapping the whole range.

i came across https://á.se/chown-shifting-subuid-and-subgid-like-a-bash-ninja/ and am currently experimenting with changing the uids/gids in the container image

didn't i read several times that, when changing a container privileged<->unprivileged, there was that "backup->restore" trick to make it work?

why not add a script like this for support, instead of letting people shuffle data around...!?

i now have an easy mapping strategy which satisfies me to some degree:

1. change uids/gids in the container image with the script above (need to remove the symlink skipping and add -h to chown, so it also chowns symlinks appropriately)
2. add lxc.idmap: u 0 200000 65536 and lxc.idmap: g 0 200000 65536 to the container configuration
3. add root:200000:65536 to the subuid and subgid files
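the offset shift from step 1 could be sketched like this (a sketch only; the bases and rootfs path are assumptions from this thread, and it keys on the file's uid being inside the old range):

```shell
# Sketch: shift every owner in [old_base, old_base+65536) by a constant
# offset to the new base. chown -h changes symlinks themselves instead of
# following them.
shift_tree() {
  rootfs=$1; old_base=$2; new_base=$3
  find "$rootfs" -print | while IFS= read -r f; do
    uid=$(stat -c %u "$f"); gid=$(stat -c %g "$f")
    if [ "$uid" -ge "$old_base" ] && [ "$uid" -lt $((old_base + 65536)) ]; then
      chown -h "$((uid - old_base + new_base)):$((gid - old_base + new_base))" "$f"
    fi
  done
}

# usage: shift_tree /var/lib/lxc/101/rootfs 100000 200000
```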

there remains only one problem:

/proc and /sys bind mounts in the container are now nobody:nogroup (65534) instead of root

still searching for how to make them appear as root inside the container, like with the default configuration.


when the mapping defaults to 100000+ on the host, why is root from the host remapped to 65534 in the container when the container is remapped to 200000+?


oh, i must have looked wrong. i set up 2 fresh containers with debian 11+12, and for both, /proc and /sys are mapped to nobody, too. so all is fine
 
i strongly dislike doing a backup/restore of container images/files as a general approach for nothing but changing file ownership.

what if that container is terabytes in size?

i think some sophisticated helper script which could also save and restore ownership information and shift ownership uids would be better
 
i think some sophisticated helper script which could also save and restore ownership information and shift ownership uids would be better
Given the threads on this forum, I agree. User/group mapping for unprivileged containers is always complex and error prone. It would be nice to have a script or GUI, but it does not bother me enough to start working on it.
 
Given the threads on this forum, I agree. User/group mapping for unprivileged containers is always complex and error prone. It would be nice to have a script or GUI, but it does not bother me enough to start working on it.
Jup. A lot of beginners buy some weak hardware, then want to use LXCs and are totally overwhelmed by all the isolation mechanisms like user remapping. It wouldn't be bad to have some kind of GUI or wizard integrated, even if just to reduce the amount of community support requests.
 
