Yet another UID/GID remapping help thread.

Zephyrs

May 5, 2021
I'm trying to figure out UID/GID remapping to access mounts from within an unprivileged container. I believe I am doing everything according to the wiki, but I'm still getting odd behavior. System configuration is as follows:
  • PVE 6.4, fully updated
  • ZFS is used for the root filesystem
  • ZFS is also used on several pools across various drives/SSDs, mounted directly in the root file system as /testing, /bulk, and so on.
  • The unprivileged container has a mount point set with mp0: directory/on/ZFS/pool,mp=/arbitrary/directory in /etc/pve/lxc/###.conf
  • The unprivileged Ubuntu container has a user created with useradd -u 1000 -m worker

Everything works as expected at this point. The user can access their normal /home/worker directory and write to it. I can read the mount points, but can't write to them.

Adding the following should remap the user so it can read and write the mounted directories. In the container's conf file I add:
Code:
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
Add root:1000:1 to /etc/subgid and /etc/subuid.
chown -R 1000:1000 /mounted/folder
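For what it's worth, my understanding is that the mapping actually applied can be checked from inside the running container with cat /proc/self/uid_map (and gid_map); each line is container-ID, host-ID, count, so with the config above I'd expect:
Code:
cat /proc/self/uid_map
# expected with the mapping above:
#    0 100000   1000
# 1000   1000      1
# 1001 101001  64535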

I still cannot write to the mounted directories. I went back and commented out the idmaps and chowned the folder to the 101000 ID number and everything worked, but this does not appear to be an ideal solution as things get more complicated down the road.
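For context (as I understand the defaults, so treat this as an assumption): without any lxc.idmap lines, an unprivileged container simply maps host uid = 100000 + container uid, which is why chowning to 101000 works:
Code:
# default unprivileged mapping (no lxc.idmap lines in the conf):
#   container uid 0    (root)   -> host uid 100000
#   container uid 1000 (worker) -> host uid 101000
# so a host-side chown to 101000:101000 makes the directory writable by container uid 1000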

Furthermore, while the remaps are active, the remapped user can no longer write to their own home directory, which leads to interesting errors/warnings with tools like nano.
Code:
Unable to create directory /home/worker/.local/share/nano/: Permission denied
It is required for saving/loading search history or cursor positions.
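My guess (unverified) is that /home/worker was created while the default mapping was still in place, so on the host it is owned by uid/gid 101000; that ID has no reverse mapping under the custom idmap above, so inside the container it would show up as nobody/nogroup (65534), which would explain nano not being able to write there. Something like this from inside the container should confirm or rule that out:
Code:
ls -nd /home/worker
# if the theory holds, the owner/group show as 65534 65534 rather than 1000 1000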

Any information would be appreciated. If there's a better way to allow unprivileged containers to read/write mounted directories, I'd like to know.
 
Are you logged in as worker when attempting to access the directory?
Also, do you have any users on the host system which may also be mapped to uid:gid 1000? This is generally where regular (non-system) user IDs start on a Debian system.
Finally, are you sure that directory has write permissions throughout?
 
Yes. I created the container, added the mount points in the conf file, started the container, created the user, su'd to that user inside the container, and tested. The /home/worker directory worked fine: I created some files, read some others, and everything worked. I navigated to /mountpoint and could read files (again, not unexpected). Then I shut down the container, added the uid remappings, and chowned the directories being mounted. I still couldn't write to the mounts, plus the /home directory became unwritable. Chowning the home directories to 1000:1000 didn't work either.

This is on a clean installation of PVE 6.4. There are no other users created. These are the contents of the passwd file on the host system.
Code:
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-timesync:x:100:102:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
systemd-network:x:101:103:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:102:104:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
_apt:x:103:65534::/nonexistent:/usr/sbin/nologin
messagebus:x:104:111::/nonexistent:/usr/sbin/nologin
sshd:x:105:65534::/run/sshd:/usr/sbin/nologin
_rpc:x:106:65534::/run/rpcbind:/usr/sbin/nologin
postfix:x:107:113::/var/spool/postfix:/usr/sbin/nologin
statd:x:108:65534::/var/lib/nfs:/usr/sbin/nologin
gluster:x:109:116::/var/lib/glusterd:/usr/sbin/nologin
ceph:x:64045:64045:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
As you can see, there is no user 1000. Should there be? From my reading of the wiki, that isn't implied.

The directory is perfectly writable from the host system. I can write to it from within the container if I comment out the mappings and chown it to 101000:101000. I can write to it as root if I chown it to 100000:100000. There's no weird existing filesystem with peculiar permissions or anything. It's nothing more than a junker hard drive I have that I wiped and mounted with zpool add /dev/sd*.

EDIT:
Complete XXX.conf file
Code:
# uid map: from uid 0, map 1000 uids (in the ct) to the range starting at 100000 (on the host), so 0..999 (ct) → 100000..100999 (host)
# we map 1 uid starting from uid 1000 onto 1000, so 1000 (ct) → 1000 (host)
# we map the remaining 64535 uids starting from 1001 onto 101001, so 1001..65535 (ct) → 101001..165535 (host)
arch: amd64
cores: 8
hostname: Test.Box
memory: 32768
mp0: /testing,mp=/test
mp1: /bulk,mp=/bulk
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=32:BE:DE:C7:3B:EE,ip=dhcp,type=veth
ostype: ubuntu
rootfs: local-zfs:subvol-900-disk-0,size=4G
swap: 512
unprivileged: 1
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

Rather annoying that Proxmox sees fit to reorder the file contents; I like whitespace and comments for readability.
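One more data point I can gather: the mapping actually applied to the running container can be read from the host via the container init's uid_map/gid_map. Assuming the CT id is 900 (taken from the rootfs line above), something like:
Code:
# on the host: find the container's init PID, then dump its maps
lxc-info -n 900 -p           # prints "PID: <pid>"
cat /proc/<pid>/uid_map
cat /proc/<pid>/gid_map
# with the custom idmap applied, these should list the three ranges from the conf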
 
As you can see, there is no user 1000. Should there be? From my reading of the wiki, that isn't implied.
No, the Proxmox VE install doesn't create any additional users. I asked just in case you had done this yourself.

Could you post the output of ls -na on the mountpoint from the host and from within the container?
 
Relevant lines from within the host:
Code:
drwxr-xr-x   2 101000 101000   11 May  4 22:07 bulk
drwxr-xr-x   2   1000   1000    3 May  5 00:30 test
Complete host:
Code:
drwxr-xr-x  20      0      0   26 May  2 22:21 .
drwxr-xr-x  20      0      0   26 May  2 22:21 ..
lrwxrwxrwx   1      0      0    7 Apr 27 07:22 bin -> usr/bin
drwxr-xr-x   5      0      0   11 May  2 21:51 boot
drwxr-xr-x   2 101000 101000   11 May  4 22:07 bulk
drwxr-xr-x  18      0      0 4760 May  3 14:45 dev
drwxr-xr-x  88      0      0  175 May  5 06:17 etc
drwxr-xr-x   2      0      0    2 Mar 19 19:44 home
lrwxrwxrwx   1      0      0    7 Apr 27 07:22 lib -> usr/lib
lrwxrwxrwx   1      0      0    9 Apr 27 07:22 lib32 -> usr/lib32
lrwxrwxrwx   1      0      0    9 Apr 27 07:22 lib64 -> usr/lib64
lrwxrwxrwx   1      0      0   10 Apr 27 07:22 libx32 -> usr/libx32
drwxr-xr-x   2      0      0    2 Apr 27 07:22 media
drwxr-xr-x   3      0      0    3 Apr 27 07:22 mnt
drwxr-xr-x   2      0      0    2 Apr 27 07:22 opt
dr-xr-xr-x 713      0      0    0 May  2 22:04 proc
drwx------   5      0      0   11 May  4 00:22 root
drwxr-xr-x   4      0      0    4 May  2 21:50 rpool
drwxr-xr-x  28      0      0 1280 May  5 00:30 run
lrwxrwxrwx   1      0      0    8 Apr 27 07:22 sbin -> usr/sbin
drwxr-xr-x   2      0      0    2 Apr 27 07:22 srv
dr-xr-xr-x  13      0      0    0 May  2 22:04 sys
drwxr-xr-x   2   1000   1000    3 May  5 00:30 test
drwxrwxrwt   8      0      0    8 May  5 11:09 tmp
drwxr-xr-x  13      0      0   13 Apr 27 07:22 usr
drwxr-xr-x  11      0      0   13 Apr 27 07:22 var

From within the container as root (I removed the /bulk mapping from the conf file; I can re-add it if you want):
Code:
dr-xr-xr-x 712 65534 65534   0 May  5 06:12 proc
dr-xr-xr-x  13 65534 65534   0 May  5 06:12 sys
drwxr-xr-x   2 65534 65534   3 May  5 04:30 test
From within the container as worker:
Code:
dr-xr-xr-x 721 65534 65534   0 May  5 06:12 proc
dr-xr-xr-x  13 65534 65534   0 May  5 06:12 sys
drwxr-xr-x   2 65534 65534   3 May  5 04:30 test
Complete container as root
Code:
drwxr-xr-x  23     0     0  23 May  5 06:12 .
drwxr-xr-x  23     0     0  23 May  5 06:12 ..
drwxr-xr-x   2     0     0 149 May  5 05:27 bin
drwxr-xr-x   2     0     0   2 Apr 15  2020 boot
drwxr-xr-x   5     0     0 460 May  5 06:12 dev
drwxr-xr-x  76     0     0 161 May  5 06:12 etc
drwxr-xr-x   3     0     0   3 May  5 05:32 home
drwxr-xr-x  15     0     0  16 Apr 25  2020 lib
drwxr-xr-x   2     0     0   3 May  5 05:27 lib64
drwxr-xr-x   2     0     0   2 Apr 25  2020 media
drwxr-xr-x   2     0     0   2 Apr 25  2020 mnt
drwxr-xr-x   2     0     0   2 Apr 25  2020 opt
dr-xr-xr-x 712 65534 65534   0 May  5 06:12 proc
drwx------   4     0     0   7 May  5 06:18 root
drwxr-xr-x  12     0     0 400 May  5 06:13 run
drwxr-xr-x   2     0     0 130 May  5 05:27 sbin
drwxr-xr-x   2     0     0   2 Apr 25  2020 srv
dr-xr-xr-x  13 65534 65534   0 May  5 06:12 sys
drwxr-xr-x   2 65534 65534   3 May  5 04:30 test
drwxrwxrwt   9     0     0   9 May  5 15:48 tmp
drwxr-xr-x  10     0     0  10 Apr 25  2020 usr
drwxr-xr-x  11     0     0  13 Apr 25  2020 var

Out of curiosity I spun up another container real quick and chowned /bulk to 100000. That shows up as
Code:
drwxr-xr-x   2     0     0  11 May  5 02:07 bulk
and reads/writes work from within the container.
 
Partial update. Using adduser instead of useradd seems to resolve the inability to write to the mounted directories. I haven't bothered to dig too deeply into what is missing when using useradd, but at least that is resolved now.

The remapped user still cannot access directories within the container while the remap is active, though.

Is this intended behavior? I suppose I could chown those on the host to the UID/GID of the user, but this would seem to defeat much of the purpose/benefit of containerization/remaps.
 
From within the container as root (I removed the /bulk mapping from the conf file; I can re-add it if you want):
The test directory should also display uid 1000 inside the container. That should happen once the appropriate mapping is set and the container has been rebooted.

I can write to it from within the container if I comment out the mappings and chown it to 101000:101000. I can write to it as root if I chown it to 100000:100000.
This suggests that the mapping is not effective, as container user 1000 is still being mapped to 101000. You could perhaps try running usermod -u 1000 worker as root in the container, but seeing as it didn't work on initial setup, I'm unsure whether it'll work here.

If it doesn't, could you try to repeat the instructions again, but without adding a backing user in the container? So just add the mapping to the config file, append root:1000:1 to /etc/subuid and /etc/subgid, change the owner of the mountpoint on the host (chown -R 1000:1000 /mnt/point), and finally reboot the container.
After this, run ls -n in the container again, to see if the owner and group have changed to 1000.
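Roughly, with the paths/IDs from this thread substituted in (adjust as needed, and with the lxc.idmap lines already present in /etc/pve/lxc/900.conf), that would look like:
Code:
# on the host
echo "root:1000:1" >> /etc/subuid
echo "root:1000:1" >> /etc/subgid
chown -R 1000:1000 /testing           # the directory backing mp0
pct stop 900 && pct start 900         # restart the container so the mapping applies
# then check the ownership as seen from inside the container
pct exec 900 -- ls -nd /test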

Using adduser instead of useradd seems to resolve the inability to write to the mounted directories
I'm also not sure about this off the top of my head. I just know that adduser is the preferred user-facing tool, as it handles things like uid verification and home directory setup automatically. It still calls useradd to create the user.
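If you want to pin down the difference, comparing what each tool produced with some plain diagnostics should narrow it down (nothing here is specific to adduser/useradd internals):
Code:
getent passwd worker      # uid, primary gid, home directory, shell
id worker                 # supplementary group memberships
ls -nd /home/worker       # numeric ownership of the home directory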


Is this intended behavior? I suppose I could chown those on the host to the UID/GID of the user, but this would seem to defeat much of the purpose/benefit of containerization/remaps.
And this would defeat the purposes of remapping.
 
And this would defeat the purposes of remapping.

If I configure the remap and then create the user, the user appears to be created with the remapped UID/GID on the container's filesystem in rpool. That's effectively the same thing as simply chowning it from the host to 1000:1000 (or whatever the UID/GID is).

I'm not really seeing the point of the remapping in the first place if there's no apparent segregation going on. This entire process seems rather cobbled together, coming from the perspective of Docker-esque environments where I can just add working directories in a config and not worry about keeping track of a bunch of complex tables. I realize that LXC containers are far more feature-rich, but it's rather surprising that something as basic as giving multiple containers access to a shared directory is so convoluted.
 
