container sshd connection issue after lxc id remapping

sidkang

Member
Oct 18, 2019
Hi everyone, I cannot SSH into my LXC container after remapping its IDs, and I cannot figure out why. I have searched the web and the issue seems very rare. I have tried both Debian and Ubuntu LXC containers, and both run into the same kind of issue. I suspect my remapping is wrong somewhere, but I am unable to locate the fault.
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)

LXC SSHD SERVICE LOG
root@test:~# systemctl status sshd
* ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2019-10-18 11:17:08 UTC; 13min ago
Process: 144 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
Main PID: 164 (sshd)
Tasks: 1 (limit: 4915)
CGroup: /system.slice/ssh.service
`-164 /usr/sbin/sshd -D

Oct 18 11:17:07 test systemd[1]: Starting OpenBSD Secure Shell server...
Oct 18 11:17:08 test sshd[164]: Server listening on 0.0.0.0 port 22.
Oct 18 11:17:08 test sshd[164]: Server listening on :: port 22.
Oct 18 11:17:08 test systemd[1]: Started OpenBSD Secure Shell server.
Oct 18 11:17:17 test sshd[320]: fatal: setgroups: Invalid argument [preauth]

The ssh connection issue:
~ ❯❯❯ ssh -p 22 root@192.168.50.205 ✘ 255
ssh: connect to host 192.168.50.205 port 22: Connection refused
~ ❯❯❯ ssh -p 22 root@192.168.50.205 ✘ 255
ssh: connect to host 192.168.50.205 port 22: Connection refused

LXC.conf
arch: amd64
cores: 1
hostname: test
memory: 512
mp0: /hpool/media,mp=/media
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.50.1,hwaddr=02:09:BD:26:24:F7,ip=192.168.50.205/24,type=veth
ostype: ubuntu
rootfs: vm-cache:subvol-105-disk-0,size=8G
swap: 512
unprivileged: 1
lxc.idmap: u 0 100000 1001
lxc.idmap: g 0 100000 1001
lxc.idmap: u 1001 1001 1
lxc.idmap: g 1001 1001 1
lxc.idmap: u 1002 101002 64530
lxc.idmap: g 1002 101002 64530

/etc/subuid & subgid
# /etc/subuid
root:100000:65536
root:1001:1

# /etc/subgid
root:100000:65536
root:1001:1
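
For reference, each lxc.idmap line reads `[u|g] <first container ID> <first host ID> <count>`, so the map above expands to the following ranges (a worked annotation, assuming the standard lxc.idmap semantics):

Code:
# u 0    100000 1001   -> container UIDs 0..1000     map to host 100000..101000
# u 1001 1001   1      -> container UID  1001        maps to host 1001
# u 1002 101002 64530  -> container UIDs 1002..65531 map to host 101002..165531
# Note: container IDs 65532..65535 end up unmapped.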
 
Hi,
try mapping more subordinate IDs (typically 65535) instead of the 64530 you have at the moment.
 
Hi,
try mapping more subordinate IDs (typically 65535) instead of the 64530 you have at the moment.
Thanks. After testing, 65535 seems a little too large (the LXC won't boot with it). By the way, I just happened to change the mapping boundary from 1001 to 1005, and it actually works. I'm a little confused about the reason.
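
A plausible reason the count of 65535 is "too large": with the last range still starting at container ID 1002, the host side of the map would overflow the subordinate allocation.

Code:
# Host side of the last range with count 65535:
#   101002 .. 101002 + 65535 - 1 = 166536
# root:100000:65536 only grants host IDs 100000..165535, so the map cannot
# be applied and the container won't start. The largest count that still
# fits is 64534, which covers container IDs 1002..65535 exactly.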
 
After much trial and error, and trying to read the corresponding parts of https://pve.proxmox.com/wiki/Unprivileged_LXC_containers, I came here and found this to be correct.

Like the previous user, I had been trying with UID 1000 and got similar errors to those above. I changed to 1005 and suddenly it works. Why? It makes no sense. EDIT: after more trial and error I found that it actually works from UID/GID 1004. But why?
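
A worked explanation for the 1004/1005 behaviour, assuming the last range kept its count of 64530 from the first post: what matters is the highest mapped container ID. Debian/Ubuntu's sshd privilege-separation user runs with group nogroup (GID 65534), and setgroups() fails with EINVAL when a GID is not mapped in the user namespace.

Code:
# Total container IDs covered = <boundary> + 1 + 64530:
#   boundary 1000: 1000 + 1 + 64530 = 65531 -> covers 0..65530; GID 65534 unmapped, sshd fails
#   boundary 1001: 1001 + 1 + 64530 = 65532 -> covers 0..65531; GID 65534 unmapped, sshd fails
#   boundary 1004: 1004 + 1 + 64530 = 65535 -> covers 0..65534; GID 65534 mapped, sshd works
#   boundary 1005: 1005 + 1 + 64530 = 65536 -> covers 0..65535; full coverage
# Hence "fatal: setgroups: Invalid argument [preauth]" for boundaries below 1004.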

Most wiki pages here are quite clear and well written, but https://pve.proxmox.com/wiki/Unprivileged_LXC_containers is the exception; it is all but unreadable and could really do with a rewrite and some practical examples.

Yet another edit: I found another explanation at https://wiki.debian.org/LXC that worked:


/etc/pve/lxc/*.conf
Code:
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

/etc/sub[ug]id
Code:
root:100000:65536
root:1000:1
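
A quick coverage check of why this map works:

Code:
# Container side: 1000 + 1 + 64535 = 65536 -> IDs 0..65535 fully covered, no gaps.
# Host side of the last range: 101001 .. 101001 + 64535 - 1 = 165535,
# which stays inside root:100000:65536 (host IDs 100000..165535).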
 
So, next step: I need to link UID 111 and 10 UIDs in the range 1000-1010.

I'm trying this, but it causes the CT not to start ... and there's no info anywhere about what is wrong with it:

Code:
lxc.idmap: u 0 100000 111
lxc.idmap: g 0 100000 111
lxc.idmap: u 111 111 1
lxc.idmap: g 111 111 1
lxc.idmap: u 112 100112 1000
lxc.idmap: g 112 100112 1000
lxc.idmap: u 1000 1000 10
lxc.idmap: g 1000 1000 10
lxc.idmap: u 1011 101011 64525
lxc.idmap: g 1011 101011 64525
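
Incidentally, two of those ranges overlap on the container side, which by itself is enough to keep the CT from starting (the kernel rejects overlapping uid_map/gid_map entries):

Code:
# u 112  100112 1000 -> covers container IDs 112..1111
# u 1000 1000   10   -> covers container IDs 1000..1009, already inside 112..1111
# Container-side ranges must be disjoint; the third range has to stop at 999:
# u 112  100112 888  -> covers container IDs 112..999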

I've searched all over Google, Reddit, Proxmox, and LXC, but I can't find any concrete answers or examples.
Does anyone actually understand this stuff?
Is there a better way of sharing disk space between a host with a huge ZFS array and virtual machines / containers?
Like maybe just run PVE off a USB stick or a small disk, pass all the drives through to a VM, and have that handle the ZFS array?
This is quite frustrating.
 
I've searched all over Google, Reddit, Proxmox, and LXC, but I can't find any concrete answers or examples.
Does anyone actually understand this stuff?
Have you edited /etc/subuid and /etc/subgid accordingly? [0]

If you're not sure, please post the contents of those files here.



EDIT: sorry, I hadn't seen your other post :)

But the contents of those files look wrong to me;
you need to add the specific IDs you want in there.

To answer your other question:
Is there a better way of sharing disk space between a host with a huge ZFS array and virtual machines / containers?
Like maybe just run PVE off a USB stick or a small disk, pass all the drives through to a VM, and have that handle the ZFS array?

You can also use lxc.mount.entry (searching the forum and the wiki will give you some examples).
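
For illustration, a bind mount via a raw LXC key might look like the sketch below; the host path and mount point are made up for the example, and the destination is given relative to the container rootfs (no leading slash):

Code:
# /etc/pve/lxc/<container_id>.conf (illustrative paths)
# Bind-mount the host directory /tank/media to /mnt/media inside the container;
# create=dir makes the target directory if it is missing.
lxc.mount.entry: /tank/media mnt/media none rbind,create=dir 0 0

With an ID-remapped container, the owners of the shared files on the host must fall inside the mapped ranges, or they will show up as nobody:nogroup in the container.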

I changed to 1005 and suddenly it works. Why? It makes no sense. EDIT: after more trial and error I found that it actually works from UID/GID 1004. But why?
Do you already have users with these UIDs on your host? (cat /etc/passwd)

[0]: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
 
Yes, the sub*id files should be OK:

Code:
root:100000:65536
root:1000:10
root:111:1

The contents look wrong - you mean the mapping details? Wrong in what way? What would they have to look like to be right?

I simply need to pass through UIDs 111 and 1000-1010.
I have done it the way the example on https://pve.proxmox.com/wiki/Unprivileged_LXC_containers (which I already linked to) shows, but to be honest, it is really poorly written, and I have a very hard time understanding what it means.

Would it be hard to provide a couple of different examples, like the one I need here?

PS. Actually, it doesn't seem to matter whether the UID exists on the host or in the container; the first example I made worked fine without any user actually having been defined for the UID.
 
Would it be hard to provide a couple of different examples, like the one I need here?

Here, take a look at this helper script on GitHub [0].

For example, mapping a single UID is simple:

Code:
# Add to /etc/pve/lxc/<container_id>.conf:
lxc.idmap: u 0 100000 111
lxc.idmap: g 0 100000 111
lxc.idmap: u 111 111 1
lxc.idmap: g 111 111 1
lxc.idmap: u 112 100112 65424
lxc.idmap: g 112 100112 65424

# Add to /etc/subuid:
root:111:1

# Add to /etc/subgid:
root:111:1

To additionally map UID 1001:
Code:
# Add to /etc/pve/lxc/<container_id>.conf:
lxc.idmap: u 0 100000 111
lxc.idmap: g 0 100000 111
lxc.idmap: u 111 111 1
lxc.idmap: g 111 111 1
lxc.idmap: u 112 100112 889
lxc.idmap: g 112 100112 889
lxc.idmap: u 1001 1001 1
lxc.idmap: g 1001 1001 1
lxc.idmap: u 1002 101002 64534
lxc.idmap: g 1002 101002 64534

# Add to /etc/subuid:
root:111:1
root:1001:1

# Add to /etc/subgid:
root:111:1
root:1001:1

If you wanted to use the script to generate a map with the IDs you want, you could run:
./idmapper.py 111 $(seq 1000 1010)

Code:
lxc.idmap: u 0 100000 111
lxc.idmap: g 0 100000 111
lxc.idmap: u 111 111 1
lxc.idmap: g 111 111 1
lxc.idmap: u 112 100112 888
lxc.idmap: g 112 100112 888
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 0
lxc.idmap: g 1001 101001 0
lxc.idmap: u 1001 1001 1
lxc.idmap: g 1001 1001 1
lxc.idmap: u 1002 101002 0
lxc.idmap: g 1002 101002 0
lxc.idmap: u 1002 1002 1
lxc.idmap: g 1002 1002 1
lxc.idmap: u 1003 101003 0
lxc.idmap: g 1003 101003 0
lxc.idmap: u 1003 1003 1
lxc.idmap: g 1003 1003 1
lxc.idmap: u 1004 101004 0
lxc.idmap: g 1004 101004 0
lxc.idmap: u 1004 1004 1
lxc.idmap: g 1004 1004 1
lxc.idmap: u 1005 101005 0
lxc.idmap: g 1005 101005 0
lxc.idmap: u 1005 1005 1
lxc.idmap: g 1005 1005 1
lxc.idmap: u 1006 101006 0
lxc.idmap: g 1006 101006 0
lxc.idmap: u 1006 1006 1
lxc.idmap: g 1006 1006 1
lxc.idmap: u 1007 101007 0
lxc.idmap: g 1007 101007 0
lxc.idmap: u 1007 1007 1
lxc.idmap: g 1007 1007 1
lxc.idmap: u 1008 101008 0
lxc.idmap: g 1008 101008 0
lxc.idmap: u 1008 1008 1
lxc.idmap: g 1008 1008 1
lxc.idmap: u 1009 101009 0
lxc.idmap: g 1009 101009 0
lxc.idmap: u 1009 1009 1
lxc.idmap: g 1009 1009 1
lxc.idmap: u 1010 101010 0
lxc.idmap: g 1010 101010 0
lxc.idmap: u 1010 1010 1
lxc.idmap: g 1010 1010 1
lxc.idmap: u 1011 101011 64525
lxc.idmap: g 1011 101011 64525

# Add to /etc/subuid:
root:111:1
root:1000:1
root:1001:1
root:1002:1
root:1003:1
root:1004:1
root:1005:1
root:1006:1
root:1007:1
root:1008:1
root:1009:1
root:1010:1

# Add to /etc/subgid:
root:111:1
root:1000:1
root:1001:1
root:1002:1
root:1003:1
root:1004:1
root:1005:1
root:1006:1
root:1007:1
root:1008:1
root:1009:1
root:1010:1

These configs should work, but a lot of this is redundant, since you can just map 10 of them at once if they're consecutive IDs:

Code:
lxc.idmap: u 0 100000 111
lxc.idmap: g 0 100000 111
# we map 1 uid starting from uid 111 onto 111, so 111 → 111
lxc.idmap: u 111 111 1
lxc.idmap: g 111 111 1
# map the rest until 1000
lxc.idmap: u 112 100112 888
lxc.idmap: g 112 100112 888
# map from 1000 to 1010
lxc.idmap: u 1000 1010 10
lxc.idmap: g 1000 1010 10
# map the rest of ids
lxc.idmap: u 1011 101011 64525
lxc.idmap: g 1011 101011 64525

and for /etc/subuid and /etc/subgid:
Code:
root:111:1
root:1000:10
root:100000:65536

[0]: https://github.com/ddimick/proxmox-lxc-idmapper
 
Actually no, that gave the same error I got before. From the PVE GUI: "Error: startup for container '251' failed."

In /var/log/lxc/lxc-monitord.log:

Code:
lxc-monitord 20210222231915.892 INFO     lxc_monitord - cmd/lxc_monitord.c:lxc_monitord_sock_accept:213 - Accepted client file descriptor 7. Number of accepted file descriptors is now 1

/etc/pve/lxc/251.conf
Code:
arch: amd64
cores: 4
hostname: mediaserver
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=02:0A:25:6D:D5:E7,ip=dhcp,type=veth
ostype: debian
rootfs: local-zfs:subvol-251-disk-0,size=8G
swap: 512
unprivileged: 1
lxc.idmap: u 0 100000 111
lxc.idmap: g 0 100000 111
lxc.idmap: u 111 111 1
lxc.idmap: g 111 111 1
lxc.idmap: u 112 100112 888
lxc.idmap: g 112 100112 888
lxc.idmap: u 1000 1010 10
lxc.idmap: g 1000 1010 10
lxc.idmap: u 1011 101011 64525
lxc.idmap: g 1011 101011 64525

/etc/sub*id
Code:
root:100000:65536
root:1000:10
root:111:1
 
Looks like you're missing a mapping for `1010`.
Either bump the `1000` entries to contain 11 users (1000 through 1010 inclusive), or start the next range at `1010`.

Yay for counting from zero ;-)
 
Also, are you sure you want to map the user `1000` to be the user `1010`? If so, I think the `subu/gid` ranges also need to be adapted.
EDIT: Just read the backlog. Yes, you want to change the lines from `x 1000 1010 10` to `x 1000 1000 10`, start the range after it at `1010`, and bump its count to 64526.
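
Spelled out against the config above, that suggestion makes the tail of the map read as follows (a rendering of the correction, with the host base shifted to 101010 so the 64526 IDs still fit inside root:100000:65536):

Code:
lxc.idmap: u 1000 1000 10
lxc.idmap: g 1000 1000 10
lxc.idmap: u 1010 101010 64526
lxc.idmap: g 1010 101010 64526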
 
Looks like you're missing a mapping for `1010`.
Either bump the `1000` entries to contain 11 users (1000 through 1010 inclusive), or start the next range at `1010`.

Yay for counting from zero ;-)
Hehe. At least I was not the only one to fall into that trap ;)
 
Also, are you sure you want to map the user `1000` to be the user `1010`? If so, I think the `subu/gid` ranges also need to be adapted.
EDIT: Just read the backlog. Yes, you want to change the lines from `x 1000 1010 10` to `x 1000 1000 10`, start the range after it at `1010`, and bump its count to 64526.

Ahh, now I think I understand. So it's actually 'x from-ID-in-container to-ID-on-host number-of-IDs'; I had understood it as 'x from-ID-in-container to-ID-in-container number-of-IDs' -- if that makes sense.

So, I want 111 and 1000-1009 (10 IDs ;)) linked between container and host:

Code:
lxc.idmap: u 0 100000 111
lxc.idmap: g 0 100000 111
lxc.idmap: u 111 111 1
lxc.idmap: g 111 111 1
lxc.idmap: u 112 100112 888
lxc.idmap: g 112 100112 888
lxc.idmap: u 1000 1000 10
lxc.idmap: g 1000 1000 10
lxc.idmap: u 1010 101011 64525
lxc.idmap: g 1010 101011 64525

Correct like that?

And with this I don't need to change /etc/sub*id, right?
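
For completeness, a sanity check of that last map with the same counting rule as above, assuming the goal is full 0..65535 coverage:

Code:
# 111 + 1 + 888 + 10 + 64525 = 65535 -> container IDs 0..65534 only;
# ID 65535 is left unmapped, and a host base of 101011 would overflow
# root:100000:65536 with a larger count. Per the correction above, the
# last two lines should be:
#   lxc.idmap: u 1010 101010 64526
#   lxc.idmap: g 1010 101010 64526
# The sub*id entries already posted (root:100000:65536, root:1000:10,
# root:111:1) cover all of these host-side ranges, so /etc/sub*id can
# indeed stay unchanged.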
 
