Podman in rootless mode on LXC container

I have a problem starting podman as a non-root user on LXC. I have tried it on Debian- and Fedora-based LXC containers without success. The installation went fine and everything works as root, but every podman command from a non-root account ends with:

cockpit@Test:~$ podman info
ERRO[0000] running `/usr/bin/newuidmap 4852 0 1000 1 1 165536 65536`: newuidmap: write to uid_map failed: Operation not permitted
Error: cannot set up namespace using "/usr/bin/newuidmap": exit status 1
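
For reference, a quick sanity check from inside the container (my assumption: newuidmap needs the setuid bit, or a CAP_SETUID file capability, to be allowed to write uid_map; paths are Debian's):

Bash:
ls -l /usr/bin/newuidmap /usr/bin/newgidmap   # look for the setuid 's' bit
getcap /usr/bin/newuidmap /usr/bin/newgidmap  # or file capabilities (needs libcap2-bin)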

With sudo (as root) it works:

cockpit@Test:~$ sudo podman info
[sudo] password for cockpit:
host:
  arch: amd64
  buildahVersion: 1.28.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 98.59
    systemPercent: 0.41
    userPercent: 1
  cpus: 2
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  hostname: Test
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.5.11-8-pve
  linkmode: dynamic
  logDriver: journald
  memFree: 8258920448
  memTotal: 8589934592
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8.1-1+deb12u1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 0
  swapTotal: 0
  uptime: 0h 10m 36.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/local/bin/overlayzfsmount
      Package: Unknown
      Version: 'mount from util-linux 2.38.1 (libmount 2.38.1: selinux, smack, btrfs,
        verity, namespaces, assert, debug)'
    overlay.mountopt: nodev
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 32212254720
  graphRootUsed: 1242038272
  graphStatus:
    Backing Filesystem: zfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Thu Jan 1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.19.8
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

The user has its subordinate IDs defined:

cockpit@Test:~$ grep cockpit /etc/subuid /etc/subgid
/etc/subuid:cockpit:165536:65536
/etc/subgid:cockpit:165536:65536

The same installation works flawlessly in a VM, but in an LXC I can't get it to work.
 
When I look into the LXC configuration file, I see:

root@proxmox:~# cat /etc/pve/lxc/106.conf
arch: amd64
cores: 2
features: nesting=1
hostname: Test
memory: 8192
net0: name=eth0,bridge=vmbr2,firewall=1,hwaddr=BC:24:11:49:21:6B,ip=dhcp,type=veth
ostype: debian
rootfs: local-zfs:subvol-106-disk-0,size=30G
swap: 512
unprivileged: 1
root@HomeServer:~#

Doesn't it lack an lxc.idmap definition?
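
(For reference: even without an explicit lxc.idmap, Proxmox gives an unprivileged CT a default mapping of 65536 IDs starting at 100000, which can be checked from inside the container. If I read it right, that leaves no room for the subordinate range 165536:65536 that rootless podman is trying to map, hence the failed write to uid_map.)

Bash:
LXC> cat /proc/self/uid_map
         0     100000      65536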
 
I am encountering the same problem if the LXC is unprivileged.

But if the container is privileged, then rootless podman seems to work.
 
I haven't tried that, because it seems a little senseless in my opinion. In such a setup, podman itself and every container running at LXC root level can end up with root privileges on the host. I have found a few topics where running rootless podman in an unprivileged LXC container is not recommended or even discouraged, because the nested isolation doesn't work. Proxmox certainly doesn't support it.

So I gave up and run podman in a light Debian VM with Cockpit. It works fine but consumes about 0.5 GB more RAM.
 
I think it depends on your use case. For me, everything of value would already be inside the LXC; if it gets compromised, it doesn't make much difference that technically there's a host above it that's probably fine. To me the value of an LXC is easy backups and snapshots I can revert when testing changes. If it's either/or, I'd rather have security handled at the Podman level. I haven't come across anything suggesting that running a privileged LXC is WORSE than running packages like podman natively on the host, but it's not the easiest topic to research.
 
I got it working yesterday: an unprivileged Alpine LXC running Podman as a non-root user. It requires nesting to be enabled; I haven't had time to look into why yet.
(PVE: Command on Proxmox host, LXC: Command on LXC)

Give Proxmox root access to more sub UIDs/GIDs:
Bash:
PVE> vi /etc/subuid
root:100000:200000   # <user>:<start_uid>:<count>
PVE> vi /etc/subgid
root:100000:200000

Map UIDs/GIDs of container <VMID> to host UIDs/GIDs:
Bash:
PVE> vi /etc/pve/lxc/<VMID>.conf
# <container_uid> <host_uid> <count>
lxc.idmap: u 0 100000 165536 # uids 0..165535 (container) -> 100000..265535 (host)
lxc.idmap: g 0 100000 165536 # gids, same mapping
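
If the numbers look arbitrary, here is the arithmetic as I understand it (based on the subuid values used further down in this post):

Bash:
# container needs uids 0..65535 (base) plus 100000..165535 (the sub-range
# given to the podman user below)          => count = 165536
# host root delegates 100000..299999 (root:100000:200000 above), and the
# mapped host range 100000..265535 fits inside it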

Test whether the container's UIDs/GIDs are mapped:
Bash:
LXC> cat /proc/self/uid_map
0 100000 165536

Give the unprivileged users in the LXC that will use Podman access to sub UIDs/GIDs:
Bash:
LXC> vi /etc/subuid
username:100000:65536
LXC> vi /etc/subgid
username:100000:65536

Allow the LXC access to the tun device (required by slirp4netns):
Bash:
PVE> vi /etc/pve/lxc/<VMID>.conf
lxc.cgroup2.devices.allow: c 10:200 rwm  # cgroup2 for PVE >= 7.0
lxc.mount.entry: /dev/net dev/net none bind,create=dir
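
After restarting the container, it's worth verifying the device node actually made it inside (10:200 is the tun device's major/minor; the owner shows as nobody in an unprivileged CT since host root is unmapped):

Bash:
LXC> ls -l /dev/net/tun
crw-rw-rw- 1 nobody nobody 10, 200 ... /dev/net/tun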

Install Podman:
Bash:
LXC> apk add podman py3-pip shadow shadow-subids
LXC> pip3 install podman-compose --break-system-packages
LXC> rc-update add cgroups
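
rc-update only enables the service for the next boot; to avoid rebooting, the cgroups service can presumably be started right away:

Bash:
LXC> rc-service cgroups start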

Log in as the non-root user and try to run Podman:
Bash:
LXC> podman run --rm hello-world

Alpine seems to require the shadow and shadow-subids packages for /usr/bin/newuidmap to work.
I don't know yet whether everything works, but running podman-compose with a Gotify config I had lying around worked flawlessly.

I hope this helps :)
 
A few hard-earned tips for any lost souls ending up here:

Just to repeat: in any LXC running podman, go to Options > Features and double-check that Nesting is enabled (plus Keyctl if it's unprivileged). Failing to do this caused me great suffering.

You can get the latest Podman (better networking/ZFS) directly from Debian by using APT pinning. Just add the testing repo to
/etc/apt/sources.list
Code:
deb https://ftp.debian.org/debian testing contrib main non-free non-free-firmware


Then make /etc/apt/preferences.d/pref and add:
Code:
Package: *
Pin: release a=testing
Pin-Priority: 50


Then run:
Bash:
apt update
apt-cache policy
which should show the Testing/Trixie repos marked with a 50. Now you have the repo, but it won't be used unless you manually install a package from it. So apt install -t trixie podman will install/update Podman 4.9+ and its dependencies (including pasta) from the testing repo.
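
To double-check the pin before pulling anything in (podman here is just the example package):

Bash:
apt-cache policy podman
# the testing version should be listed at priority 50, while the candidate
# stays on the bookworm version until you pass -t trixie explicitly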



Also, a lot of people get stuck on weird errors when doing rootless, because using sudo/su to switch between accounts can mess with some environment variables podman relies on. Luckily, if you install the systemd-container package you can run
Bash:
machinectl shell --uid <UsernameHere>
for a clean way to launch user sessions from within your root session, and back out again by pressing Ctrl+] three times within a second.
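
A quick way to confirm the resulting session is sane for rootless podman (my assumption: a proper PAM login, which machinectl shell performs, should set these correctly):

Bash:
id -u                    # your unprivileged uid, not 0
echo $XDG_RUNTIME_DIR    # should be /run/user/<your uid>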



When allowing TUN networking for the LXC to pass through to rootless podman, I like to include both cgroup versions at the bottom of
/etc/pve/lxc/<ContainerIDHere>.conf, just in case:
Code:
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
If you've already made snapshots of that LXC and these lines aren't working, you may also need to add them to the base section above the snapshot sections.



Getting podman containers to run at boot is kind of weird. Unlike docker it's daemonless, so nothing runs by default. The old way of running containers automatically is to generate a separate systemd service for each container. That option has been marked as deprecated in newer versions (to some minor uproar), although they've said there are no current plans to remove it entirely. The offered replacement is Quadlets, which are a bit like docker-compose configs in that the whole thing is written to go directly into systemd, but they're podman-specific and have completely different syntax. I like to avoid both and just make a single systemd service that runs at boot and starts all containers that have --restart=always set, making things about as simple as docker.
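
For the curious, a minimal Quadlet unit looks roughly like this (file name and image are just examples; needs Podman 4.4+). After a systemctl --user daemon-reload, Quadlet generates a matching web.service you can start like any other unit:

Code:
# ~/.config/containers/systemd/web.container
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target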

First, linger needs to be enabled for the user running podman, so services can run automatically as that user without anyone needing to be logged in:
Bash:
loginctl enable-linger <RootlessUsernameHere>

Then as that user create ~/.config/systemd/user/podman-restart.service with the contents:
Code:
[Unit]
Description=Podman Start All Containers With Restart Policy Set To Always
Documentation=man:podman-start(1)
StartLimitIntervalSec=0
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
Environment=LOGGING="--log-level=info"
ExecStart=podman $LOGGING start --all --filter restart-policy=always
ExecStop=/bin/sh -c 'podman $LOGGING stop $(podman container ls --filter restart-policy=always -q)'

[Install]
WantedBy=default.target

Then start it with:
Bash:
systemctl --user daemon-reload
systemctl --user start podman-restart
systemctl --user enable podman-restart
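
Note this only picks up containers that were created with the matching restart policy, e.g. (image is just an example):

Bash:
podman run -d --name gotify --restart=always -p 8080:80 docker.io/gotify/server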
 
It's also possible to run containers from compose files, which don't support every podman-specific feature (like pods) but can be pretty helpful for things only published in that form, like the SWAG proxy. Dockge is a great web UI for that. It uses a docker-compatibility socket to manage containers, but it doesn't need to stay running once everything is set up, and it works great under rootless.

First, install the podman-docker package. I use the testing repo since that's what I'm using for podman:
Bash:
apt install -t trixie podman-docker

Then set up the socket and add its variable to bash:
Bash:
systemctl --user enable podman.socket
systemctl --user start podman.socket
systemctl --user status podman.socket
export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock
echo 'export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock' >> ~/.bash_profile


Then just install Dockge normally, but before you run its compose.yaml change the volume to
/var/run/user/<UserIDHere>/podman/podman.sock:/var/run/docker.sock
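
So in Dockge's compose.yaml, the socket volume would end up looking something like this (uid 1000 assumed):

Code:
volumes:
  - /var/run/user/1000/podman/podman.sock:/var/run/docker.sock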

Haven't tried it with Yacht, but it also manages podman via the socket, so YMMV.



If you need to serve privileged TCP ports from rootless podman (e.g. a reverse proxy listening on 443) but don't want to give rootless users access to those ports, the podman documentation points out that redir is a rad little alternative.

You just enable linger if you haven't already, then apt install redir, and put the following in a new file like
/etc/systemd/system/redir443.service
Code:
[Unit]
Description=Redirect tcp port 443 to 1443 with redir

[Service]
ExecStart=/bin/redir -sn :443 127.0.0.1:1443

[Install]
WantedBy=multi-user.target
to redirect traffic coming in on host port 443 so a podman service can pick it up on 1443. Rinse and repeat for any other ports you want, then enable:

Bash:
systemctl daemon-reload
systemctl enable --now redir443.service

BTW, if you just want to test it until the next reboot, you can skip all that and run:
Bash:
redir :443 127.0.0.1:1443
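
A quick way to check the forwarding (assuming a service is actually listening on 1443):

Bash:
curl -vk https://127.0.0.1/   # should reach whatever is bound to 1443
ss -tlnp | grep ':443 '       # redir itself should be holding port 443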
 
