Docker inside LXC (net.ipv4.ip_unprivileged_port_start error)

It’s a workaround, not a solution, for the moment.
And a rather nasty one, since it removes AppArmor's additional security mechanism for the container. I can't believe people think that this should be adopted as a fix in an update of Proxmox VE. Maybe it's time to change the wording in the docs from "it's recommended to run docker/podman inside VMs" to "don't use docker/podman etc. in LXCs, it will break". @Neobin filed a ticket for this some years ago, maybe time to revive it? https://bugzilla.proxmox.com/show_bug.cgi?id=4712
 
That problem of "resource allocation" isn't really one. Of course, if you want one VM for each docker container you run, you will end up using more RAM (but not necessarily way more, see https://pve.proxmox.com/wiki/Dynamic_Memory_Management#KSM ). But normally you wouldn't do this; you would run all your docker containers in one lightweight VM. My main docker Debian Trixie VM is configured with 4 GB RAM; right now it uses 1.5 GB. And this could probably be reduced even more without changing anything, since Linux always uses part of the memory as cache. By changing the VM OS to Alpine, an even more lightweight VM should be possible. Another benefit of fitting all docker containers into one VM is that you need to do system maintenance (like updates etc.) only once, instead of doing housekeeping for every LXC instance.
I prefer to save on my time budget instead of saving RAM for the sake of saving RAM.
But if, for the sake of "saving resources", you prefer to waste your private time troubleshooting after breaking changes, be my guest.
I think it's an uncontroversial statement that VMs require more resources than LXCs. By definition, the VM will always need a resource allocation that is separate from the host. Anything you do to reduce resource usage in a VM (e.g. use Alpine) can also be done with an LXC, but the resources you have to reserve for the VM itself will always be there.

For many users getting started with a homelab, their time is cheaper than hardware. My docker host is an LXC because I started with a very old PC with limited RAM and CPU. I could have put it in a VM, but I wouldn't have been able to run as many services on that hardware.
 
And a rather nasty one, since it removes AppArmor's additional security mechanism for the container. I can't believe people think that this should be adopted as a fix in an update of Proxmox VE. Maybe it's time to change the wording in the docs from "it's recommended to run docker/podman inside VMs" to "don't use docker/podman etc. in LXCs, it will break". @Neobin filed a ticket for this some years ago, maybe time to revive it? https://bugzilla.proxmox.com/show_bug.cgi?id=4712
I think downgrading to 1.7.28-1 is a better workaround at the moment.

The *recommendation* to use a VM is valid, but nested containers are supposed to work. As long as the upstream projects support nested containers, Proxmox shouldn't be telling users they can't use them.
 
Maybe it's time to change the wording in the docs from "it's recommended to run docker/podman inside VMs" to "don't use docker/podman etc. in LXCs, it will break". @Neobin filed a ticket for this some years ago, maybe time to revive it? https://bugzilla.proxmox.com/show_bug.cgi?id=4712
And the solid reason is...
Really curious, personal preferences aside. I'm an IT kind of guy; I can understand technical arguments.
Better isolation in VMs? Not everybody needs it. It's no worse than docker on bare metal.
Unstable? I haven't seen it in the last 5-6 years. All the links to issues look more like individual ones. The current one looks more like overprotection (https://github.com/lxc/incus/pull/2624). As I understand it, it literally broke the `nesting` option.
Insecure? LXC containers in general are less secure compared to VMs. Although the Proxmox web UI defaults (unlike the pct tool, btw) create pretty secure LXC containers, the options to make them full of holes are still there.

For now I just see something like a holy war against docker in LXC containers. Don't because don't.
 
Just to add another data point here — exact same breakage also hit Ubuntu 24.04 (Noble) LXC on Proxmox.

Code:
open sysctl net.ipv4.ip_unprivileged_port_start file: reopen fd N: permission denied: unknown
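Before downgrading, it can help to confirm that the sysctl is at least readable inside the container. A minimal check (my own addition, assuming a normal /proc mount in the LXC):

```shell
#!/bin/sh
# Read the sysctl that containerd fails to reopen. A plain read usually
# succeeds even in an unprivileged LXC (the kernel default is 1024);
# the containerd error comes from a different code path (reopening the
# fd for writing), so a successful read here does not rule the bug out.
cat /proc/sys/net/ipv4/ip_unprivileged_port_start
```

If even this read fails, the problem is the container's /proc/sys setup rather than the containerd regression discussed here.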

What worked for me:
Code:
apt install -y --allow-downgrades containerd.io=1.7.28-1~ubuntu.24.04~noble
apt-mark hold containerd.io
systemctl restart containerd docker wings


After downgrading + hold — Docker & Wings start fine again inside the unprivileged LXC.
 
apt-mark hold containerd.io

As soon as a working update is available, you should revert this with apt-mark unhold containerd.io, because otherwise no future update will be installed. And in general it's not a good idea to sit on old versions forever, since they may well contain security or other bugs not present in the newest version.
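An alternative to apt-mark hold (my own suggestion, not from this thread) is an apt pin that stays on the 1.7.28 series but would still accept 1.7.28 point releases. Drop something like this into /etc/apt/preferences.d/ and delete the file once a fixed build ships:

```
# /etc/apt/preferences.d/containerd-pin
# Keep containerd.io on the known-good 1.7.28 series for now.
Package: containerd.io
Pin: version 1.7.28-*
Pin-Priority: 1001
```

A priority above 1000 also permits the initial downgrade itself via a plain apt install/upgrade.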
 
For LXC using Debian 12 (Bookworm), this worked for me:
apt install containerd.io=1.7.28-1~debian.12~bookworm
 
And a rather nasty one, since it removes AppArmor's additional security mechanism for the container. I can't believe people think that this should be adopted as a fix in an update of Proxmox VE. Maybe it's time to change the wording in the docs from "it's recommended to run docker/podman inside VMs" to "don't use docker/podman etc. in LXCs, it will break". @Neobin filed a ticket for this some years ago, maybe time to revive it? https://bugzilla.proxmox.com/show_bug.cgi?id=4712
With all due respect to the maintainers*, as soon as GPUs are cheap enough that we can all allocate one to each VM, or cards supporting vGPUs become affordable, that may be the right recommendation. Not all of us can afford this.

[edit]: * Your jobs are hard enough and mostly thankless, yet most of us owe you a great deal
 
With all due respect to the maintainers*, as soon as GPUs are cheap enough that we can all allocate one to each VM, or cards supporting vGPUs become affordable, that may be the right recommendation. Not all of us can afford this.
vGPUs have nothing to do with the question of how to run application containers like docker or podman. There are a lot of docker containers (e.g. paperless, nginx-proxy-manager, pangolin...) you can run without needing a GPU. And for software where you might want to utilize the GPU, there is often a way to set it up without docker; e.g. Jellyfin describes such a setup for Debian on their website (which can be used for a Debian LXC): https://jellyfin.org/docs/general/installation/linux/
You also don't need one GPU for each docker VM (if you decide to have a dedicated GPU); it's enough to have one VM where all GPU-specific workloads can share it.

I'm aware that there are still apps (e.g. immich) where utilization of an iGPU via LXCs might be desirable but which don't support an installation without docker. But even then it's a good habit to run most docker-specific stuff in a VM and only use docker in LXCs where it's not possible or supported another way. This reduces the probability of getting hit by an update-related breakage of docker inside LXCs.
 
Does anyone have a solution for Alpine OS?
I have containerd.io version 2.1.3-2 installed (which is not the latest version), but I still have this issue.
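For what it's worth (a generic sketch, not Alpine-specific advice): 2.1.3-2 is still well above the known-good 1.7.28-1 reported earlier in this thread, so "not the latest" doesn't mean "old enough". You can sanity-check the ordering with `sort -V`:

```shell
#!/bin/sh
# Compare the installed version from the question against the known-good
# version reported earlier in this thread, using GNU version sort.
known_good="1.7.28-1"
installed="2.1.3-2"
oldest=$(printf '%s\n%s\n' "$known_good" "$installed" | sort -V | head -n1)
if [ "$oldest" = "$known_good" ]; then
    echo "$installed is newer than known-good $known_good"
fi
# prints: 2.1.3-2 is newer than known-good 1.7.28-1
```

Whether a 1.7.x build is still available for your Alpine setup depends on your package repositories; I haven't verified a specific version.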