Docker inside LXC (net.ipv4.ip_unprivileged_port_start error)

It’s a workaround, not a solution, at least for the moment.
And a rather nasty one, since it removes AppArmor's additional security mechanism for the container. I can't believe people think this should be adopted as a fix in a Proxmox VE update. Maybe it's time to change the wording in the docs from "it's recommended to run docker/podman inside VMs" to "don't use docker/podman etc. in LXCs, it will break". @Neobin filed a ticket for this some years ago, maybe it's time to revive it? https://bugzilla.proxmox.com/show_bug.cgi?id=4712
 
That problem of "resource allocation" isn't really one. Of course, if you want one VM for each docker container you run, you will end up using more RAM (but not necessarily much more, see https://pve.proxmox.com/wiki/Dynamic_Memory_Management#KSM ). But normally you wouldn't do this; you'd run all your docker containers in one lightweight VM. My main docker Debian Trixie VM is configured with 4 GB RAM; right now it uses 1.5 GB. And this could probably be reduced even further without changing anything, since Linux always uses part of the memory as cache. By changing the VM OS to Alpine, an even more lightweight VM should be possible. Another benefit of fitting all docker containers into one VM is that you only need to do system maintenance (updates etc.) once, instead of doing housekeeping for every LXC instance.
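As a rough check, the Proxmox host exposes via sysfs how much memory KSM is currently deduplicating (this snippet assumes the usual 4 KiB page size):
Code:
awk '{printf "KSM is currently sharing ~%.0f MiB\n", $1 * 4096 / 1024 / 1024}' /sys/kernel/mm/ksm/pages_sharing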
I prefer to save on my time budget instead of saving RAM for the sake of saving RAM.
But if, for the sake of "saving resources", you prefer to waste your private time troubleshooting after breaking changes, be my guest.
I think it's an uncontroversial statement that VMs require more resources than LXCs. By definition, a VM will always need a resource allocation that is separate from the host. Anything you do to reduce resource usage in a VM (e.g. using Alpine) can also be done with an LXC, but the resources you have to reserve for the VM itself will always be there.

For many users getting started with a homelab, their time is cheaper than hardware. My docker host is an LXC because I started with a very old PC with limited RAM and CPU. I could have put it in a VM, but I wouldn't have been able to run as many services on that hardware.
 
And a rather nasty one, since it removes AppArmor's additional security mechanism for the container. I can't believe people think this should be adopted as a fix in a Proxmox VE update. Maybe it's time to change the wording in the docs from "it's recommended to run docker/podman inside VMs" to "don't use docker/podman etc. in LXCs, it will break". @Neobin filed a ticket for this some years ago, maybe it's time to revive it? https://bugzilla.proxmox.com/show_bug.cgi?id=4712
I think downgrading to 1.7.28-1 is a better workaround at the moment.
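For example, on Debian 12 (Bookworm) something like this should do it (the version suffix depends on your distribution; see the Ubuntu and Debian examples further down):
Code:
apt install --allow-downgrades containerd.io=1.7.28-1~debian.12~bookworm
apt-mark hold containerd.io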

The *recommendation* to use a VM is valid, but nested containers are supposed to work. As long as the upstream projects support nested containers, Proxmox shouldn't be telling users they can't use them.
 
Maybe it's time to change the wording in the docs from "it's recommended to run docker/podman inside VMs" to "don't use docker/podman etc. in LXCs, it will break". @Neobin filed a ticket for this some years ago, maybe it's time to revive it? https://bugzilla.proxmox.com/show_bug.cgi?id=4712
And the solid reason is...
Really curious, personal preferences aside. I'm an IT guy, I can understand technical arguments.
Better isolation in VMs? Not everybody needs it. It's no worse than docker on bare metal.
Unstable? I haven't seen that in the last 5-6 years. All the links to issues look more like individual ones. The current one is more related to overprotection (https://github.com/lxc/incus/pull/2624). As I understand it, it literally broke the `nesting` option.
Insecure? LXCs in general are less secure compared to VMs. And although the Proxmox web UI (unlike the pct tool, btw) creates pretty secure LXC containers by default, the options to make them full of holes are still there.

For now I just see something like a holy war against docker in LXC containers. Don't, because don't.
 
Just to add another data point here — exact same breakage also hit Ubuntu 24.04 (Noble) LXC on Proxmox.

Code:
open sysctl net.ipv4.ip_unprivileged_port_start file: reopen fd N: permission denied: unknown

What worked for me:
Code:
# pin containerd to the last known-good version for Ubuntu 24.04 (Noble)
apt install -y --allow-downgrades containerd.io=1.7.28-1~ubuntu.24.04~noble
# prevent apt from upgrading it again until the breakage is fixed
apt-mark hold containerd.io
# restart the affected services so they pick up the downgraded runtime
systemctl restart containerd docker wings


After downgrading + hold — Docker & Wings start fine again inside the unprivileged LXC.
 
apt-mark hold containerd.io

As soon as a working update is available, you should revert this with apt-mark unhold containerd.io, because otherwise no future update will be installed. And in general it's not a good idea to sit on old versions forever, since they might (and will) contain security or other bugs that are fixed in the newest version.
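That is, once a fixed release is available:
Code:
apt-mark unhold containerd.io
apt update && apt upgrade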
 
For LXC using Debian 12 (Bookworm), this worked for me:
apt install containerd.io=1.7.28-1~debian.12~bookworm
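As noted above, it may also be worth holding the package until a proper fix lands:
Code:
apt-mark hold containerd.io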
 
And a rather nasty one, since it removes AppArmor's additional security mechanism for the container. I can't believe people think this should be adopted as a fix in a Proxmox VE update. Maybe it's time to change the wording in the docs from "it's recommended to run docker/podman inside VMs" to "don't use docker/podman etc. in LXCs, it will break". @Neobin filed a ticket for this some years ago, maybe it's time to revive it? https://bugzilla.proxmox.com/show_bug.cgi?id=4712
With all due respect to the maintainers*, as soon as GPUs get cheap enough that we can all allocate one to each VM, or cards supporting vGPUs become affordable, that may be the right recommendation. Not all of us can afford this.

[edit]: * Your jobs are hard enough and mostly thankless, yet most of us owe you a great deal.
 
With all due respect to the maintainers*, as soon as GPUs get cheap enough that we can all allocate one to each VM, or cards supporting vGPUs become affordable, that may be the right recommendation. Not all of us can afford this.
vGPUs have nothing to do with the question of how to run application containers like docker or podman. There are a lot of docker containers (e.g. paperless, nginx-proxy-manager, pangolin...) you can run without needing a GPU. And for software where you might want to utilize the GPU, there is often a way to set it up without docker; e.g. jellyfin describes such a setup for Debian on their website (which can be used for a Debian LXC): https://jellyfin.org/docs/general/installation/linux/
You also don't need one GPU for each docker VM (if you decide to have a dedicated GPU); it's enough to have one VM where all GPU-specific workloads can share it.

I'm aware that there are still apps (e.g. immich) where the utilization of an iGPU via LXCs might be desirable but which don't support an installation without docker. But even then it's a good habit to run most docker-specific stuff in a VM and only use docker in LXCs where it's not possible or supported another way. This reduces the probability of getting hit by an update-related breakage of docker inside LXCs.
 
Does anyone have a solution for Alpine OS?
I have containerd.io version 2.1.3-2 installed (which is not the latest version), but I still have this issue.
 
On your Proxmox host, navigate to
Code:
cd /etc/pve/lxc

Here you will find your LXC configuration files.

Code:
nano 110.conf
(substitute the ID of the LXC you want to edit)

Move your cursor to below the line:

tags:

and enter this as a new line:
Code:
lxc.apparmor.profile: unconfined

Press Ctrl-X to save and follow the prompts ('Y', then 'Enter').

Reboot your LXC container (from your Proxmox host):
Code:
pct enter 110
reboot

or from inside your LXC container:
Code:
reboot
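
(Unless I'm mistaken, pct can also do this in one step:)
Code:
pct reboot 110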



edit: tested on Debian 13; this does resolve the problem, but it also removes a security layer.
edit2: https://github.com/opencontainers/runc/issues/4968#issuecomment-3500775431
 
vGPUs have nothing to do with the question of how to run application containers like docker or podman. There are a lot of docker containers (e.g. paperless, nginx-proxy-manager, pangolin...) you can run without needing a GPU. And for software where you might want to utilize the GPU, there is often a way to set it up without docker; e.g. jellyfin describes such a setup for Debian on their website (which can be used for a Debian LXC): https://jellyfin.org/docs/general/installation/linux/
You also don't need one GPU for each docker VM (if you decide to have a dedicated GPU); it's enough to have one VM where all GPU-specific workloads can share it.

I'm aware that there are still apps (e.g. immich) where the utilization of an iGPU via LXCs might be desirable but which don't support an installation without docker. But even then it's a good habit to run most docker-specific stuff in a VM and only use docker in LXCs where it's not possible or supported another way. This reduces the probability of getting hit by an update-related breakage of docker inside LXCs.

I would agree if it were that simple. Many actually useful applications need GPU resources. We need a way to provide these to the application in a reasonable manner. Sure, I can run the stuff that doesn't need a GPU in a VM running docker (that's what I do), but we are still left needing GPU resources for many others.

1. Sure, I can pass my GPU through to a VM, but that would cost me local console access because I would lose display output from Proxmox.

2. Most applications that come with a docker option actively discourage you from installing without docker. And if they do allow it, history has shown that the method may be discontinued by the maintainers because of the burden of supporting all these installations.

3. And if you need more than one of these applications, installing them without the separation provided by docker runs a real risk of conflicts. That would certainly cause as much grief as a docker <> LXC incompatibility every now and then.

Until we can get vGPU at a reasonable cost, passing the GPU to docker via LXC is the only reasonable option for running a bunch of applications that all need GPU resources.
Simply stating that docker on LXC will break won't stop it being done, because it's an unfortunate necessity at the moment.
 
In the meantime there is one more thing you need as a workaround.
On my Ubuntu 24.04 LXC containers I need:
apt install containerd.io=1.7.28-1~ubuntu.24.04~noble
And additionally, now in /etc/docker/daemon.json:
Code:
{
  "min-api-version": "1.24"
}

Without the min-api-version, traefik won't work anymore.
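After editing daemon.json, restart the docker daemon so the change takes effect:
Code:
systemctl restart docker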

--> However, is there any news on when this will get fixed, and is it even on the list to be fixed?

The thing is that LXC with Docker has worked for almost 2 years now without any issues.
If Proxmox doesn't want to support LXC containers properly, then remove LXC entirely...
Otherwise LXC is superior to VMs for tons of reasons, especially for AI workloads where you need to utilize L3 caches and NUMA properly when running llama.cpp/ik_llama or ollama or pytorch on CPUs with a little GPU offload on Epyc systems.
(The only way to run Kimi-K2 and other big models at a reasonable speed, or custom vector databases with embedding models and pytorch.)
Thanks to ~460 GB/s memory bandwidth and AVX512, I'm getting 15-20 tokens/s on Kimi-K2. (Zentorch)

This is not possible in VMs!
NUMA doesn't work properly in VMs on Proxmox, and L3 cache affinity is not possible either.
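For illustration, this is roughly what such pinning looks like inside the LXC (binary name, model file and thread count are placeholders, not from this thread):
Code:
# pin the inference server's CPU and memory allocations to NUMA node 0
numactl --cpunodebind=0 --membind=0 ./llama-server -m model.gguf --threads 32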

Cheers
Ping @t.lamprecht @fabian
 
The thing is that LXC with Docker has worked for almost 2 years now without any issues.
This may be true for you, but not for everyone:

If Proxmox doesn't want to support LXC containers properly, then remove LXC entirely...

They do support them properly, but they also recommend not doing something which is known to break from time to time. So people can decide whether they want to use docker in LXCs nonetheless (for whatever reason) or skip it because they prefer less breakage.
For the current issue there is already a patch referenced in the bug ticket https://bugzilla.proxmox.com/show_bug.cgi?id=7006 which implements something similar to the bugfix for Incus. It also contains an in-depth explanation by an Incus developer that the actual root cause lies not with Proxmox VE, Incus or LXC, but with the way AppArmor implements things.


Otherwise LXC is superior to VMs for tons of reasons, especially for AI workloads where you need to utilize L3 caches and NUMA properly when running llama.cpp/ik_llama or ollama or pytorch on CPUs with a little GPU offload on Epyc systems.

This is plain wrong, because it implies they are always superior. This simply isn't true; it depends on the use case, what you want to achieve, and which preferences or constraints you have. This is true even if the hosted applications are the same. For example, it's possible to set up shared web hosting with LXCs or with VMs. Depending on your goals you might do shared web hosting with LXCs (so you can fit more customers on one host) or with VMs (so you have stricter isolation between customers and hosts). Another example: in theory it's possible to run OpenMediaVault in an LXC, but it's not supported and actually doesn't work as intended: https://forum.openmediavault.org/in...ccess-additional-drives-via-proxmox-with-lxc/ You also can't use LXCs to host a non-Linux environment on Proxmox VE; for Windows or BSD (like OPNsense) you must use a VM. This doesn't make VMs superior to LXCs; they simply serve different use cases.
 
Does anyone have a solution for Alpine OS?
I have containerd.io version 2.1.3-2 installed (which is not the latest version), but I still have this issue.
I disabled AppArmor within LXC config as my container is trusted and not exposed directly to the internet.
 
I disabled AppArmor within LXC config as my container is trusted and not exposed directly to the internet.
Yeah, that's the same as what I did now. The only thing I'm wondering about is that I did not update any package within my LXC container, and the issue still occurs.

Is it an option to install an old containerd.io package version?