It’s a workaround, not a solution, for the moment. @lynze Could you add the link to the solution in the head post? https://github.com/opencontainers/runc/issues/4968
And a rather nasty one, since it removes AppArmor's additional security mechanism for the container. I can't believe people think that this should be adopted as a fix in an update of Proxmox VE. Maybe it's time to change the wording in the doc from "it's recommended to run docker/podman inside VMs" to "don't use docker/podman etc. in LXCs, it will break". @Neobin filed a ticket for this some years ago, maybe time to revive it? https://bugzilla.proxmox.com/show_bug.cgi?id=4712
"I think it's an uncontroversial statement that VMs require more resources than LXCs. By definition, the VM will always need a resource allocation that is separate from the host. Anything that you do to reduce the resource usage of a VM (e.g. use Alpine) can be done with an LXC, but the resources you have to reserve for the VM host will always be there."

That problem of "resource allocation" isn't really one. Of course, if you want one VM for each docker container you run, you will end up using more RAM (but not necessarily much more, see https://pve.proxmox.com/wiki/Dynamic_Memory_Management#KSM). But normally you wouldn't do this; you would run all your docker containers in one lightweight VM. My main docker Debian Trixie VM is configured with 4 GB RAM; right now it uses 1.5 GB. That can probably be reduced even more without changing anything, since Linux always uses part of the memory as cache. By switching the VM OS to Alpine, an even more lightweight VM should be possible. Another benefit of fitting all docker containers into one VM is that you only need to do system maintenance (updates etc.) once, instead of doing housekeeping for every LXC instance.
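If you want to sanity-check how much RAM KSM is actually merging across your VMs, the standard kernel sysfs counters are enough (ksmtuned ships with PVE; the numbers will obviously differ per setup):

# On the Proxmox host: pages currently deduplicated by KSM
# (multiply by the 4 KiB page size to estimate saved memory)
cat /sys/kernel/mm/ksm/pages_sharing
# Make sure the KSM tuning daemon is actually running
systemctl status ksmtuned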
I prefer to save on my time budget instead of saving RAM for the sake of saving RAM.
But if, for the sake of "saving resources", you prefer to waste your private time troubleshooting after breaking changes, be my guest.
I think downgrading to 1.7.28-1 is a better workaround at the moment.
"Maybe it's time to change the wording in the doc from 'it's recommended to run docker/podman inside VMs' to 'don't use docker/podman etc. in LXCs, it will break'."
And the solid reason is...
open sysctl net.ipv4.ip_unprivileged_port_start file: reopen fd N: permission denied: unknown
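For anyone debugging this: if AppArmor is the culprit, the denial should show up in the host's kernel log (standard commands, nothing Proxmox-specific):

# On the PVE host: look for AppArmor denials logged by the kernel
dmesg | grep -i apparmor
journalctl -k | grep 'apparmor="DENIED"'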
# Downgrade containerd.io to the last known-good build
apt install -y --allow-downgrades containerd.io=1.7.28-1~ubuntu.24.04~noble
# Keep apt from upgrading it again
apt-mark hold containerd.io
# Restart the affected services ("wings" is the Pterodactyl daemon in this setup)
systemctl restart containerd docker wings
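To confirm the downgrade and hold actually took (plain apt/containerd commands):

apt-cache policy containerd.io   # "Installed:" should show 1.7.28-1~ubuntu.24.04~noble
containerd --version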
With all due respect to the maintainer*, as soon as they can get GPUs cheap enough that we can all allocate one to each VM, or afford cards supporting vGPUs, then that may be the right recommendation. Not all of us can afford this.
# On the Proxmox host: edit the container's config (container ID 110 here)
cd /etc/pve/lxc
nano 110.conf
# Add this line to the config to disable AppArmor confinement:
lxc.apparmor.profile: unconfined
# Then enter the container and reboot it so the change takes effect
pct enter 110
reboot
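To verify the container really runs unconfined after the restart (standard procfs check):

# Inside the container: prints the AppArmor label; should now be "unconfined"
cat /proc/self/attr/current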
vGPUs have nothing to do with the question of how to run application containers like docker or podman. There are a lot of docker containers (e.g. paperless, nginx-proxy-manager, pangolin...) you can run without needing a GPU. And for software where you might want to utilize the GPU, there is often a way to set it up without docker; e.g. jellyfin describes a setup for that on their website for Debian (which can be used for a Debian LXC): https://jellyfin.org/docs/general/installation/linux/
You also don't need one GPU for each docker VM (if you decide to have a dedicated GPU); it's enough to have one VM where all GPU-specific workloads share it.
I'm aware that there are still apps (e.g. immich) where utilizing an iGPU via LXCs might be desirable but which don't support an installation without docker. But even then it's a good habit to run most docker-specific stuff in a VM and only use docker in LXCs where it's not possible or supported in another way. This reduces the probability of getting hit by an update-related breakage of docker inside LXCs.
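For those immich-style cases, this is roughly what an iGPU bind into a Debian LXC looks like on the PVE side. A sketch, not a full passthrough guide: major number 226 is the usual one for /dev/dri devices, but check ls -l /dev/dri on your own host first.

# /etc/pve/lxc/<CTID>.conf – allow and bind-mount the host's DRI devices
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir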
apt install containerd.io=1.7.28-1~ubuntu.24.04~noble

/etc/docker/daemon.json:
{
  "min-api-version": "1.24"
}
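If you try this combination, restart the daemons afterwards and check which API versions client and server actually negotiate (plain docker CLI):

systemctl restart containerd docker
docker version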
This may be true for you, but not for everyone: "The thing is, that LXC with Docker worked now for almost 2 years without any issues."
If Proxmox doesn't want to support LXC containers properly, then remove LXC entirely...
Otherwise, LXC is superior to VMs for tons of reasons, especially for AI workloads where you need to utilize L3 caches and NUMA properly if you run llama.cpp/ik_llama, ollama, or pytorch on CPUs with a little GPU offload on Epyc systems.
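For what it's worth, the NUMA pinning that matters for those workloads is a one-liner inside an LXC. A sketch assuming node 0 and a stock llama.cpp build; the binary name, model path, and thread count are placeholders for your setup:

# Keep inference threads and model weights on one NUMA node
# to avoid cross-node memory traffic on multi-socket/Epyc systems
numactl --cpunodebind=0 --membind=0 ./llama-server -m ./model.gguf --threads 32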
"Does anyone have a solution for alpine os?"
I disabled AppArmor within the LXC config, as my container is trusted and not exposed directly to the internet.
I have containerd.io version 2.1.3-2 installed (which is not the latest version), but I still have this issue.
"I disabled AppArmor within the LXC config, as my container is trusted and not exposed directly to the internet."
Yeah, that's the same thing I did. The only thing I'm wondering about is that I did not update any package within my LXC container and the issue still occurs.