Docker containers fail to start on Proxmox 9 / Debian 13 host (worked fine on Proxmox 8)

Peterkal

Hi all,

I’ve been running Docker directly on a Proxmox 8 host with Debian 12 for a long time. My setup includes VMs and Docker containers side by side — everything worked perfectly. Even GPU passthrough to containers (Intel iGPU or NVIDIA) was easy to configure via Docker Compose and worked reliably.
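For reference, the Compose bits for the GPU part looked roughly like this (service and image names are placeholders, and the NVIDIA variant assumes the nvidia-container-toolkit is installed):

services:
  transcoder:
    image: example/transcoder:latest   # placeholder image
    devices:
      - /dev/dri:/dev/dri              # Intel iGPU: expose the DRI render nodes to the container

  gpu-app:
    image: example/gpu-app:latest      # placeholder image
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia           # NVIDIA: request the GPU through the container toolkit
              count: 1
              capabilities: [gpu]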

After upgrading to Proxmox 9 / Debian 13, my VMs still work fine, but Docker containers on the host fail to start. I’m getting various errors related to UNIX socket permissions, AppArmor denials, and IPC failures. Even simple containers like Alpine or MariaDB won’t start.

On one upgraded host with an NVIDIA GPU, I had to disable AppArmor entirely to get containers working again — and then everything worked as expected, including GPU passthrough. But disabling AppArmor feels like a risky workaround.

I’m not trying to emphasize GPU passthrough — that’s just a bonus. My main concern is that Docker containers don’t start at all on Proxmox 9 / Debian 13 host, even without any GPU configuration.

Questions:

  1. Is Docker on the host officially supported in Proxmox 9?
  2. Is there an AppArmor profile or configuration that allows Docker containers to run normally on Proxmox 9?
  3. What’s the recommended way to run Docker containers directly on the host without disabling AppArmor or moving everything into a VM?
I’m looking for an official and secure solution — not workarounds that compromise system security.

Thanks in advance!
 
Is Docker on the host officially supported in Proxmox 9?

I’m looking for an official and secure solution — not workarounds that compromise system security.
I think the answer is in the manual: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct
If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox QEMU VM.
People have argued about how this is worded in many threads already but I think this answers your first question with a clear No.
Proxmox staff has also mentioned why they don't want to integrate Docker images into Proxmox VE in the past (but I don't know the exact wording or where to find that particular thread, sorry).
 

Thanks for the reply — I understand the reasoning, and it’s logical and justified from a security and architectural standpoint.

But I do have one question: How is it that on Proxmox 8 with Debian 12, Docker ran perfectly fine directly on the host — including GPU passthrough (both iGPU and NVIDIA)? Everything worked smoothly, no AppArmor issues, no IPC errors, and even complex Docker Compose setups ran without a hitch.

I get that the recommended approach is to run Docker inside a VM, but that comes with limitations: a GPU passed through via IOMMU/PCI passthrough can only be assigned to one VM at a time, which means I lose the flexibility of sharing it across multiple containers. On the other hand, when Docker runs directly on the host, I can easily share the GPU between containers — which is ideal for workloads like AI, transcoding, etc.

So yes, I understand the philosophy behind Proxmox VE, but from a practical standpoint, this feels like a step backward compared to what worked reliably before. Maybe it would be worth considering a “Docker-friendly mode” on the host — even if it’s limited or unofficial — just to preserve that flexibility for users who need it.
 
A quick, crude kludge to disable AppArmor for Docker without needing to recreate the containers with a different profile:

systemctl edit docker

[Service]
Environment=container="disable apparmor"


From https://github.com/moby/moby/issues/41553#issuecomment-2056845244
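To apply it (a sketch; `systemctl edit` drops the override into /etc/systemd/system/docker.service.d/ and reloads the unit files, but the daemon still needs a restart):

systemctl edit docker                          # paste the [Service] block above, save and quit
systemctl restart docker
docker info --format '{{.SecurityOptions}}'    # apparmor should no longer appear here

As far as I understand from that issue, a set `container` variable makes dockerd behave as if it were running inside a container, so it skips applying its default AppArmor profile to new containers.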

Thank you for this! This will get a lot of hate, for sure, but unless someone has a better suggestion, I'm going with it.
I'm not switching to VMs just because of AppArmor.
 
There are incredibly valid reasons to run Docker on the host (yes, it's a massive hack and goes against the ethos of Proxmox, etc.); for some situations it's fine.

E.g., as mentioned, passthrough isn't always possible with a VM. For example, there's a GPU I don't want to dedicate to a single VM, and I want to share the disk that I use for Docker's data (via `data-root` in `daemon.json`).
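For what it's worth, the `data-root` part is just a one-liner in /etc/docker/daemon.json (the path is an example for whatever disk or mount you want Docker to live on):

{
  "data-root": "/mnt/docker-data/docker"
}

followed by a `systemctl restart docker` (existing images and volumes would need to be copied or re-pulled into the new location).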

¯\_(ツ)_/¯
 
I don't want to argue the point in the docs, and I fully support it (as stated numerous times in other threads).

For those who want to run it nonetheless:
What about the middle ground of running it in LXC containers? You get better isolation than on the host while still having access to the host's drivers and devices if wanted. I switched to this and it's working fine.
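For anyone going this route, a minimal sketch of what that looks like on the PVE side, assuming container ID 105 (nesting and keyctl are the usual requirements for Docker inside an unprivileged LXC):

pct set 105 --features nesting=1,keyctl=1      # or add "features: keyctl=1,nesting=1" to /etc/pve/lxc/105.conf
pct reboot 105
# then install Docker inside the container as on any plain Debian/Ubuntu system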
 
Because it doesn't work every time you need to upgrade (as has been seen in the other threads), and in fact doesn't work without tweaks anyway. I actually couldn't get the overlay2 driver working, etc., and is it really the "blessed" way to do things?

I completely agree with the fact that it's not supported, a "hack", whatever, but it feels like a really big slap in the face.

Also agreed that unless you have a valid reason, just run it in a VM.
 
Also agreed that unless you have a valid reason, just run it in a VM.
Less overhead is a perfectly valid reason for a lot of people.

Maybe it will come back to haunt me one day, but apart from this AppArmor change, I had absolutely no issues with Docker on either the host or LXCs. At least for homelab servers, where nothing is exposed to the internet or everything is behind a VPN, I just can't find justification for the VM overhead if it's not needed.
 
Maybe it will come back to haunt me one day, but apart from this AppArmor change, I had absolutely no issues with Docker on either the host or LXCs

Well, except that the AppArmor change is a real issue. You basically disabled one of the security layers to have it your way. Now call me paranoid, but imho "disabling security" should never be a solution, and if one needs to do it "to save resources", they should carefully think about whether they actually need a hypervisor at all. If the workload consists mostly of Docker containers, one could just as well install their favourite Linux server distribution with Docker and/or Podman and a management interface (e.g. Portainer or Dockge). Together with virt-manager you could even set up a VM if you ever need one (e.g. for Home Assistant or Windows).

And if you want to stay with Proxmox VE, the "overhead" of using VMs for Docker/Podman containers is way overblown imho: you can set up a Docker VM with Debian or Alpine with minimal specs of 2 GB of RAM or less (I remember someone in this forum said that he has Docker VMs with 1 GB or even less (512 MB) of RAM). If you put all your Docker containers in one VM, you won't lose much compared to a bare metal install or LXCs.
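To put a rough number on it, a small Docker VM can be created with something like this (VM ID, storage name and sizes are placeholders, and the install ISO still has to be attached and installed as usual):

qm create 200 --name docker-vm --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:16 \
  --scsihw virtio-scsi-pci --ostype l26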
 
I run a small web application with Docker in a Debian 13 VM, with 512 MB assigned (and no swap).
The times when VMs were slower than bare metal are long gone, but some things stick forever.
 
If it were one VM to run all of Docker it wouldn't be a problem, but for multiple, it adds up.

E.g. I have a lot of VLANs and need Docker containers on most of them. I can't run that on a single Docker host (at least not without hardcoding IPs that DHCP doesn't see, which I don't want to do). Also, most of these hosts need the GPU. With VMs I'd have to partition or dedicate it, while LXCs can share it, and each can utilize it fully if/when it needs to.
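For the GPU-sharing part, the lines in /etc/pve/lxc/<CTID>.conf look roughly like this for an Intel iGPU (major number 226 is the DRM subsystem; group IDs and idmapping for unprivileged containers vary per system, and NVIDIA additionally needs matching drivers inside the container):

lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir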

For this use case, less overhead and convenience beat AppArmor for me, considering it's a homelab fully behind a VPN with nothing publicly exposed. I know it's not by the book, but I don't think it's sacrilege either, or that I should give up on Proxmox completely just because I'm bypassing AppArmor.
 
You can have multiple vlans on a single VM. They can all use DHCP.
Can you please share how? I'm no expert, but I couldn't get that to work on bare metal Docker, only with macvlans, and that required hardcoding IPs. When I asked around for help, no one suggested a solution and just pointed me to Proxmox.

I'm not switching now, I'm quite happy with my proxmox setup, but I'd really like to know how to do that on bare metal.

Btw. I don't have complete connectivity between vlans. That was one of the main points of using them. Pretty strict firewall for inter-vlan stuff, and cutting off internet on most of them.
 