Docker support in Proxmox

Incus allows running OCI containers in recent versions. The support is limited, though (if I recall correctly, not all Docker features work).
yeah, and it never will

if you want that, you need Portainer on bare metal, but you basically lose VM functionality because networking will be destroyed by Portainer

these two things don't go together; you'd really need to basically recreate Portainer in your VM solution

and even then you still have the issue of operating on a different logical layer, so your resource management goes out of the window

and many features bite each other: failover/HA, backups etc. have very different concepts between the two worlds

i understand the wish. it looks as if it would be so nice, on the surface

well, it doesn't work like that.

if we had this, you wouldn't want to work with it, because either way something has to take a big compromise.

either compose files won't work as you expect them to, or VMs won't... and managing that mess is the stuff of nightmares

again, the psychological trap: docker equals a VM.
LXC makes this trap worse because it's kinda in between, and it shows already (privileged vs. unprivileged etc.)
 
Last edited:
  • Like
Reactions: Johannes S
ok, seems i was not clear enough

docker is not able to do this; docker can't be a replacement for a VM. it's an application container, and it needs and expects the infrastructure to give it what it wants and needs


you are at the wrong layer here. proxmox is the first layer after the hardware.
it does the infrastructure

on top of that is where things like portainer love to live a free life, free of worries about storage or cpu cycles - they just get them

@bofh we’re actually in violent agreement on layers. Docker isn’t a VM, and Proxmox is the infrastructure plane. I’m not asking PVE to run Docker or schedule containers. The ask is tiny, opt‑in glue at the edge so people land on the right infra pattern faster: a "Container Host VM" preset (cloud‑init + sane defaults) and lightweight discovery/link‑out to Portainer that lives on top. No Docker on the node, no container networking on the node, no PaaS pivot.

If even a "Containers" pane feels like the wrong layer, start off by dropping it for the first test stab into this territory - make it zero introspection: mark VMs tagged as container hosts, show basic health, and provide Open in Portainer. That still prevents the recurring "just install Docker on PVE" mistakes, gives small teams a paved road, and keeps experts free to keep using Ansible/Terraform as they do today.

Think of it like how PVE integrates Ceph/PBS: Proxmox stays IaaS, but offers a bit of first‑class UX so the right tooling "on top" is easy to reach and hard to misuse. Boundaries intact, friction reduced.

Then revisit the "Containers" pane after this phase and test.
 
@bofh we’re actually in violent agreement on layers. Docker isn’t a VM, and Proxmox is the infrastructure plane. I’m not asking PVE to run Docker or schedule containers. The ask is tiny, opt‑in glue at the edge so people land on the right infra pattern faster: a "Container Host VM" preset (cloud‑init + sane defaults) and lightweight discovery/link‑out to Portainer that lives on top. No Docker on the node, no container networking on the node, no PaaS pivot.

If even a "Containers" pane feels like the wrong layer, start off by dropping it for the first test stab into this territory - make it zero introspection: mark VMs tagged as container hosts, show basic health, and provide Open in Portainer. That still prevents the recurring "just install Docker on PVE" mistakes, gives small teams a paved road, and keeps experts free to keep using Ansible/Terraform as they do today.

Think of it like how PVE integrates Ceph/PBS: Proxmox stays IaaS, but offers a bit of first‑class UX so the right tooling "on top" is easy to reach and hard to misuse. Boundaries intact, friction reduced.

Then revisit the "Containers" pane after this phase and test.

that's the issue: you can't separate these things.
people don't just load that one plain, simple docker file

they will relentlessly copy-paste from github the nastiest compose file they stumble on
and nothing will work

the things you mention are just a tiny subset of what docker does. implementing this is not only ugly (and still barely manageable), but it would make proxmox look bad.

besides, features are expensive - no, not in terms of money and time (that too), but in terms of other costs,
like bloat, complexity, time to learn the product


so proxmox has to be careful about what to implement and weigh the cost-to-result ratio.
there's a reason why proxmox has no built-in wireguard management.
 
  • Like
Reactions: Johannes S
what an interesting thread - from the pre-Aug 2025 confusion that unprivileged containers running as root in docker run as root (they don't, and anyone who thinks they do has likely forgotten or never realized that group and user masks are not a security boundary, which is why ACLs are more secure)

to this recent conversation on docker

for me (and i am a voice of 1, i get that) proxmox is a hypervisor with the interesting but not that compelling LXC thing tacked on (with a security model that is barely better than docker's).

  1. i agree with the assertion that adding docker to the core platform is a lot of work
  2. i don't agree that the issues with docker networking that folks have articulated are make or break - they can be mitigated and resolved (proof point: truenas apps that use docker) - but see point #1 - a lot of work... i am not sure it is worth it, and I am a huge advocate of docker
  3. I agree that for those who want to use docker in a well-behaved way that doesn't tattoo the platform with crud, a community script would be useful; i would advocate that it could achieve what was asked for by bootstrapping a docker VM, installing Portainer CE in it, and using virtiofs to surface a bind-mount location of the user's choice (this is how i do it)
  4. i don't agree that #3 implies the interface has to be in the proxmox ui
i won't pretend to understand what proxmox's paid customers want and how that should influence what home labbers want; i do have a preference that proxmox is a stable and fully featured hypervisor first and foremost

of course, if the people who wanted docker in the proxmox platform created PRs that did that and solved all the issues with networking, i am sure they would get considered /s ... sort of ...
:p
 
  • Like
Reactions: Johannes S
I agree that for those who want to use docker in a well-behaved way that doesn't tattoo the platform with crud, a community script would be useful; i would advocate that it could achieve what was asked for by bootstrapping a docker VM, installing Portainer CE in it, and using virtiofs to surface a bind-mount location of the user's choice (this is how i do it)
This is the part I don't get. Any sysadmin worth his salt should be able to set up such a thing. I mean, I understand that it might be useful to have something like a template for a docker+portainer VM for home users. But to be honest, Proxmox VE isn't the best platform for people who "just want to run some self-hosted services without becoming a sysadmin". A NAS OS with docker and/or VM support is way better suited for that use case. Unraid or other NAS OSes obviously have fewer features and less flexibility than Proxmox VE. But imho for most home users this flexibility and these features are not really needed, and the additional complexity of administrating PVE isn't really worth it. But things that "just work" are less suited for generating clicks and revenue for youtubers, so here we are ¯\_(ツ)_/¯
 
  • Like
Reactions: scyto
This is the part I don't get. Any sysadmin worth his salt should be able to set up such a thing. I mean, I understand that it might be useful to have something like a template for a docker+portainer VM for home users. But to be honest, Proxmox VE isn't the best platform for people who "just want to run some self-hosted services without becoming a sysadmin". A NAS OS with docker and/or VM support is way better suited for that use case. Unraid or other NAS OSes obviously have fewer features and less flexibility than Proxmox VE. But imho for most home users this flexibility and these features are not really needed, and the additional complexity of administrating PVE isn't really worth it. But things that "just work" are less suited for generating clicks and revenue for youtubers, so here we are ¯\_(ツ)_/¯
i understand why you say that, and it would also imply that all the community scripts are not needed, that no one should use images on dockerhub made by others, and that every sysadmin worth their salt should compile proxmox and the kernel from source for every install (that last bit i did have to do one time, lol)

i know that I am taking your statement to its illogical conclusion; i am hopefully just trying to get you to be less gate-keepy about other people's requirements (this isn't the truenas community, you know - where thou shalt not ask for things the MVPs don't want) and accept that different people have different desires / wants / capability levels for an OSS solution used by a community of young and old, IT experts, and people who only ever sysadmin at home.... the beauty of OSS is that it generally can be adopted by anyone who is curious and wants to learn.

now if you are talking about a business sysadmin - i am 100% with you ;-)

and as for YT'ers, yes, their unraid and truenas videos are as awful and wrong as their proxmox ones (with a few notable exceptions)

and for the record, i install my docker VMs and proxmox by hand (and my new k8s VMs); i really need to sit down and learn ansible....
 
Last edited:
  • Like
Reactions: Johannes S
that's the issue: you can't separate these things.
people don't just load that one plain, simple docker file

they will relentlessly copy-paste from github the nastiest compose file they stumble on
and nothing will work

the things you mention are just a tiny subset of what docker does. implementing this is not only ugly (and still barely manageable), but it would make proxmox look bad.

besides, features are expensive - no, not in terms of money and time (that too), but in terms of other costs,
like bloat, complexity, time to learn the product


so proxmox has to be careful about what to implement and weigh the cost-to-result ratio.
there's a reason why proxmox has no built-in wireguard management.
@bofh seriously, I’m with you on the dangers. Layer bleed, compose expectations, and host networking landmines are real. That’s exactly why I’m not asking PVE to expose containers, parse compose, or touch guest networking at all. MVP only, explicitly scoped.

What I’m proposing is smaller than "features," and cheaper than support debt we already see:
  • a Container‑Host VM preset (just a cloud‑init snippet with sane defaults for containerd/Docker and overlay2 on ZFS≥2.2/xfs),
  • a single optional External Manager URL on a VM that renders one "Open Manager" button (Portainer/K8s dashboard lives in the VM), and
  • a clear “no Docker on the node” banner with a one‑click path to create that VM instead.
No Docker on the host, no container introspection, no compose UI, no scheduling semantics. It’s a paved road for the right pattern and a hyperlink - nothing that makes PVE look responsible for app‑level behavior. Net effect: fewer "I broke my node with Docker" threads, faster on‑ramp for small teams, and zero bloat to core PVE concepts. If even the button feels too much, start with template + docs only; the win is still there.

I'm talking about an onramp to give a defined path and to begin exploration to see if this makes sense as time progresses.

what an interesting thread - from the pre-Aug 2025 confusion that unprivileged containers running as root in docker run as root (they don't, and anyone who thinks they do has likely forgotten or never realized that group and user masks are not a security boundary, which is why ACLs are more secure)

to this recent conversation on docker

for me (and i am a voice of 1, i get that) proxmox is a hypervisor with the interesting but not that compelling LXC thing tacked on (with a security model that is barely better than docker's).

  1. i agree with the assertion that adding docker to the core platform is a lot of work
  2. i don't agree that the issues with docker networking that folks have articulated are make or break - they can be mitigated and resolved (proof point: truenas apps that use docker) - but see point #1 - a lot of work... i am not sure it is worth it, and I am a huge advocate of docker
  3. I agree that for those who want to use docker in a well-behaved way that doesn't tattoo the platform with crud, a community script would be useful; i would advocate that it could achieve what was asked for by bootstrapping a docker VM, installing Portainer CE in it, and using virtiofs to surface a bind-mount location of the user's choice (this is how i do it)
  4. i don't agree that #3 implies the interface has to be in the proxmox ui
i won't pretend to understand what proxmox's paid customers want and how that should influence what home labbers want; i do have a preference that proxmox is a stable and fully featured hypervisor first and foremost

of course, if the people who wanted docker in the proxmox platform created PRs that did that and solved all the issues with networking, i am sure they would get considered /s ... sort of ...
:p

@scyto I’m with you on not dragging Docker "into core," and I agree the networking pain is solvable - inside a VM. What I’m proposing is basically to codify the good path you already use with the smallest possible change set, shipped as an MVP, not a feature bucket:
  • Phase 0 (docs only): An official "don’t run Docker on the node" guardrail with a one‑click link to create a container‑host VM, plus published cloud‑init snippets (containerd/Docker, cgroup v2, overlay2 on xfs/ZFS≥2.2).
  • Phase 1 (preset): A first‑class Container‑Host VM preset that’s just GUI sugar over those snippets. No container UI in PVE, no compose parsing, no scheduling, no guest‑networking tweaks.
  • Phase 2 (optional nicety): A single External Manager URL field on that VM that renders an "Open Manager" button. If you point it at Portainer running in the VM, great; if not, nothing appears.
On storage: I’m fine with virtiofs if folks understand the HA/backup semantics, but I’d actually prefer a dedicated vdisk (or NFS/CephFS) for Docker data to keep PBS and migration clean. On networking: keep all Docker networks inside the VM; if people need macvlan/ipvlan or to disable Docker iptables, that’s their lane - PVE doesn’t touch it.
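To make Phase 0/1 a bit more concrete, here is a rough sketch (Python, run on a PVE node) of what the "GUI sugar" could wrap - just plain qm calls building a cloud-init template. The VMID, storage name, image file, snippet name, and the "container-host" tag are placeholders I'm making up for illustration, not official defaults:

```python
#!/usr/bin/env python3
"""Sketch of the Phase 0/1 idea: build a Debian cloud-init template for a
container-host VM with plain `qm` calls. VMID, storage, image and snippet
names below are placeholders, not anything official."""
import subprocess

VMID = "9000"                                    # hypothetical template VMID
STORAGE = "local-lvm"                            # block storage for OS + docker-data disks
IMAGE = "debian-12-genericcloud-amd64.qcow2"     # cloud image downloaded beforehand
SNIPPET = "local:snippets/container-host.yaml"   # user-data: runtime install, overlay2, cgroup v2

def qm(*args: str) -> None:
    """Run a qm command and abort if it fails."""
    subprocess.run(["qm", *args], check=True)

qm("create", VMID, "--name", "container-host-template",
   "--memory", "4096", "--cores", "2",
   "--net0", "virtio,bridge=vmbr0", "--scsihw", "virtio-scsi-pci")
qm("importdisk", VMID, IMAGE, STORAGE)                      # OS disk from the cloud image
qm("set", VMID, "--scsi0", f"{STORAGE}:vm-{VMID}-disk-0")   # attach it (volume name may differ)
qm("set", VMID, "--scsi1", f"{STORAGE}:32")                 # dedicated 32G vdisk for /var/lib/docker
qm("set", VMID, "--ide2", f"{STORAGE}:cloudinit")           # cloud-init drive
qm("set", VMID, "--boot", "order=scsi0", "--serial0", "socket", "--vga", "serial0")
qm("set", VMID, "--cicustom", f"user={SNIPPET}")            # the "blessed" snippet does the guest-side setup
qm("set", VMID, "--tags", "container-host")                 # so scripts / the UI can find these VMs
qm("template", VMID)                                        # clone this instead of touching the node
```

Everything above is stock qm functionality today; the "preset" would just be a small wizard around it plus a maintained snippet.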

This isn’t gate‑keeping; it’s a paved on‑ramp for newcomers that reduces forum breakage, while leaving power users with Ansible/Terraform exactly as-is. If the community wants to PR Phase 0/1 (snippets + preset) first, even better. If adoption is weak, we stop there; if it’s strong, we can discuss whether Phase 2+ is worth it. Boundaries intact, friction down.

This is the part I don't get. Any sysadmin worth his salt should be able to set up such a thing. I mean, I understand that it might be useful to have something like a template for a docker+portainer VM for home users. But to be honest, Proxmox VE isn't the best platform for people who "just want to run some self-hosted services without becoming a sysadmin". A NAS OS with docker and/or VM support is way better suited for that use case. Unraid or other NAS OSes obviously have fewer features and less flexibility than Proxmox VE. But imho for most home users this flexibility and these features are not really needed, and the additional complexity of administrating PVE isn't really worth it. But things that "just work" are less suited for generating clicks and revenue for youtubers, so here we are ¯\_(ツ)_/¯

@Johannes S sure, any competent admin can roll a Docker+Portainer VM. The value here isn’t capability, it’s consistency and guardrails. A blessed cloud‑init preset and a one‑click link reduce support noise, shorten onboarding for SMBs and mixed teams, and steer newcomers away from host‑level Docker without turning PVE into a PaaS. AND it tests community interest/participation so it's a win-win for the Proxmox dev team and community.

Think of it like the Ceph/PBS touches PVE already offers: Proxmox stays a hypervisor, but gives a paved road to the tooling most shops actually use. Folks who want Unraid/TrueNAS can absolutely go there; many of us deliberately choose PVE + VMs for isolation, backups, and HA while still living on/handling OCI images.

This proposal serves both camps without bloating core.
 
  • Like
Reactions: scyto
This isn’t gate‑keeping;
i didn't say it was; i used the phrase in a sentence about how we have to be careful not to dismiss other people's requirement asks based on our own experiences and expectations - not every user of proxmox is a sysadmin

tl;dr i was agreeing with you. nice post BTW, thoughtful and engaging
:cool:
 
  • Like
Reactions: verulian
but I’d actually prefer a dedicated vdisk (or NFS/CephFS) for Docker data to keep PBS and migration clean
i found virtiofs surfaces my cephFS more easily and more stably than mounting cephFS inside the three docker swarm VMs, and i disagree with using NFS for bind mounts: the semantics are even worse, and databases placed on NFS mounts will corrupt (it's only a question of when, not if), and the number of premade container solutions that use sqlite etc. makes me think that NFS should not be used in your proposed solution. i am more amenable to mounted disks in non-clustered docker, maybe stored as a dedicated vhd or similar - i personally dislike how opaque those are, but i can see how it would work well for a broad, curated experience like you propose
 
Last edited:
Didn't read the whole thread, but why would you want Proxmox VE to run/manage Docker?
Question is, why overcomplicate things?
- Docker/Podman network would/could interfere with PVE network
- Security issues may pop up
- There are some good/free/open source management tools (with WebUI) to manage Docker/Podman around
- Docker/Podman is perfectly running in a KVM VM (and even in an LXC CT)
- Running Docker/Podman in a VM or CT already enables you to use Ceph/ZFS and backups to PBS, using your favourite deployment tools, etc.
- Running Docker/Podman in a VM or CT allows you to use PVEs HA features and still does not limit you in features, your Application Containers offer
- if you run your Docker containers in an LXC CT you also have nearly zero overhead compared to a KVM VM (though you need to be a little more aware of the networking stack when running in LXC)
So why? I'm honestly interested! Why would you like to run your application containers on the host?
* With "you" i address my question tho those, who ask for that feature. My personal vote is against that, if that matters.
 
Last edited:
i found virtiofs surfaces my cephFS more easily and more stably than mounting cephFS inside the three docker swarm VMs, and i disagree with using NFS for bind mounts: the semantics are even worse, and databases placed on NFS mounts will corrupt (it's only a question of when, not if), and the number of premade container solutions that use sqlite etc. makes me think that NFS should not be used in your proposed solution. i am more amenable to mounted disks in non-clustered docker, maybe stored as a dedicated vhd or similar - i personally dislike how opaque those are, but i can see how it would work well for a broad, curated experience like you propose

@scyto thanks - appreciate the nudge and the kind words. It does sound like we’re aligned on the goal: keep Docker in a VM, keep PVE clean, give folks a paved on‑ramp. Then let's see where the road goes after that. On storage, your points are solid.

Phase 1 preset: Dedicated vdisk attached to the Container‑Host VM for /var/lib/docker (xfs or ext4; overlay2).
  • If the cluster has Ceph, place that vdisk on RBD via PVE so it’s thin‑provisioned, snapshot‑able, migratable, and backed up by PBS with VM semantics intact.
  • This avoids the DB‑on‑NFS foot‑gun, gives local‑FS semantics that SQLite and friends expect, and keeps HA/backup behavior predictable.
Advanced option (opt‑in, with warnings): virtiofs export from the host (e.g., a CephFS path) into the VM for shared, RWX content only (artifacts, static assets, logs, caches).
  • Clear banner: not captured by VM backups, rely on storage‑level backups; not for databases or write‑ordering sensitive workloads.
  • Your experience tracks mine: virtiofs is convenient and stable for CephFS exposure, but it should be explicit and intentional.
Not a default in the preset:
  • NFS bind mounts from inside the VM. Too many prebuilt images use SQLite or assume strict POSIX semantics; sooner or later, something important lands there and bites you. If someone chooses NFS, it probably should be for read‑mostly content and with eyes open.
I think that storage matrix maybe lets us keep PBS/migration clean by default, while still giving power users your virtiofs path when it fits?
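As a small illustration of the DB-on-NFS foot-gun guard, a guest-side check along these lines (pure Python stdlib; /var/lib/docker is just Docker's stock data root) could warn when the data root ends up on a network filesystem:

```python
#!/usr/bin/env python3
"""Guest-side sanity check: warn if Docker's data root sits on a network
filesystem, where SQLite-backed containers tend to corrupt sooner or later."""
import os

DATA_ROOT = "/var/lib/docker"          # stock Docker data root
NETWORK_FS = {"nfs", "nfs4", "cifs"}   # filesystems we want to warn about

def fs_type_for(path: str) -> str:
    """Return the filesystem type of the mount containing *path*."""
    real = os.path.realpath(path)
    best_mount, best_fs = "", "unknown"
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _, mountpoint, fstype, *_ = line.split()
            # the longest mountpoint that is a prefix of the path wins
            if real.startswith(mountpoint) and len(mountpoint) > len(best_mount):
                best_mount, best_fs = mountpoint, fstype
    return best_fs

fstype = fs_type_for(DATA_ROOT)
if fstype in NETWORK_FS:
    print(f"WARNING: {DATA_ROOT} is on {fstype}; databases in containers will "
          "corrupt eventually - use a dedicated vdisk (xfs/ext4) instead.")
else:
    print(f"{DATA_ROOT} is on {fstype}; local-FS semantics look fine.")
```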
 
  • Like
Reactions: scyto
(I have strong opinions in support of Docker not being a part of Proxmox, and I once wrote a very long post about why I thought that was the best approach. See: https://forum.proxmox.com/threads/docker-support-in-proxmox.27474/post-696840 and briefly https://forum.proxmox.com/threads/docker-support-in-proxmox.27474/post-764962 . Adding to those, if you're going to support built-in Docker, you should also be supporting built-in Podman. Unifi's new containerized self-hosted management product (replacing the old self-hosted Unifi Network application) requires Podman; I'm sure it's not the only thing. And at some point, when some third thing becomes an industry-standard containerization engine/platform, the same argument could be made for supporting it. That's a bottomless rabbit hole, I think.)

After reading this thread, I think the original proposal from @verulian (as I understood it) assumes that the Proxmox team possibly has or will have an interest in supporting end-users' Docker environments for free--unless the Docker feature is locked behind a support contract. (Because if there's an officially blessed way to do Docker, then users will start expecting Proxmox (the entity) to support them when they try to do Docker.)

I'd be willing to bet money that they do not have any interest in supporting an infinite range of possible Docker configurations (because people will customize the standard template, every time), and I can't blame them for that decision. The mod and dev teams in the forums here already go above and beyond helping people with their customized Proxmox VE/PBS configs and edge case deployments for a software stack they develop and control.

They don't develop or control Docker, and I don't think it's reasonable to expect them to be able to provide the same level of support for Docker as they do their own products, even if they wanted to. And shipping a major software component that you cannot feasibly fully support seems ... bad.

The only official statement on Docker from Proxmox that I'm aware of is the recommendation against installing it in an LXC, and the recommendation to use a VM instead.
 
Didn't read the whole thread, but why would you want Proxmox VE to run/manage Docker?
Question is, why overcomplicate things?
- Docker/Podman network would/could interfere with PVE network
- Security issues may pop up
- There are some good/free/open source management tools (with WebUI) to manage Docker/Podman around
- Docker/Podman is perfectly running in a KVM VM (and even in an LXC CT)
- Running Docker/Podman in a VM or CT already enables you to use Ceph/ZFS and backups to PBS, using your favourite deployment tools, etc.
- Running Docker/Podman in a VM or CT allows you to use PVEs HA features and still does not limit you in features, your Application Containers offer
- if you run your Docker containers in an LXC CT you also have nearly zero overhead compared to a KVM VM (though you need to be a little more aware of the networking stack when running in LXC)
So why? I'm honestly interested! Why would you like to run your application containers on the host?
* With "you" i address my question tho those, who ask for that feature. My personal vote is against that, if that matters.

@flames TL;DR? Short version: not asking Proxmox to run/manage Docker on the host. Agree that host‑level Docker tangles networking and security, and there are already great tools (Portainer, K8s, etc.).

The ask is a tiny MVP, purely as an on‑ramp (for now) to clean this situation up, reduce confusion and set some sane conventions for the use of Proxmox in this context:
  • An official Container‑Host VM preset (cloud‑init + containerd/Docker, sane cgroup/overlay2 defaults).
  • A "don’t install Docker on the node" guardrail with a one‑click path to create that VM.
  • An optional External Manager URL field on the VM that renders an "Open in Portainer" button (the manager lives inside the VM).
That's it. No container UI in PVE, no compose parsing, no guest networking tweaks, no scheduling. Containers stay inside VMs (or unprivileged LXC if someone insists). You still get Ceph/ZFS, PBS backups, PVE HA, and whatever orchestration you like - exactly as you said - just with a paved, safe path that cuts down the endless "I installed Docker on the node and nuked iptables" threads, etc. That leaves boundaries intact and friction down, and we can figure out whether we're going anywhere after that.
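To show how little machinery the "Open in Portainer" bit needs, here is a zero-introspection sketch that only uses what PVE already exposes: a VM tag plus a "manager-url:" line in the VM's Notes field. Both conventions are ones I'm inventing for illustration; the pvesh calls themselves are the standard CLI:

```python
#!/usr/bin/env python3
"""Zero-introspection discovery sketch: list VMs tagged 'container-host' and
print the manager URL stored as a 'manager-url: ...' line in their Notes.
Tag name and Notes convention are made up; pvesh is the normal PVE CLI."""
import json
import subprocess

def pvesh_get(path: str, *args: str):
    out = subprocess.run(["pvesh", "get", path, *args, "--output-format", "json"],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

for vm in pvesh_get("/cluster/resources", "--type", "vm"):
    if vm.get("type") != "qemu":
        continue                                   # skip LXC entries
    tags = (vm.get("tags") or "").split(";")
    if "container-host" not in tags:
        continue
    cfg = pvesh_get(f"/nodes/{vm['node']}/qemu/{vm['vmid']}/config")
    for line in (cfg.get("description") or "").splitlines():
        if line.lower().startswith("manager-url:"):
            url = line.split(":", 1)[1].strip()
            print(f"{vm.get('name', vm['vmid'])} -> Open Manager: {url}")
```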
 
  • Like
Reactions: scyto
  • An official Container‑Host VM preset (cloud‑init + containerd/Docker, sane cgroup/overlay2 defaults).

interesting, but then people want official templates and images for everything.
Proxmox offers official templates for their own products, like Proxmox Mail Gateway. they really should not add "Proxmox official" tags for third-party products; they would then need to support them. no, really.
the third-party product vendors should offer easy-to-use-with-proxmox templates, as some already do. there are also many turn-key templates etc. that are not PVE-specific but just work.

  • A "don’t install Docker on the node" guardrail with a one‑click path to create that VM.

uhm, not easy. you want portainer with docker, i want dockge with podman, someone else wants something else, and so on.
"don't install Docker on the node" is documented. adding a warning in the GUI about that is also "strange", you can't list all "do nots". people do install sh*t on the node (looking at my self in the past).

  • An optional External Manager URL field on the VM that renders an "Open in Portainer" button (the manager lives inside the VM).

here i would propose a feature: an easy hook to add custom links/buttons to the PVE GUI, so it persists after PVE updates - like a custom-menu.conf editable in the GUI (per cluster, per node, per VM/CT)
 
Last edited:
interesting, but then people want official containers for everything.
Proxmox offers official templates for their own products, like Proxmox Mail Gateway. they really should not add "Proxmox official" templates for third party products.
maybe the third-party product vendors should offer easy-to-use-with-proxmox templates, as some already do


uhm, not easy. you want portainer with docker, i want dockge with podman, someone else wants something else, and so on.
"don't install Docker" is documented. adding a warning in the GUI about that is also "strange", you can't list all "do nots"


here i would propose a feature, an easy hook, to add custom links/buttons to the PVE GUI, so it persists across PVE updates

@flames fair points. To avoid the "official template for every product" slope, the Container‑Host VM I’m asking for wouldn’t bundle any third‑party app at all. Make it a plain Debian‑based preset (PVE’s native lineage), with cloud‑init that toggles a runtime choice at create time: Podman or containerd from Debian by default, and an optional switch to install Docker CE (if the admin wants it). That keeps Proxmox neutral, avoids endorsing specific vendors, and still gives a paved, known‑good path.

On the guardrail: agreed we can’t list every "do not." This one is unusually high‑impact and common, though, so a contextual warning only when dockerd is detected on the host seems reasonable: "Host‑level Docker is unsupported; create a Container‑Host VM instead." No pop‑ups otherwise.

And I really like your idea of a generic hook for custom links/buttons in the UI - cluster-wide and upgrade-safe. That could maybe be even better than a hardcoded "Open in Portainer" button. Call it External Integrations: per VM (and maybe per node/cluster) we can store label+URL pairs and render them as buttons. Perhaps have a default entry à la the previous discussion. Then anyone can point to Portainer, Dockge with Podman, the K8s Dashboard, Grafana, whatever they run inside the VM - no PVE ownership of those tools.
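Purely to illustrate the shape of that hook - the file name, the /etc/pve location (which pmxcfs already replicates cluster-wide), and the one-entry-per-line "scope|label|url" format are all hypothetical here - something this small would already cover the label+URL pairs:

```python
#!/usr/bin/env python3
"""Hypothetical 'External Integrations' config: one 'scope|label|url' entry per
line, e.g. scope 'cluster', 'node/pve1' or 'vm/101'. Format and path are
invented for illustration; /etc/pve itself is replicated by pmxcfs."""
from pathlib import Path

CONF = Path("/etc/pve/custom-menu.conf")   # hypothetical file, survives PVE upgrades

def load_buttons(conf: Path) -> dict[str, list[tuple[str, str]]]:
    """Return {scope: [(label, url), ...]} from the conf file, if present."""
    buttons: dict[str, list[tuple[str, str]]] = {}
    if not conf.exists():
        return buttons
    for raw in conf.read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        parts = [p.strip() for p in line.split("|", 2)]
        if len(parts) != 3:
            continue                               # ignore malformed entries
        scope, label, url = parts
        buttons.setdefault(scope, []).append((label, url))
    return buttons

# Example contents:
#   cluster|Internal wiki|https://wiki.example.internal/pve
#   vm/101|Open Portainer|https://portainer.example.internal:9443
for scope, entries in load_buttons(CONF).items():
    for label, url in entries:
        print(f"[{scope}] {label} -> {url}")
```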

So the MVP shape becomes:
  • Debian Container‑Host preset (cloud‑init; choose Podman/containerd by default, Docker CE optional).
  • Host‑Docker detection → one inline warning with a "create container‑host VM" link.
  • Generic External Integrations hook for custom buttons/links, upgrade‑safe.
Keeps Proxmox neutral, keeps layers clean, and gives folks a safe on‑ramp without turning PVE into a PaaS.
 
  • Like
Reactions: scyto
  • Debian Container‑Host preset (cloud‑init; choose Podman/containerd by default, Docker CE optional).

again, this needs to be maintained and supported. those debian templates are not maintained by the Proxmox team AFAIK. also, someone wants ubuntu, arch, suse (god forbid)
but still, i understand your point. maybe a link to RTFM, where some suggestions could be proposed in the wiki, if that isn't too much extra work.


  • Host‑Docker detection → one inline warning with a "create container‑host VM" link.

it is not only docker, but detecting some well-known packages and repositories that conflict with PVE and giving a warning is a good idea.
additionally: "apt-mark hold packagename" - if that is later overridden by an unhold, show a warning in the GUI.
 
@flames fair points. To avoid the "official template for every product" slope, the Container‑Host VM I’m asking for wouldn’t bundle any third‑party app at all. Make it a plain Debian‑based preset (PVE’s native lineage), with cloud‑init that toggles a runtime choice at create time: Podman or containerd from Debian by default, and an optional switch to install Docker CE (if the admin wants it). That keeps Proxmox neutral, avoids endorsing specific vendors, and still gives a paved, known‑good path.

On the guardrail: agreed we can’t list every "do not." This one is unusually high‑impact and common, though, so a contextual warning only when dockerd is detected on the host seems reasonable: "Host‑level Docker is unsupported; create a Container‑Host VM instead." No pop‑ups otherwise.

And I really like your idea of a generic hook for custom links/buttons in the UI - cluster-wide and upgrade-safe. That could maybe be even better than a hardcoded "Open in Portainer" button. Call it External Integrations: per VM (and maybe per node/cluster) we can store label+URL pairs and render them as buttons. Perhaps have a default entry à la the previous discussion. Then anyone can point to Portainer, Dockge with Podman, the K8s Dashboard, Grafana, whatever they run inside the VM - no PVE ownership of those tools.

So the MVP shape becomes:
  • Debian Container‑Host preset (cloud‑init; choose Podman/containerd by default, Docker CE optional).
  • Host‑Docker detection → one inline warning with a "create container‑host VM" link.
  • Generic External Integrations hook for custom buttons/links, upgrade‑safe.
Keeps Proxmox neutral, keeps layers clean, and gives folks a safe on‑ramp without turning PVE into a PaaS.
After catching up, I'm curious why this needs to be a Proxmox-specific VM template at this time?

Specifically, this project is still in the very early planning stages: what should be in the VM template, how should the initial install script work, what options should it provide, etc. There was also earlier talk up-thread about doing cgroups right and some other bits that it would be very welcome to standardize with documentation.

It seems like this would be a prime candidate for a more generic GitHub-based project for providing a great starting point for a VM template for a Docker host VM, even if at first the repo is only an engineering requirements specification listing a goal and what needs to happen to accomplish it. That document should evolve to give an idea of what a feature complete template environment (with deployment/setup/first boot script) would look like.

Even if that repo never shipped a single actual template, you'd end up having created a fantastic guide to creating a VM/host template usable as a Docker host with sane defaults. That would be a great service to the entire community. I've been using Docker and Podman in home server environments for years and still would love to see a community-reviewed reference like this.

Helping people learn is, I think, much more important than just giving them a button to click to auto-deploy something--though that certainly has its place as well.

As far as what an implementation might look like, have you seen DietPi ( https://dietpi.com/ )? It's an extremely lightweight OS targeted at systematic deployment of various vetted containerized apps on limited hardware (it was originally targeted at Pis, but now has maintained versions for various x86 devices as well), including a prepared Proxmox-compatible VM template:
https://dietpi.com/docs/install/#how-to-install-dietpi-proxmox

I think a similar approach to deploying this hypothetical safe, sane, but mostly bare containerization VM could be an interesting place to start.

And I really like your idea of a generic hook for custom links/buttons in the UI - cluster-wide and upgrade-safe. That could maybe be even better than a hardcoded "Open in Portainer" button. Call it External Integrations: per VM (and maybe per node/cluster) we can store label+URL pairs and render them as buttons.

You might consider opening a feature request over on the Proxmox Bugzilla for adding this to the Proxmox API (this would be a long-term project that would involve working closely with the PVE devs, if they have the resources to examine this right now; you'd need to do the dev work yourself with their support). Something along the lines of a "Custom Integration Button UI" API that would enable devs to write a Proxmox plugin to add a button to a specified place in the GUI. Not just for jumping to a specific VM or LXC web UI, but also for entities that want to add a link to their internal documentation wiki or whatever that has their policies and procedures for deploying and maintaining PVE nodes, VMs, and LXCs, for example. (A feature request should include as many broadly applicable use cases as possible to help the PVE devs prioritize things, I think.)

In the alternative, you could ask for a user-facing, non-API feature to add some small number of custom buttons to the UI up near the reboot/shut down buttons. This would need to be planned and implemented by the PVE dev team so as not to allow a user to break the UI by adding 50 buttons, for example.
 
Last edited:
btw, there is an old topic by a well-known and helpful user, Dunuin, who suggests a more interesting approach:
 
i understand why you say that, and it would also imply that all the community scripts are not needed, that no one should use images on dockerhub made by others, and that every sysadmin worth their salt should compile proxmox and the kernel from source for every install (that last bit i did have to do one time, lol)


I wouldn't go so far as to say everybody should compile everything themselves. There is a reason why I use Debian stable whenever I can instead of Gentoo :D I'm more like: if you want an easy way to spin up a service without needing to know the more technical bits, this is a solved problem with docker-compose or podman, combined with something like Portainer or Dockge, or your NAS WebUI docker panel if you prefer a UI. The community scripts are an impressive piece of work, but quite tragically one that isn't needed imho, since we already have docker. I mean, at work we have a self-written shell script in our VM template for new Linux VMs which does the initialization at first boot. It does the job, but we are at a point where we have basically reinvented cloud-init, just with extra steps, fewer features, and more problems. So I will get rid of it as soon as possible and replace it with the standard tool for the job (cloud-init). So my gate-keeping is not about gatekeeping at all; I love that people get empowered to do things on old hardware without needing expensive cloud subscriptions or vendor lock-in. But for me empowerment means getting to know how to do things and why you do things, not learning how to use a hammer and a nail for everything.

So my approach for empowering people (which is really a good thing, I don't want gatekeeping!) is more with @SInisterPisces :

Even if that repo never shipped a single actual template, you'd end up having created a fantastic guide to creating a VM/host template usable as a Docker host with sane defaults. That would be a great service to the entire community. I've been using Docker and Podman in home server environments for years and still would love to see a community-reviewed reference like this.

Helping people learn is, I think, much more important than just giving them a button to click to auto-deploy something--though that certainly has its place as well.

now if you are talking about a business sysadmin - i am 100% with you ;-)

I actually meant business sysadmins or devops teams, since @verulian claimed that docker support would be needed for "shops that live on OCI images, CI/CD, and modern app delivery." Now call me an old, snarky UNIX admin, but imho if you are such a shop you should also be able to pay someone to set up the needed infrastructure in a VM. For home users I personally think that large parts of the typical "best practices" in r/proxmox or r/homelab are awful, but if it works for them I'm fine with it. My impression, however, is that it actually doesn't, although they don't want to hear it :)

and as for YT'ers, yes, their unraid and truenas videos are as awful and wrong as their proxmox ones (with a few notable exceptions)
I would never expect anything different :) But imho (this might be a bit too optimistic) Unraid or other home-user NAS systems (like the ones by Synology and co.) are easier to set up, so people are less likely to need the more ill-advised bits from the all-knowing Trash Heap. TrueNAS is a different beast since, as a NAS, it's obviously aimed at the enterprise level, while its application and VM hosting is more aimed at homelabs and SMBs. It's also way too fluid for my taste: I mean, they went from docker to kubernetes and back to docker on SCALE in the last few years, and from their old VM support to Incus and back.
Thankfully the main reason to run TrueNAS is the NAS functionality and that part is rock-solid and less prone to such erratic developments.

So I don't get what iXsystems is trying to achieve, and I prefer Proxmox's more conservative approach to overhauling architecture and introducing new features :)
 
Last edited:
  • Like
Reactions: flames and scyto
So I don't get what iXsystems is trying to achieve
you and me both

if there was a NAS (storage, not apps) 'addon' that had the same STORAGE feature set and robust, easy UI ZFS management, i would be all-in on proxmox. the community LXC scripts / "you can sort of use cockpit but not all of it" approach isn't something i want to bother with or maintain, and trust me, i tried several different approaches and in the end dragged myself kicking and screaming into virtualizing truenas on proxmox, sigh. i even bought a poolsman license (it's good but dangerous, because cockpit in general is dangerous on proxmox)

i actually got to do a feedback session with their product team; it was interesting what they asked. i think they are finally realizing they need to decide what they want to be when they grow up - the mess they made of vms / incus and the series of about-turns makes me wonder WTF is going on inside ix-systems / truenas and who is running the show.... seems to be the lunatics running the asylum at the moment, with no overall strategic BHAG for them all to align to.... or leaders and devs are at 'war' with each other to some degree....

tl;dr i am fine having proxmox as my hypervisor and truenas as my storage platform for shares/smb/backup targets/etc
 
  • Like
Reactions: flames