Migrating server with 30 Docker images to PVE

Jan 20, 2022
I have a server which is currently running ~30 Docker containers — services like Nextcloud, Akkoma, Grafana, Keycloak, etc., with their associated databases — on 11 TB of BTRFS RAID storage.
Now, this server has plenty of resources left, so I am wondering whether I should migrate the current setup to PVE for easier administration of future VMs and containers.

What would be best practice when it comes to the existing Docker containers? One VM? Multiple VMs with the storage mapped into them? Or rather keep the containers at the PVE/Debian host level, similar to the current setup?

Or maybe this is a bad idea in the first place? :)
 
I am not a (heavy) Docker user, so my understanding is surely biased:
  • LXC (and Docker on a host) share a single kernel --> containers are not really isolated from each other
  • VMs run their own kernel and see only the virtualized hardware --> as well isolated as possible (without using dedicated hardware)
So if there are independent customers, or services with different security requirements, a VM is recommended.

If I were forced to run a few dozen Docker containers, I would go for several VMs and put them in there. I might also look into tools like k3s for another level of abstraction. There are downsides of course; the worst is that you cannot back up or snapshot a single Docker container inside such a VM from PVE — for that you would need additional tools inside the VM. The extreme approach would be "one container --> one VM", which seems absolutely strange...

Just my 2 €¢, from a 99%-VM user.
 

Good input, thanks! From an isolation point of view, I think LXC would surely be no worse than today, because right now there is no isolation at all, given that all 30 containers are running on the same physical server.
 
As with all things regarding setups, the answer is: It depends (on your needs).

I've personally been wondering about the same thing, as I have a Debian host at home which runs a few Docker containers. All nice and fine, but PVE does make things incredibly easy and, as @t0mz said, VMs provide better isolation. Docker also does funky things with iptables and can sometimes become rather unpredictable in my opinion, but that's beside the point.

Here's a non-exhaustive list of ideas, maybe these may help you brainstorm:
  • Do you have certain groups of containers that can exist independently of one another? E.g. Nextcloud with a Postgres DB and a separate Nextcloud container for cron jobs. You could throw these groups into separate VMs and give them the resources that they need.
  • If you have a reverse proxy like Traefik or Caddy, you'll need to figure out how to set it up in such a way that it "knows" about the containers in your VMs (especially if your reverse proxy itself is containerized).
    • You can also have multiple reverse proxies, e.g. one for each group of containers where multiple web-facing services are sitting. In my opinion, it's not that much of an additional overhead and might make things more convenient, but I haven't actually tried that yet.
  • One thing to note is that PVE integrates seamlessly with PBS, so backing up your containers' storage could become much easier and more convenient.
  • Even slapping all containers into one bigger VM would allow you to make backups rather easily.
  • For what it's worth, maybe it helps if you made a little diagram of your current Docker architecture, with all of your containers, networks, etc.? Maybe that would give you some ideas.
  • If you have some spare hardware, you could also just try out PVE + Docker on that; see "how it feels" and experiment a little. That way you don't get trapped in analysis paralysis.
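To make the "groups of containers" idea from the first bullet concrete, here is a minimal Compose sketch of one such group — Nextcloud, its Postgres DB, and a cron container — that could live together in one VM. The image tags, volume names, and credentials are placeholders, not taken from the original poster's setup:

```yaml
# Hypothetical per-VM group: Nextcloud + its Postgres DB + a cron container.
# Everything this group needs lives in one VM, so the whole VM can be
# backed up or snapshotted as a unit from PVE.
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: changeme   # placeholder, use a real secret
    volumes:
      - db-data:/var/lib/postgresql/data

  app:
    image: nextcloud:28
    depends_on:
      - db
    environment:
      POSTGRES_HOST: db
    ports:
      - "80:80"
    volumes:
      - nc-data:/var/www/html

  cron:
    image: nextcloud:28
    entrypoint: /cron.sh   # the official image's cron entrypoint
    depends_on:
      - db
    volumes:
      - nc-data:/var/www/html

volumes:
  db-data:
  nc-data:
```

Each such group gets its own VM with only the resources it needs, and anything else (Grafana, Keycloak, ...) forms its own group elsewhere.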
So, I think that virtualizing Docker containers comes with a lot of benefits, but it really all boils down to how you want to virtualize them all. There's not really a definitive answer here.
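And to make the reverse-proxy bullet concrete: with a single front proxy, it simply forwards by the VMs' internal addresses instead of container names. A minimal Caddyfile sketch — all hostnames and IPs here are made up for illustration:

```
# Caddyfile sketch: one front proxy forwarding to services in different VMs.
# Hostnames and internal IPs are hypothetical.
cloud.example.com {
    reverse_proxy 10.0.0.11:80    # VM running the Nextcloud group
}

grafana.example.com {
    reverse_proxy 10.0.0.12:3000  # VM running Grafana
}
```

The proxy no longer auto-discovers containers via the Docker socket, so each new service means one more block here — which is also why the "one proxy per group" variant can be attractive.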

Hope all of these little points help you in some way though - if anyone has any other ideas or opinions, I'd be curious to hear them! Maybe there's some special secret way I've been oblivious about all along ;)
 
Thanks a lot Max, all good points.
The logical grouping is already happening due to using docker compose, and yes, there's a reverse proxy (nginx) in play, but that one may move to OPNsense as part of this realignment.
I got the management side (backups/updates/etc.) of the docker services and data pretty much under control, so the main benefit of using PVE I'd expect to come from easier handling of virtual machines and containers.

But the most important point for me so far is that no one has come back saying "never install Docker next to PVE because…" or "never install Docker in a VM because…", which means I could just change the OS from Arch to Debian, install PVE, and run all scripts/services on Debian exactly the way I did before. Given that my filesystems are all btrfs, I can probably even dual boot for as long as needed.
And if I later decide to move some things into VMs or containers, that will be an optimization rather than a must-do.
 
I would refrain from running Docker side by side with PVE, since it can certainly lead to issues (for instance the iptables behavior Max mentioned, but also other things). I would strongly suggest running Docker in a separate VM. Furthermore, I would also not run Docker in an LXC container; even though there are tutorials out there on how to do it, people usually run into problems sooner or later.
 
The only recommended way of running Docker on PVE is inside a VM:
If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox Qemu VM.
https://pve.proxmox.com/wiki/Linux_Container

  • Docker directly on the PVE host can mess up the host's network, up to making it unreachable.
  • Docker in an LXC can lead to all sorts of problems, if not now then possibly later: problems with AppArmor, with the Docker storage driver, and what not...
Search the forum/net for the many reported problems in this regard.
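For reference, the recommended "Docker in a VM" route can be sketched with the PVE CLI. The VM ID, sizes, bridge, and storage/ISO names below are examples only — adjust them to your own setup:

```shell
# Sketch: create a dedicated Debian VM for Docker on a PVE host.
# VM ID 200, storage "local-lvm", and the ISO name are assumptions.
qm create 200 \
  --name docker-vm \
  --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:64 \
  --cdrom local:iso/debian-12.5.0-amd64-netinst.iso \
  --ostype l26
qm start 200

# After installing Debian + Docker inside the VM, the whole Docker host
# can be backed up as one unit (or via PBS):
vzdump 200 --mode snapshot --storage local
```

The host's networking stays untouched by Docker this way, and the VM is backed up, snapshotted, and migrated like any other PVE guest.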
 
