My (slow, but steady) progress into the world of Proxmox as my home server brings up my next question:
Now that I have my first container set up, how many services do I pack into it?
Is it best practice to separate everything (i.e. one app per LXC / VM), or does it make sense to combine several services?
For example: a container that serves as a fileserver, and a Jellyfin media server that serves files located "on" the fileserver.
Is there a benefit to keeping them separate (other than the general added security of compartmentalization, of course) or to putting them into the same container on different ports?
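(In case it helps to make the example concrete, what I pictured for the "separate" variant is one host directory bind-mounted into both containers, something like this - the container IDs and paths are made up:)

```
# Hypothetical: share one host directory into two separate LXCs via bind mounts
pct set 101 -mp0 /tank/media,mp=/mnt/media   # the fileserver container
pct set 102 -mp0 /tank/media,mp=/mnt/media   # the Jellyfin container
```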
You will get different (but valid!) answers to this question.
But my decision is: one service == one VM!
This especially means that I put each Docker container into its own single VM. Yes, this seems to waste resources. But I like it this way - and one Docker container cannot interfere with another one...
This heavily depends on your applications, use cases and hardware.
For example, running Docker inside LXCs is known to break from time to time (e.g. after updates), so any Docker application should be run in a VM. But if you have limited RAM, it makes sense to run all Docker containers from the same VM - they will just be less isolated.
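(For reference, the container features that Docker-in-LXC typically needs look roughly like this - container ID 101 is just an example, and it is exactly this nesting setup that tends to break after updates:)

```
# Sketch: enable the features Docker usually needs inside an LXC
# (nesting for the nested container runtime, keyctl for systemd/Docker keyrings)
pct set 101 -features nesting=1,keyctl=1
pct reboot 101   # features take effect after a container restart
```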
Any VM or LXC will need its own housekeeping (updates etc.), but will also provide better isolation and (depending on your setup) better network integration.
And if you happen to use a GPU for transcoding (e.g. for Immich or Jellyfin), it might turn out that you need to run them from the same VM (since most GPUs can only be assigned to one VM) or as LXCs (since LXCs can share a GPU).
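(As a sketch of the LXC route: an iGPU can be shared by bind-mounting /dev/dri into each container. These lines go into each container's config, e.g. /etc/pve/lxc/101.conf - major number 226 is the DRM subsystem, and unprivileged containers additionally need matching group/idmap permissions:)

```
# Sketch for /etc/pve/lxc/<vmid>.conf - repeat for every LXC that should transcode
lxc.cgroup2.devices.allow: c 226:* rwm                           # allow DRM devices
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir  # expose /dev/dri
```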
Thank you both for your replies.
So it looks like "pick your poison" based on hardware and requirements is the way to go, and there is no "one size fits all".
Which is totally OK for me, as I can then decide on a case-by-case basis what makes the most sense. And if that forces me to give each setup decision a little more thought, well, that can't be wrong anyway.
For LXC I would only run one application, to keep it small and as isolated as possible. As @UdoB mentioned, you can keep this setup with VMs as well. I personally don't run a VM for only one Docker container. One exception is really mission-critical containers/stacks like mailservers/groupware (mailcow, for example), where chances are high that complex stacks collide with other stacks, not only in terms of ports but also of resources. If one stack goes wild and allocates too many resources, other stacks will perform badly or become unstable.

I have a PVE testing cluster consisting of 7 hosts (besides my "production net") at my homelab, and most of my VMs are Debian or Alpine based, with a total of 414 containers in 22 VMs. If you keep your containerization straight and don't overallocate your VMs' resources, you should be on the safe side.
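(To make "don't overallocate" a bit more concrete: I give every VM and LXC an explicit cap, roughly like this - the IDs and values below are just examples:)

```
# Sketch: explicit resource caps so one runaway stack can't starve the others
qm set 201 --cores 2 --memory 4096              # VM: 2 cores, 4 GiB RAM
pct set 202 --cores 2 --memory 4096 --swap 512  # LXC: same idea, plus a swap limit
```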
It’s like other decisions about how safe and reliable your network should be. For example: do I want to virtualize my firewall/NGFW? If the PVE host fails, the whole network fails. But that can also be the case with a bare-metal FW. Do I put services/apps like DNS, DHCP, etc. in one LXC/VM? If that fails, same scenario as for the FW.
You won’t get THAT one right answer (like @UdoB said), as everybody has their own philosophy in questions of resource allocation, fallback strategy, etc.