Docker on LXC or VM?

n1nj4888

I'm considering running Docker (and some Docker containers) on top of Proxmox and wondered whether running Docker inside an LXC container or a VM would be better. I noticed that the following page says running Docker in a VM is preferred, but it gives no reason why this would be the case:

https://pve.proxmox.com/wiki/Linux_Container

"If you want to run micro-containers (with docker, rkt, …), it is best to run them inside a VM."

Also, a more general (and likely noob) LXC question: if LXC uses the same kernel as the Proxmox host, I'm not sure I understand how there can be Ubuntu 18.04.1 (no 18.04.2 template yet?) and Ubuntu 18.10 templates, when those releases use the 4.18.0-16-generic (HWE) kernel. Wouldn't there be issues with the older Proxmox kernel (4.15.18-10-pve) running these OSes?

Thanks!
 
I noticed that the following page says running Docker in a VM is preferred, but it gives no reason why this would be the case:
Docker uses kernel features for encapsulating containers (just like LXC), so nesting it is not that easy, but it is doable when you activate the 'nesting' feature (Container -> Options -> Features).
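For reference, the same thing can be done from the PVE host shell with pct; a minimal sketch, assuming a container with ID 101 (the ID is a placeholder; keyctl is often needed as well for Docker in unprivileged containers):

    # enable nesting (and keyctl) on container 101, then restart it so the change takes effect
    pct set 101 --features nesting=1,keyctl=1
    pct stop 101 && pct start 101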

I'm not sure I understand how there can be Ubuntu 18.04.1 (no 18.04.2 template yet?) and Ubuntu 18.10 templates, when those releases use the 4.18.0-16-generic (HWE) kernel.
those templates do not load their own kernel but use the host kernel, so you get an Ubuntu 18.04 userspace running on the PVE kernel
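For example, uname inside such a container reports the host's PVE kernel rather than an Ubuntu HWE kernel (the container hostname is just a placeholder):

    root@ubuntu1804-ct:~# uname -r
    4.15.18-10-pve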

Wouldn't there be issues with the older Proxmox kernel (4.15.18-10-pve) running these OSes?
in general, no, there should not be any issues
 
Docker uses kernel features for encapsulating containers (just like LXC), so nesting it is not that easy, but it is doable when you activate the 'nesting' feature (Container -> Options -> Features).

Thanks for this - is there any documentation on this nesting feature and what it does / how it works? I'm interested since I would have thought it standard to run Docker inside an LXC container, with the Docker instance (and its containers) being transparent to the host, i.e. as if they were in a VM.
 
I'm interested since I would have thought it standard to run Docker inside an LXC container, with the Docker instance (and its containers) being transparent to the host, i.e. as if they were in a VM.

No, LXC is lightweight virtualization. If you're going to run Docker in a production setup, use a VM. It is also best to use a Docker-centric distribution like RancherOS for orchestration. You will have a hard time getting Rancher to work in LXC, if it works at all.

Docker is a PaaS system that runs best on an IaaS solution like PVE if you use VMs, i.e. "real" virtualization.
 
No, LXC is lightweight virtualization. If you're going to run Docker in a production setup, use a VM. It is also best to use a Docker-centric distribution like RancherOS for orchestration. You will have a hard time getting Rancher to work in LXC, if it works at all.

Docker is a PaaS system that runs best on an IaaS solution like PVE if you use VMs, i.e. "real" virtualization.

So far, in my small-scale tests of production setups, RancherOS has been more of a pain than an asset. You would be better off using your favorite Linux OS (Debian or CentOS, for example), then just add your target docker version on top of it. I mostly use Ansible to administer my VMs, and RancherOS is not exactly well equipped for such automated administration. And the main point of Docker is to have a boring and stable OS on which you can install whatever container you want. On Proxmox 5.4 with qemu-kvm, RancherOS does not really work out of the box: the default install's Proxmox cloud-init does not work, I had to craft a qemu-guest-agent docker image, and after 6+ hours I was not able to fix RancherOS's sluggish startup (10+ minutes), where any Debian Stretch with Docker boots in seconds. Oh, and the RancherOS documentation is lacking.
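For what it's worth, a minimal sketch of that setup on a stock Debian VM, assuming the distribution's own docker.io package is recent enough for your needs:

    # Proxmox guest integration, then Docker from the standard repos
    # (the VM also needs the agent enabled on the PVE side: qm set <vmid> --agent enabled=1)
    apt-get update
    apt-get install -y qemu-guest-agent docker.io
    systemctl enable --now qemu-guest-agent docker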
 
You would be better off using your favorite Linux OS (Debian or CentOS, for example), then just add your target docker version on top of it.

That's totally true, if you're deep into Linux. I also do it this way for myself.

RancherOS has advantages for deployment with docker-machine, so it is fit for mass deployment by people who are not so familiar with Linux and Docker but want good tool integration.
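As an illustration of that docker-machine workflow against an already-booted machine, a minimal sketch using the generic driver (the IP address, SSH user and key are placeholders):

    docker-machine create \
      --driver generic \
      --generic-ip-address 192.0.2.10 \
      --generic-ssh-user rancher \
      --generic-ssh-key ~/.ssh/id_rsa \
      docker-node-1
    # point the local docker CLI at the new node
    eval "$(docker-machine env docker-node-1)"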
 
So far, in my small-scale tests of production setups, RancherOS has been more of a pain than an asset. You would be better off using your favorite Linux OS (Debian or CentOS, for example), then just add your target docker version on top of it. I mostly use Ansible to administer my VMs, and RancherOS is not exactly well equipped for such automated administration.

I'm interested to hear about your experiences. I've been sandboxing K8s clusters using RancherOS for the building blocks; what challenges have you run into? What are the limitations of using Ansible on the end nodes? (Not at that stage yet, but as you can preload the keys at deployment, I wasn't expecting any challenges at all.)
 
I'm interested to hear about your experiences. I've been sandboxing K8s clusters using RancherOS for the building blocks; what challenges have you run into? What are the limitations of using Ansible on the end nodes? (Not at that stage yet, but as you can preload the keys at deployment, I wasn't expecting any challenges at all.)
Proxmox/QEMU integration is quite bad. Like I said, to get the IPs of the VM displayed in Proxmox, I had to deploy a docker container with --network host. But then, shutting down the VM became impossible, as a docker container cannot trigger a host shutdown. So either I cannot easily shut down my VM, or I have to manually assign fixed IPs to the VMs, which is unusable. And frankly, the RancherOS documentation is terrible, so I did not try to make a magical, console-like container to run on the system docker (it seems to be possible, but complicated). Then Ansible defaulted to not working (it needs a Python interpreter in the console), unless you switch to a Debian or Ubuntu console or do some crazy Ansible stuff (running the ansible command in a container, which I tried, and which was awfully complicated). And I had to tweak the cloud-init configuration in the installation to get cloud-init to work, but I had to reverse engineer the OpenStack qcow2 image to figure out the proper configuration. Then I wanted to add a second virtual hard drive to the VM for Docker volume storage, and I hit a wall, as it is just awful to automate with Ansible.
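For reference, the two usual Ansible workarounds for a target without a stock Python are pointing Ansible at whatever interpreter the guest does have (e.g. after switching to the Debian console), or falling back to the raw module; a minimal sketch with placeholder host names and addresses (rancher is the default RancherOS user):

    # inventory
    [rancheros]
    node1 ansible_host=192.0.2.20 ansible_user=rancher ansible_python_interpreter=/usr/bin/python3

    # the raw module needs no Python at all on the target
    ansible rancheros -i inventory -m raw -a "uptime"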
And now that k3OS is out, I think the RancherOS project will soon become obsolete, once high availability is fixed.
But it seems like a widespread issue with Docker-first OSes:
  • CentOS Atomic has no real documentation about how you can tweak the OS. I spent 2 hours reading obsolete tutorials about non-existent, renamed, or deprecated packages.
  • The free CoreOS had almost zero documentation, and it has become even worse since it was bought by Red Hat. There are many happy proofs of concept around on the web, but it looks like paying customers either keep quiet or do not exist.
  • Flatcar Linux (a CoreOS fork): I don't know. I still cannot see how I could integrate it with qemu-guest-agent, and it looks like I am going to hit the same wall as with RancherOS.
So, so far, what I did is create a Debian KVM template with qemu-guest-agent. It just works, can be easily administered and updated, and can be tweaked to be light (deleting man pages, for example), like the debian:stretch-slim Docker image. And I could run hundreds of these with no real effort, or tighten up security with some more work.
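As a rough sketch of building such a template on the PVE host, following the standard qm/cloud-init workflow (the VM ID 9000, the local-lvm storage and the image file name are placeholders):

    qm create 9000 --name debian-docker-tmpl --memory 2048 --net0 virtio,bridge=vmbr0
    qm importdisk 9000 debian-9-openstack-amd64.qcow2 local-lvm
    qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
    qm set 9000 --ide2 local-lvm:cloudinit --boot c --bootdisk scsi0 --serial0 socket --agent enabled=1
    qm template 9000

Cloning the template and setting ciuser, sshkeys and ipconfig0 via qm set then gives a ready-to-use VM in seconds.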
So far, I still do not see the point in those Docker-first OSes.
 
Proxmox/QEMU integration is quite bad. Like I said, to get the IPs of the VM displayed in Proxmox, I had to deploy a docker container with --network host.
Ahhh, I understand. Use this instead of the vanilla image: https://github.com/rancher/os/releases/download/v1.5.1/rancheros-proxmoxve.iso

It has Proxmox-compatible QEMU tools preinstalled.

Then Ansible defaulted to not working (it needs a Python interpreter in the console), unless you switch to a Debian or Ubuntu console or do some crazy Ansible stuff (running the ansible command in a container). And I had to tweak the cloud-init configuration in the installation to get cloud-init to work, but I had to reverse engineer the OpenStack qcow2 image to figure out the proper configuration.
I imagine installing a Python binary would be simple enough, but you're right that there is no integrated package management. As my intention is to use kubectl and not Ansible, this doesn't pose a problem for me, but it's probably solvable. The base OS for the Rancher console is Debian/Ubuntu.

And now that k3OS is out, I think the RancherOS project will soon become obsolete, once high availability is fixed.
True enough, I am planning to re-roll my lab with it. Since I already had the nodes pre-join a K8s cluster via cloud-config, it will serve the same purpose and just simplify it one step. The biggest challenge was to have unique names assigned at deployment, but I gather this will no longer be an issue.
So far, I still do not see the point in those Docker-first OSes.
smaller, faster, effectively stateless, require no upkeep.
 
This one link is a godsend, thank you!
smaller, faster, effectively stateless, require no upkeep.
About the "no upkeep" part, that is not quite true: you need to purge old Docker images and obsolete volumes, so you do accumulate a stateful part that has to be cleaned. As I have yet to use Kubernetes, perhaps this feature is already provided there.
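That upkeep mostly boils down to an occasional prune pass, e.g.:

    # reclaim space from stopped containers, unused networks, dangling images and unused volumes
    docker system prune --all --volumes --force
    # or, more conservatively, only drop images unused for a week
    docker image prune --all --force --filter "until=168h"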
And the bad part is that there is no real community support in case anything goes wrong. It is a niche OS.

I will probably evaluate docker-machine-driver-proxmox-ve once I have audited its code; it could be a neat solution on a dedicated LVM volume. Though I do not know if docker-machine is really production-ready.

FYI, I do use Docker on my small production servers, both on OpenStack and on Proxmox. I did not use Kubernetes because it was too expensive for small-scale installations before K3s happened. These are not as mission-critical as my beloved VMs; they host various services on Docker, though I like these services to be rock-solid and easily restored in case anything goes wrong. And there is actually no maintenance, as Debian unattended upgrades just work.
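For the unattended upgrades part, the setup on Debian is just (a minimal sketch; tuning it to security updates only is left out):

    apt-get install -y unattended-upgrades
    cat > /etc/apt/apt.conf.d/20auto-upgrades <<'EOF'
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";
    EOF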
 
Though I do not know if docker-machine is really production-ready.

You can use it to do bare-metal installations, but unfortunately it requires RancherOS in the patched version @alexskysilk provided. We also tried the Boot2Docker base image, but that also required patching, which was refused upstream. I don't see the point in providing maintenance for stuff I do not use, and especially in providing install images for third-party OSes.

I have to say, for my K8s test deployments I also ran K8s on Debian Stretch, because of the easy integration into a preexisting automation environment. Auto-provisioning via netboot in PVE is also very easy, so most of the benefits of these Docker-first OSes are moot in a fully automated Debian environment.
 
So far, in my small-scale tests of production setups, you would be better off using your favorite Linux OS (Debian or CentOS, for example), then just add your target docker version on top of it.

Is Ubuntu also fine compared to Debian?
Or are there any advantages to using one of them for a small production environment (XMPP server, Fediverse server, etc.)?
 
