Docker support in Proxmox

Norman Uittenbogaart

Active Member
Feb 28, 2012
150
5
38
Rotterdam, Netherlands
What for? Clear Containers already use QEMU, so why don't you simply use a VM instead?
There is a definite plus to Docker compared to normal container/VM technology.
Compared to normal package management, you can take a Docker image, move it to any server, start a new container from it, and it's up and running with the latest versions.
Reinstalling a server is just a matter of taking the container down, pulling a new version, and starting it back up.
That is not as easily done with a VM or a plain container.
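That update workflow looks roughly like this (a sketch; "myapp" and the "app-data" volume are made-up names, not anything from this thread):

```shell
# Pull the newer image, recreate the container, keep state in a named volume.
docker pull myapp:latest
docker stop myapp && docker rm myapp
docker run -d --name myapp -v app-data:/data myapp:latest
```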
 

manu

Proxmox Staff Member
Mar 3, 2015
806
64
28
As LnxBill said previously, PVE provides IaaS, which means the end units you manage with PVE are *virtualized OSs*, with a state.
With Docker, the end unit you manage is *a stateless application*, which is a different scope.
LXC and Docker are both called *containers* because they use the same Linux kernel features (cgroups and namespaces), but for a different purpose.

Thus the proper way to get the power of both is to install a Docker swarm on top of a group of VMs.
Then, using PVE features (cluster manager, Ceph, etc.), your Docker hosts are highly available, and with the Docker daemon running inside your VMs, your stateless applications are easily deployable.

In fact, I presume most if not all of the big Docker deployments take place on top of VMs, which is the approach recommended by
Docker themselves with their Docker Machine: https://docs.docker.com/machine/overview/
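Bootstrapping such a swarm on top of a few PVE VMs is only a handful of commands; a minimal sketch (the IP address is a placeholder):

```shell
# On the first VM (the manager):
docker swarm init --advertise-addr 10.0.0.11
# It prints a join command with a token; run that on each additional VM:
docker swarm join --token <token> 10.0.0.11:2377
# Then deploy a stateless service replicated across the swarm:
docker service create --name web --replicas 3 -p 80:80 nginx
```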
 
  • Like
Reactions: Sidiox and mhubig

dlasher

Member
Mar 23, 2011
108
6
18
Interesting addition to this discussion: https://github.com/gotoz/runq/

---------------

runq
runq is a hypervisor-based Docker runtime, based on runc, for running regular Docker images in a lightweight KVM/QEMU virtual machine. The focus is on solving real problems, not on the number of features.

Key differences to other hypervisor-based runtimes:

  • minimalistic design, small code base
  • no modification to existing Docker tools (dockerd, containerd, runc...)
  • coexistence of runq containers and regular runc containers
  • no extra state outside of Docker (no libvirt, no changes to /var/run/...)
  • simple init daemon, no systemd, no busybox
  • no custom guest kernel or custom qemu needed
  • runs on x86_64 and s390x
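Per the project's README, a runq container is started like any other, just with a different runtime (this assumes runq has already been registered with dockerd in /etc/docker/daemon.json):

```shell
# Run a regular image inside a lightweight KVM/QEMU VM via runq:
docker run --runtime runq -ti busybox sh
```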
 

JOduMonT

Active Member
Jan 20, 2016
35
2
28
Bangkok
jdumont.consulting
While Docker obviously democratized containerization, saying
containerization is replacing the hypervisor
is like saying processes will replace applications.

1. Docker can be nested inside LXC and KVM.
2. Proxmox proposes unprivileged containers; while this is also possible with Docker, most users simply forget to isolate their containers via UID remapping.
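For reference, the remap mentioned in point 2 is Docker's user-namespace remapping (userns-remap); a minimal sketch of the configuration (the dockremap user and the ID range shown are the documented defaults):

```shell
# /etc/docker/daemon.json
# {
#   "userns-remap": "default"
# }
# Docker then maps container root onto a subordinate host UID/GID range,
# taken from /etc/subuid and /etc/subgid, e.g.:
# dockremap:100000:65536
```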

I would love to see Proxmox going deeper with LXC and proposing LXD instead of KVM.
 
  • Like
Reactions: guletz

guletz

Renowned Member
Apr 19, 2017
1,271
190
68
Brasov, Romania
Hi,

LXC, LXD and VMs/KVM have different use cases. Yes, many of us would like to have all of this under PMX, but I do not see LXD (or anything else) removing the KVM landscape entirely in the next 5 years!
 
  • Like
Reactions: JOduMonT

JOduMonT

Active Member
Jan 20, 2016
35
2
28
Bangkok
jdumont.consulting
Now with the nesting feature it is very easy to run Docker inside an unprivileged LXC container, which makes the host a little bit more secure than simply running Docker directly on the host itself.
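The same can be done from the PVE host's CLI instead of the GUI; something like this (container ID 100 is a placeholder):

```shell
# Enable nesting (and keyctl, which Docker wants) on an unprivileged CT:
pct set 100 --features nesting=1,keyctl=1
# Restart the container so the features take effect:
pct stop 100 && pct start 100
```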
 

MadalinC

New Member
Jan 28, 2020
14
0
1
Even with nesting and keyctl on, I cannot seem to get Docker in LXC to properly register ports.

It's really annoying, as with LXC we can have much smaller overhead (especially RAM usage), and with containers being unprivileged by default, it's great for isolating some of the Docker socket issues.

Did anyone manage to actually get Docker to work (i.e. publish ports, 80/tcp for example) inside LXC?
 
Sep 8, 2019
9
6
3
60
Did anyone manage to actually get Docker to work (i.e. publish ports, 80/tcp for example) inside LXC?
Yes. I tried this right now. Something along the lines of this:

* Click on CephFS filesystem (you can also use local and probably others).

* Hit Templates.

* Add Debian Buster 10.

* Hit Create CT.

* Create the container using the Debian Buster template added above.

* Under the new LXC, go to Options --> Features and check "keyctl" and "Nesting".

* Boot up that LXC container and get a shell via SSH.

* Then run this inside the new LXC:

Code:
# update image
apt update
apt dist-upgrade
reboot
# Boot back up and login
# Install upstream Docker, don't use Debian's:
apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt update
apt install docker-ce docker-ce-cli containerd.io
# Install a docker container, whatever you're using, I happen to want to use tensorflow:
docker pull tensorflow/serving
# Run docker container. Note this will complain you don't have a model if you use tensorflow above, which is ok.
docker run -p 80:80 -t tensorflow/serving
# Enter docker container. Use another window since the other is running above:
# (Note: I'm sure there's a proper way to do this, I don't use Docker much):
docker exec -u 0 -it $(docker ps -q | head -n 1) bash
# Install toys inside docker:
apt install apache2 net-tools
echo foo >/var/www/html/index.html
* Then, from your workstation (or the Docker host), hit port 80 of your LXC's IP and it should serve the Apache page from inside the Docker container.

Happy hacking,

-Jeff
 
  • Like
Reactions: MadalinC

MadalinC

New Member
Jan 28, 2020
14
0
1
Yes. I tried this right now. Something along the lines of this: [...]
Thanks for the setup. Indeed this works just fine, I tried it now.

However, now Docker Swarm doesn't seem to work regardless of what I do to convince it.

The same setup above works perfectly in Docker without Swarm, but as soon as I initiate Swarm, with the new interfaces it creates, iptables seems to go bananas and fails to publish ports. :(

Will investigate further, but no luck so far.
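One thing I'll check first (an assumption on my part, not something confirmed yet): Swarm's overlay networking and ingress load balancing rely on kernel modules such as overlay and ip_vs, which an unprivileged LXC cannot load itself, so they have to be present on the PVE host:

```shell
# On the PVE host, not inside the LXC:
lsmod | grep -E 'overlay|ip_vs'
# Load anything missing (add to /etc/modules to persist across reboots):
modprobe -a overlay ip_vs
```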

Thanks again!
 

Seed

Member
Oct 18, 2019
74
9
8
120
I personally run Docker Swarm, Kubernetes, and even some local Portainer instances across a Proxmox cluster using KVM hosts. Paired with a KVM provisioning process that automatically configures a host on cloning, it makes everything very easy and fun to manage. I wouldn't want it any other way, really. You can migrate VMs as needed without interrupting the container clusters, and it provides more flexibility in terms of using orchestration services. Really the best setup I've had as far as ease of use and maintainability.
 

MadalinC

New Member
Jan 28, 2020
14
0
1
I personally run Docker Swarm, Kubernetes, and even some local Portainer instances across a Proxmox cluster using KVM hosts. Paired with a KVM provisioning process that automatically configures a host on cloning, it makes everything very easy and fun to manage. I wouldn't want it any other way, really. You can migrate VMs as needed without interrupting the container clusters, and it provides more flexibility in terms of using orchestration services. Really the best setup I've had as far as ease of use and maintainability.
Indeed, this is what I'm currently running (3 VMs in Swarm mode); however, the extra RAM overhead is a bit too much, and it would be nice to have lower overall RAM usage with LXC, since this is just for my home lab. For true production purposes I use VMs too. :)
 

Seed

Member
Oct 18, 2019
74
9
8
120
Indeed, this is what I'm currently running (3 VMs in Swarm mode); however, the extra RAM overhead is a bit too much, and it would be nice to have lower overall RAM usage with LXC, since this is just for my home lab. For true production purposes I use VMs too. :)
There isn't much overhead with KVM, IMO, and whatever Proxmox could bake in wouldn't match the OS-level management, or the more true-to-form container management, that the real OS world offers.
 
  • Like
Reactions: MadalinC

Seed

Member
Oct 18, 2019
74
9
8
120
What do you mean by this? Do you have Kubernetes automatically creating KVMs in Proxmox? If so, how?
Not quite that. I have Proxmox automatically creating KVM instances by cloning, where all you have to do is run the cluster-join components. This makes it easy to provision a group of KVM VMs to form a cluster across the nodes. If you want help doing this I can assist, but you'll need some pretty high-grade network gear, I think, as it's all based on VLANs and DHCP/PXE-boot processes. It's a lot of effort up front, but if you're into using lots of containers with various orchestration services and testing/building/learning, it takes this mundane part out of it. You can wipe and rebuild a suite of VMs in 20 minutes or so.
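The cloning step can be driven from the PVE CLI and scripted; a rough sketch (template VMID 9000 and the node IDs/names are placeholders):

```shell
# Full-clone a template VM into three cluster nodes and start them:
for id in 201 202 203; do
    qm clone 9000 "$id" --name "node-$id" --full
    qm start "$id"
done
```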
 
