How to start a Docker image within Proxmox?

ikus060

Member
Nov 18, 2021
Hello,


We have been using Proxmox VE for quite a while, with VMs and LXC containers on 5 physical servers.


Too many times, we find ourselves creating a VM or an LXC container for a single Docker application. Mostly, we are creating an additional Linux server for the sole purpose of installing Docker in it and starting our Docker image.


This process is starting to cost us in maintenance: about 50% of our VMs/LXCs are placeholders for Docker! The cost comes from keeping those servers up to date with patches and other housekeeping, simply to run a single Docker image.


As a user, I would love a more manageable approach where we could simply run the Docker image directly within Proxmox, without the need to install an additional OS.

Question:

Is it possible to start a Docker image within LXC without the need for an additional OS?

Maybe using Podman or LXC in an exotic way?

Maybe by converting the Docker image into an LXC-compatible template?

What solutions are available?
 
Docker is not supported, and AFAIK there are no plans to support it on Proxmox hosts.

One option would be to set up a few VMs and deploy all your Docker containers in them using an orchestrator like Rancher or Kubernetes.
 
Hi there.
Fortunately, there is a production-ready alternative to the Docker problem: Podman.

https://podman.io/

You can run Docker containers with minimal to no changes.
Once you get used to it, it is a far superior platform and technology suite to 'Docker'™,
and you will likely find it more convenient to abandon Docker compatibility almost entirely.
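A quick illustration of that compatibility (assumes Podman is installed; the image is just an example): the familiar Docker verbs work unchanged, including pulls from Docker Hub when the image name is fully qualified.

```shell
# Same verbs as the docker CLI; fully qualified names pull from Docker Hub.
podman pull docker.io/library/alpine:latest
podman run --rm docker.io/library/alpine:latest echo "hello from podman"

# Many setups simply alias the old command:
alias docker=podman
```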

Don't waste your time with the obsolete builds shipped by distros.

There are ready-to-go builds available here:

https://build.opensuse.org/package/show/devel:kubic:libcontainers:stable/podman
https://build.opensuse.org/repositories/devel:kubic:libcontainers:stable/podman

Hope that helps!
;-)
 
Hi @VictorSTS

Thanks for your reply. I know Proxmox does not support the Docker daemon, and that's not really what I'm looking for either. I'm looking for a way to start a Docker image: similar to LXC, where we start a full OS, I'm looking for a way to start a single application within LXC.

Again, the goal is to reduce the number of OSes to manage. Creating more VMs to install Rancher or k8s goes in the opposite direction of what we are looking for.

I know the Proxmox team is reluctant to support anything related to Docker. I'm not sure I understand the rationale behind it. Containers are here to stay, and various ways of spinning them up exist: LXC is one; the Docker daemon and Podman are others.


The best would be some kind of wrapper for LXC that starts a single Docker image with Podman. That would be fantastic.
 
Not a Docker user yet, unfortunately... But since each dockerized app runs isolated from the rest, why not run lots of containers in the same VM, and orchestrate all of them across a few VMs that act as Docker servers? Even if the apps are unrelated, that seems possible.

Yes, you would need a few extra VMs for the orchestrator, but then you could move from 1 VM/LXC == 1 dockerized app to 1 VM == n dockerized apps.

I'm probably missing something here as my docker experience nears zero, so bear with me please :)
 
@VictorSTS

> Yes, you would need a few extra VMs for the orchestrator, but then you could move from 1 VM/LXC == 1 dockerized app to 1 VM == n dockerized apps.

We started that way, and soon realized that the number of dockerized apps keeps growing, and so do the VMs to host them. This causes issues in terms of flexibility: replication, migration, backup and networking are all defined at the VM level. E.g. when migrating a VM, we need to migrate everything at once. It's far from ideal.
 
@auser

Thanks for your reply. I've used Podman a bit, but I'm not a power user.

Do you know a way to integrate it with LXC? Maybe I could create an LXC template with only Podman?
 
Hi @ikus060
So I made the choice to move away from LXC for hosting containerised workloads.
I did consider LXD at length, but it is not integrated into PVE and does not 'just work' with the large and growing variety of dockerized workloads.

For me, the advantages of managing your containerised workload _within_ a VM are varied but compelling.

Isolation is better in a VM, and the network <-> VM interaction can be managed transparently (as perceived by both the container manager and the containerised application).

Similarly, I would say this to you: it _is_ a security improvement to move away from the Docker daemon to a unix-style 'process running in a namespace'.

Architecturally, this is similar to the appreciation that the KVM hypervisor implementation integrated into the Linux kernel, where each VM runs as a native unix process that you can interact with in the usual way,
is fundamentally superior to the Xen model, where each VM is its own island and the 'host kernel' is entirely divorced from it.

It sounds from your description that what mostly concerns you is ease of Administration.
One of the great advantages of moving to Podman is the flexibility that it brings.
The networking uses the open-source CNI, and the runtime can be selected as either runc or crun.
It is a great stepping-stone platform if you wish to migrate workloads to k8s or OpenShift in the future.

You can use the systems that you are already familiar with for Process Management.

You can write systemd unit files to manage the lifecycle of your services,
and Podman with systemd can handle cgroups v2 resource management largely transparently for you (if you want).
Of course, you could administer everything with Ansible if you want! ;-)
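As a sketch of that systemd integration (container and unit names here are hypothetical; assumes Podman is installed), a unit file can be generated straight from a running container:

```shell
# Run a container, then emit a systemd unit that recreates it on boot.
podman run -d --name myapp docker.io/library/nginx:alpine
podman generate systemd --new --name myapp \
    > /etc/systemd/system/container-myapp.service

# Let systemd own the lifecycle from here on.
systemctl daemon-reload
systemctl enable --now container-myapp.service
```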

Hope that helps!
;-)
 
What about using LXC to run an OCI image (namely, a Docker image)?

According to this article, LXC supports OCI:

https://www.buzzwrd.me/index.php/2021/03/10/creating-lxc-containers-from-docker-and-oci-images/

> Like every container technology and their dog, LXC nowadays supports OCI images. The Open Container Initiative was created to manage open-sourced parts of Docker. They basically donated the image specification, the low-level container runtime, and a few other key system components. OCI images are nowadays the de facto way of handling image compatibility between container runtime platforms.

Is it possible to get this started from Proxmox?
 
I tried the command line `lxc-create name -t oci -- --url docker://alpine:latest`, but it complains about the missing `skopeo`, which is only available on Debian Bullseye.
 
Can anyone confirm whether the `lxc-create -t oci` command imports the container into Proxmox? As far as I know, Proxmox doesn't use the LXC CLI; it has its own container framework based on the kernel's LXC features rather than the existing userspace tools.
 
Tested it myself: the `lxc-create $name -t oci -- --url $docker_url` command doesn't import the container into Proxmox, but it does create a rootfs at `/var/lib/lxc/$name/rootfs`. You need to install `skopeo`, `umoci` and `jq` for that to work, all of which are available from the standard Debian repos. From there, you can create the LXC template in Proxmox's default CT template directory with `tar -cvzf /var/lib/vz/template/cache/$name.tar.gz /var/lib/lxc/$name/rootfs/*`. Tested with `docker://debian:latest` and `docker://sugoidogo/expanse`, which both failed to start.
Bash:
root@proxmox:~# pct create 1000 /var/lib/vz/template/cache/expanse.tar.gz
extracting archive '/var/lib/vz/template/cache/expanse.tar.gz'
Total bytes read: 1176627200 (1.1GiB, 49MiB/s)
Architecture detection failed: open '/bin/sh' failed: No such file or directory

Falling back to amd64.
Use `pct set VMID --arch ARCH` to change.
/etc/os-release file not found and autodetection failed, falling back to 'unmanaged'
root@proxmox:~# pct start 1000
sync_wait: 34 An error occurred in another process (expected sequence number 7)
__lxc_start: 2074 Failed to spawn container "107"
TASK ERROR: startup for container '107' failed
I don't know enough about any of this to know why it failed, but maybe someone else could shed some light on this.
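For reference, the steps described above (install the OCI tooling, unpack the image with `lxc-create`, pack the rootfs as a CT template) can be collected into one sketch. This is untested glue, not a supported workflow, and as reported above the resulting containers may still fail to start:

```shell
#!/bin/sh
# Sketch: turn an OCI/Docker image into a Proxmox CT template archive.
# NAME and IMAGE are placeholders; run as root on the PVE host.
set -eu
NAME="$1"    # e.g. alpine-oci
IMAGE="$2"   # e.g. docker://alpine:latest

# Dependencies for the lxc 'oci' template (standard Debian repos)
apt-get install -y skopeo umoci jq

# Unpack the image; the rootfs lands in /var/lib/lxc/$NAME/rootfs
lxc-create "$NAME" -t oci -- --url "$IMAGE"

# Pack the rootfs into Proxmox's default CT template directory
tar -czf "/var/lib/vz/template/cache/$NAME.tar.gz" \
    -C "/var/lib/lxc/$NAME/rootfs" .
```

From there, `pct create` can attempt the import, as in the output above.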
 
Since I have not been using LXC since my post above, almost a year ago, I can't really help you with that.

I would ask you, though, to consider the reason _why_ you wish to experiment like this on your Proxmox host node.

The multiple major advantages of running your container management in a VM are compelling.

If you want to manage LXC with LXD, it will be simple to test, e.g. using an Ubuntu Cloud Image template.

If you want to manage Linux containers with Podman 4, test it on Rocky or Alma.

If you want to manage your OCI containers with version control rather than the command line,
you can use compose files with Docker Compose or Podman Compose.
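As a minimal sketch of that approach (service and image names are hypothetical), the same container definition can live in a version-controlled compose file:

```shell
# Write a minimal compose file; both Docker Compose and Podman Compose read it.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  myapp:
    image: docker.io/library/nginx:alpine
    ports:
      - "8080:80"
    restart: unless-stopped
EOF

podman-compose up -d    # or: docker compose up -d
```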

If you wish to automate OCI containers via an API, then your VM environment can provide that too:

https://www.redhat.com/sysadmin/podman-compose-docker-compose

Hope that is of some help.
:)
 
I prefer LXC, and I have to deal with software whose developers only publish Docker containers. I wasn't asking you specifically for help, and none of the scenarios you describe apply.
 
I didn't go through the process of creating a container template. Instead:
Bash:
echo "create a proxmox managed CT volume"
pvesm alloc ${PVESM_STORAGE:-local} ${CTID} vm-${CTID}-disk-0 1G

echo "Format it"
mkfs.ext4 -E root_owner="100000:100000" /dev/pve/vm-${CTID}-disk-0

echo "Copy the rootfs content into it"
mkdir /mnt/$CTID-rootfs && mount /dev/pve/vm-${CTID}-disk-0 /mnt/$CTID-rootfs && cp -rp /var/lib/lxc/${CTID}/rootfs/* /mnt/$CTID-rootfs;

echo "Unprivileged container; so remap like a barbarian"
chown -R 100000:100000 /mnt/$CTID-rootfs/*;

echo "create an 'entry' for proxmox"
echo "Yup, the container will appear in the WebUI"
cat <<EOT > /etc/pve/lxc/$CTID.conf
#Service imaginary
arch: amd64
cores: 2
cpulimit: 2
features: nesting=1
hostname: imaginary
memory: 1024
net0: name=vlan10,bridge=vmbr1,hwaddr=02:01:01:00:10:21,ip=10.10.2.21/24,tag=10,type=veth
onboot: 1
ostype: debian
rootfs: vm-disks:vm-${CTID}-disk-0,size=1G
searchdomain: home.lan
swap: 512
unprivileged: 1
EOT

At this point the container starts, but it has no network connectivity, and the container init is `/sbin/init` rather than the entrypoint defined in the Dockerfile. So I created a little script that sets up the network interfaces and runs what the Dockerfile intended, then changed the entrypoint in the Proxmox CT configuration:
Bash:
pct enter $CTID
cat <<EOT > /usr/local/bin/entrypoint.sh
#!/bin/sh
ip addr add 10.10.2.21/24 dev vlan10;
ip link set vlan10 up;
/usr/local/bin/imaginary -return-size -max-allowed-resolution 222.2;
EOT
chown 100000:100000 /usr/local/bin/entrypoint.sh;
chmod a+x /usr/local/bin/entrypoint.sh;
exit

echo "Reconfigure the CT entrypoint"
echo "lxc.init.cmd: /usr/local/bin/entrypoint.sh" | tee --append /etc/pve/lxc/$CTID.conf > /dev/null
 
mount /dev/pve/vm-${CTID}-disk-0 /mnt
In a default PVE setup, /mnt is the base of all other mounts, and you can get problems if you mount something there via the PVE GUI. Better to use a subfolder like /mnt/mystrangeselfcontainercreationtemp.

echo "Unprivileged container; so remap like a barbarian"
chown -R 100000:100000 /mnt/*;
You're not remapping, you're forcing everything to be owned by root. I don't know of any system in which this would be correct: there are so many default users with proper ownership defined, and you're changing it all.
 
In default PVE, the /mnt is a base of all other mounts and you can get problems if you mount something via the PVE GUI.
Right. Fixed.

You're not remapping, you're forcing everything to be owned by root. I don't know of any system in which this would be correct: there are so many default users with proper ownership defined, and you're changing it all.
This is quite true in a Docker container. But you can still do things properly if you feel like it.
Bash:
#!/bin/bash

# Not tested
function update_uid_gid() {
  local root_uid=$1
  local root_gid=$2
  local rootfs_path=$3
  find "${rootfs_path}" -print0 | while IFS= read -r -d '' path
  do
    if [[ "${path}" == "${rootfs_path}"* ]]; then
      uid=$(stat -c %u "${path}")
      gid=$(stat -c %g "${path}")
      # Only shift ids that are still below the unprivileged base offset
      if [ "${uid}" -lt "${root_uid}" ]; then
        chown -h "$((uid + root_uid))" "${path}"
        echo "Changed UID from '${uid}' to '$((uid + root_uid))' for file '${path}'"
      fi
      if [ "${gid}" -lt "${root_gid}" ]; then
        chown -h ":$((gid + root_gid))" "${path}"
        echo "Changed GID from '${gid}' to '$((gid + root_gid))' for file '${path}'"
      fi
    else
      echo "Bad path: '${path}'" 1>&2
    fi
  done
}
Credits
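The core of that remap is just an offset applied to ids below the unprivileged base (100000 by default for Proxmox unprivileged CTs). A minimal, standalone sketch of the arithmetic:

```shell
#!/bin/sh
# Map a host id into the unprivileged container's shifted range.
# Ids already at or above the base are assumed shifted and left alone.
BASE=100000

map_id() {
    if [ "$1" -lt "$BASE" ]; then
        echo $(( $1 + BASE ))   # e.g. 0 -> 100000, 33 -> 100033
    else
        echo "$1"
    fi
}
```

So uid 33 (www-data on Debian) becomes 100033 on the host, which the container still sees as 33.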
 
