Thanks for the setup. Indeed this works just fine, I tried it now.
However, now Docker Swarm doesn't seem to work regardless of what I do to convince it.
The same setup above works perfectly in Docker without Swarm, but as soon as I initiate Swarm, with the new interfaces it creates, iptables seems to go bananas and port publishing fails.
Will investigate further, but no luck so far.
Thanks again!
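Not from the original posts, but a small diagnostic sketch for exactly this symptom: when Swarm publishes a port, the DNAT rule should land in the DOCKER-INGRESS chain of the nat table; if it's missing there, the daemon's iptables integration is what's broken. `check_ingress_port` and port 8080 below are my own placeholder names:

```shell
#!/bin/sh
# Check whether a Swarm-published port actually made it into iptables.
# DOCKER-INGRESS is the nat-table chain Swarm uses for ingress routing.
check_ingress_port() {
    port="$1"
    if iptables -t nat -L DOCKER-INGRESS -n 2>/dev/null | grep -q "dpt:$port"; then
        echo "port $port published"
    else
        echo "port $port NOT in DOCKER-INGRESS"
    fi
}

# Example (run as root on the Swarm node):
# check_ingress_port 8080
```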
# less /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "iptables": true,
  "ip-masq": true
}
iptables-save > /var/lib/iptables-docker-default.rules
# less /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "iptables": false,
  "ip-masq": true
}
# cat /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStartPost=/bin/sh -c "iptables-restore < /var/lib/iptables-docker-default.rules"
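The three steps above (snapshot the rules Docker generated while "iptables": true was active, switch the daemon to "iptables": false, restore the snapshot on every daemon start) can be sketched as one helper. `write_docker_iptables_config` and the parameterized paths are my own, not part of the original posts:

```shell
#!/bin/sh
# Sketch of the workflow above. Paths are parameters so nothing here is
# authoritative; on a real host you'd pass /etc/docker,
# /etc/systemd/system/docker.service.d and the rules file shown above.
write_docker_iptables_config() {
    docker_etc="$1"; systemd_dir="$2"; rules="$3"

    mkdir -p "$docker_etc" "$systemd_dir"

    # Step 2: tell Docker to stop managing iptables itself.
    cat > "$docker_etc/daemon.json" <<EOF
{
  "storage-driver": "overlay2",
  "iptables": false,
  "ip-masq": true
}
EOF

    # Step 3: re-apply the saved snapshot on every daemon start.
    cat > "$systemd_dir/override.conf" <<EOF
[Service]
ExecStartPost=/bin/sh -c "iptables-restore < $rules"
EOF
}

# Step 1 must be run once, as root, while Docker still manages iptables:
# iptables-save > /var/lib/iptables-docker-default.rules
# write_docker_iptables_config /etc/docker \
#     /etc/systemd/system/docker.service.d \
#     /var/lib/iptables-docker-default.rules
```

Remember to run `systemctl daemon-reload` afterwards so the override is picked up.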
"storage-driver": "overlay2",
With this option set, Docker won't even start. I removed it, as I don't think it has any impact on the iptables rules anyway.
However, I followed the rest of the instructions step by step and it still doesn't work.
It seems like there's some LXC/Proxmox magic going on that doesn't allow Docker to run without specific rules...
Still no luck getting Docker Swarm to work in LXC. It works just fine in a VM, however, so it's not that big of an issue, just a minor inconvenience, as this SHOULD work.
Thanks!
On the Proxmox host, some modules need to be preloaded:
# cat /etc/modules
bonding
ip_vs
ip_vs_dh
ip_vs_ftp
ip_vs_lblc
ip_vs_lblcr
ip_vs_lc
ip_vs_nq
ip_vs_rr
ip_vs_sed
ip_vs_sh
ip_vs_wlc
ip_vs_wrr
xfrm_user
#for Docker
overlay
nf_nat
br_netfilter
xt_conntrack
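A side note, not from the thread: /etc/modules is only read at boot, so after editing it you can load the list immediately with a small loop (`load_listed_modules` is a hypothetical helper name, not a standard tool):

```shell
#!/bin/sh
# Load every module listed in an /etc/modules-style file right now,
# instead of waiting for the next reboot.
load_listed_modules() {
    modfile="$1"
    # Drop comments and blank lines, then modprobe each word that remains.
    sed -e 's/#.*//' -e '/^[[:space:]]*$/d' "$modfile" | while read -r line; do
        for mod in $line; do
            modprobe "$mod" || echo "warning: could not load $mod" >&2
        done
    done
}

# Run as root on the PVE host:
# load_listed_modules /etc/modules
```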
Perhaps you need to enable "keyctl" and "Nesting" in the container config (Options --> Features), or add the following params to the config:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: proc:rw sys:rw
Thanks for the details. I tried them, but I cannot seem to get these two to load on PVE so that the LXC container sees them and passes the requirements on to Docker. Docker still says that the kernel is missing these features.
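As a side note (not from the posts), the Options --> Features toggle just writes one line into the container's config file; a sketch assuming CTID 101, which is a placeholder:

```
# /etc/pve/lxc/101.conf -- 101 is a placeholder CTID
features: keyctl=1,nesting=1
```

The same can be done from the PVE shell with `pct set 101 --features keyctl=1,nesting=1`.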
There's no need to load br_netfilter on PVE kernels, because:
$ grep 'BRIDGE_NETFILTER' /boot/config-$(uname -r)
CONFIG_BRIDGE_NETFILTER=y
I encountered this issue a few times before with br_netfilter and still couldn't find a fix for it. modinfo on PVE doesn't show anything, and modprobe is missing these modules...
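To make the distinction above concrete: a feature built into the kernel (=y) never shows up in lsmod, modinfo, or modprobe, which is exactly the confusion described. A small sketch (`kconfig_state` is my own helper name, not a standard tool):

```shell
#!/bin/sh
# Report whether a kernel config symbol is built-in (=y), a loadable
# module (=m), or absent, by reading the kernel config file.
kconfig_state() {
    # $1 = CONFIG_ symbol, $2 = config file (defaults to the running kernel's)
    cfg="${2:-/boot/config-$(uname -r)}"
    case "$(grep -E "^$1=" "$cfg" | cut -d= -f2)" in
        y) echo "builtin" ;;
        m) echo "module" ;;
        *) echo "not set" ;;
    esac
}

# Example, matching the grep in the post above:
# kconfig_state CONFIG_BRIDGE_NETFILTER
```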
EDIT: defining a cloud-init template of a VM would be the best option to easily and quickly deploy multiple VMs under a Swarm or Kubernetes cluster.
Or use a driver for Docker and Proxmox, if the damn programmer has time to finish and clean it up.
What do we do now that RancherOS is dead? k3os? Fedora CoreOS? Ubuntu LXD? ... This landscape is so disappointing and yet we desperately need Docker support on Proxmox.
As I struggled to create a docker machine driver myself, I came up with another idea which I'm using right now to easily bootstrap docker in KVMs: https://gitlab.com/morph027/pve-cloud-init-creator
Probably the concept will work for @LnxBil driver too if he finds some time
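For what it's worth, the manual version of such a cloud-init template looks roughly like this; VMID 9000, storage "local-lvm", and the image file name are placeholders of mine, not anything prescribed by the linked project:

```shell
#!/bin/sh
# Sketch: turn a downloaded cloud image into a reusable Proxmox template
# that clones can boot from with per-VM cloud-init settings.
make_ci_template() {
    vmid="$1"; storage="$2"; image="$3"

    # Create an empty VM shell, then import the cloud image as its disk.
    qm create "$vmid" --name ci-template --memory 2048 --net0 virtio,bridge=vmbr0
    qm importdisk "$vmid" "$image" "$storage"
    qm set "$vmid" --scsihw virtio-scsi-pci --scsi0 "$storage:vm-$vmid-disk-0"

    # Attach the cloud-init drive and make the imported disk bootable.
    qm set "$vmid" --ide2 "$storage:cloudinit" --boot order=scsi0 --serial0 socket

    # Freeze it as a template; future VMs are cheap clones of it.
    qm template "$vmid"
}

# Run as root on the PVE host, e.g.:
# make_ci_template 9000 local-lvm debian-12-genericcloud-amd64.qcow2
```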
What do we do now that RancherOS is dead?
If I understood correctly, since Docker joined the CNCF they split the project into a bunch of projects; Docker itself will focus on the daemon part, while networking configuration and other parameters will be delegated to orchestrators such as Kubernetes.
Docker definitely democratized container technology and put it on the map, but in the end, Docker is a fork of LXC, which was itself inspired by BSD jails.
It's basically like using Ubuntu instead of Debian: some people consider it more useful because packages are more up to date and PPAs make things easier, while others think Ubuntu is for noobs only.
But one thing is sure: when you try to please everyone, you often end up pleasing no one. Maybe Proxmox should simply drop LXC, or make it optional at installation and/or as a service, and focus only on KVM. Whoever wants container technology could then run their technology of choice inside a VM.
Not sure that that's accurate.
It is obviously related to what I mentioned before, e.g. that Ubuntu is a fork of Debian.
Not sure what you mean.
I think LXC support in Proxmox was a wise choice and I’ve heard of people using it in production.
Of course, I'm unable to find the original source of my information, but at least I'm not the only one saying it:
View attachment 19875
it is obviously related to what I mentioned before, such as Ubuntu is a fork of Debian.
OCI containers are here to stay.
I agree with you on this; while Docker is more popular, in the end LXC is easier since it runs almost as its own machine.
Also, it is possible to run Docker inside an LXC container: https://discuss.linuxcontainers.org...in-lxc-unprivileged-container-in-proxmox/3828
It really depends on your use case. I use LXC containers for various applications all the time with great success; they cannot replace a VM for all purposes, but then that's not the scope of containers... Much of the internet runs microservices on containers these days.
Possible and good are two different things: running Docker in LXC in production is possible, but not good (just look at the forums).