[SOLVED] Docker in LXC no longer runs

zarathustra

Member
Hello,

Since my last update of the Proxmox host, Docker no longer runs in my LXCs. It still runs in a VM. I suspected the kernel and therefore manually selected the older one at boot, but unfortunately the problem remains. It affects several LXCs, and I even restored one from a backup; that version apparently has the same problem.

My environment:
Code:
pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-1
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-4
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1


Output of "journalctl -fu docker":
Code:
Feb 04 20:06:31 npm dockerd[400]: time="2021-02-04T20:06:31.583663093Z" level=warning msg="Enabling IP forwarding failed: open /proc/sys/net/ipv4/ip_forward: read-only file system"
Feb 04 20:06:31 npm dockerd[400]: failed to start daemon: Error initializing network controller: error obtaining controller instance: Enabling IP forwarding failed: open /proc/sys/net/ipv4/ip_forward: read-only file system
Feb 04 20:06:31 npm systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Feb 04 20:06:31 npm systemd[1]: docker.service: Failed with result 'exit-code'.
Feb 04 20:06:31 npm systemd[1]: Failed to start Docker Application Container Engine.
Feb 04 20:06:33 npm systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Feb 04 20:06:33 npm systemd[1]: Stopped Docker Application Container Engine.
Feb 04 20:06:33 npm systemd[1]: docker.service: Start request repeated too quickly.
Feb 04 20:06:33 npm systemd[1]: docker.service: Failed with result 'exit-code'.
Feb 04 20:06:33 npm systemd[1]: Failed to start Docker Application Container Engine.

The LXC runs Ubuntu 20.04.2 with all upgrades applied via apt update and apt upgrade.

Does anyone have an idea what the cause is? Everything ran fine until the last Proxmox update and the reboot afterwards. By the way, "/" is not read-only; I can write to my home directory without any problem, for example.
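
A more direct way to check is to look at how /proc/sys is mounted inside the container (generic commands, output omitted):

Code:
# Run inside the affected LXC: show how /proc/sys/net is mounted
mount | grep 'proc/sys'
# Reading works even on a read-only mount; writing is what fails for dockerd
cat /proc/sys/net/ipv4/ip_forward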
 
Since Docker only complains about forwarding, try enabling it on the host.

Code:
sysctl -w net.ipv4.ip_forward=1

If the container starts after that, add the entry to /etc/sysctl.conf to make it persistent.

Code:
cat << 'EOF' >> /etc/sysctl.conf
net.ipv4.ip_forward=1
EOF
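
To apply the new entry without a reboot and to verify the value, sysctl can reload the file (standard sysctl usage):

Code:
# Reload /etc/sysctl.conf and print the effective value
sysctl -p
sysctl net.ipv4.ip_forward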

I only run Docker in Debian LXCs here, and they all still work.


Edit: This is caused by the upgrade of lxc-pve:amd64 (4.0.3-1 → 4.0.6-1).

To fix it, downgrade:


Code:
apt install lxc-pve:amd64=4.0.3-1
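
To keep apt from pulling the newer version right back in, the package can be put on hold (standard apt usage; remember to remove the hold once a fixed version is released):

Code:
# Prevent the downgraded package from being upgraded automatically
apt-mark hold lxc-pve
# Later, when a fixed version is available:
apt-mark unhold lxc-pve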
 
Thank you very much!

I ran the command on the Proxmox host and that fixed the problem. In /etc/sysctl.conf I uncommented the corresponding commented-out line. Everything works again, even after a complete reboot of the Proxmox host.

I still don't know why this problem suddenly appeared, but the workaround helps me for now.
 
My Docker Debian LXC containers stopped working too; this has nothing to do with whether the LXC runs Ubuntu or Debian.
The workaround "sysctl -w net.ipv4.ip_forward=1" gets them running again.

So what exactly happened after the Proxmox update?
 
I've got the same issue after a new install of 6.3-3, but my 6.3-2 instance still works just fine. Here is my output:

Code:
root@docker:~# journalctl -fu docker
-- Logs begin at Sat 2021-02-06 03:46:19 UTC. --
Feb 06 16:55:29 docker systemd[1]: Failed to start Docker Application Container Engine.
Feb 06 16:55:31 docker systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Feb 06 16:55:31 docker systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Feb 06 16:55:31 docker systemd[1]: Stopped Docker Application Container Engine.
Feb 06 16:55:31 docker systemd[1]: docker.service: Start request repeated too quickly.
Feb 06 16:55:31 docker systemd[1]: docker.service: Failed with result 'exit-code'.
Feb 06 16:55:31 docker systemd[1]: Failed to start Docker Application Container Engine.
Feb 06 16:56:01 docker systemd[1]: docker.service: Start request repeated too quickly.
Feb 06 16:56:01 docker systemd[1]: docker.service: Failed with result 'exit-code'.
Feb 06 16:56:01 docker systemd[1]: Failed to start Docker Application Container Engine.
 
Updated to PVE 6.3-3 from 6.3-2; Docker does not work anymore.

Linux Docker 5.4.78-2-pve #1 SMP PVE 5.4.78-2 (Thu, 03 Dec 2020 14:26:17 +0100) x86_64 GNU/Linux


dockerd -D output
Code:
WARN[2021-02-06T17:02:19.517284672Z] Error while setting daemon root propagation, this is not generally critical but may cause some functionality to not work or fallback to less desirable behavior  dir=/var/lib/docker error="error writing file to signal mount cleanup on shutdown: open /var/run/docker/unmount-on-shutdown: no such file or directory"
DEBU[2021-02-06T17:02:19.517524980Z] Listener created for HTTP on unix (/var/run/docker.sock)
DEBU[2021-02-06T17:02:19.661262703Z] [graphdriver] priority list: [btrfs zfs overlay2 aufs overlay devicemapper vfs]
DEBU[2021-02-06T17:02:19.661338916Z] zfs command is not available: exec: "zfs": executable file not found in $PATH  storage-driver=zfs
ERRO[2021-02-06T17:02:19.661998906Z] 'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.  storage-driver=overlay2
ERRO[2021-02-06T17:02:19.662595290Z] AUFS was not found in /proc/filesystems       storage-driver=aufs
ERRO[2021-02-06T17:02:19.666007122Z] 'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.  storage-driver=overlay
DEBU[2021-02-06T17:02:19.666502572Z] Initialized graph driver vfs                
INFO[2021-02-06T17:02:19.673167739Z] Graph migration to content-addressability took 0.00 seconds
WARN[2021-02-06T17:02:19.673480166Z] Your kernel does not support cgroup rt period
WARN[2021-02-06T17:02:19.673502460Z] Your kernel does not support cgroup rt runtime
WARN[2021-02-06T17:02:19.673519252Z] Your kernel does not support cgroup blkio weight
WARN[2021-02-06T17:02:19.673532599Z] Your kernel does not support cgroup blkio weight_device
DEBU[2021-02-06T17:02:19.674760581Z] Option Experimental: false                  
DEBU[2021-02-06T17:02:19.674795194Z] Option DefaultDriver: bridge                
DEBU[2021-02-06T17:02:19.674810723Z] Option DefaultNetwork: bridge              
DEBU[2021-02-06T17:02:19.674826212Z] Network Control Plane MTU: 1500            
DEBU[2021-02-06T17:02:19.676037560Z] Fail to initialize firewalld: Failed to connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory, using raw iptables instead
DEBU[2021-02-06T17:02:19.682659921Z] /usr/sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION]
DEBU[2021-02-06T17:02:19.683970874Z] /usr/sbin/iptables, [--wait -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[2021-02-06T17:02:19.685446689Z] /usr/sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER]
DEBU[2021-02-06T17:02:19.686758921Z] /usr/sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[2021-02-06T17:02:19.688062871Z] /usr/sbin/iptables, [--wait -t nat -D PREROUTING]
DEBU[2021-02-06T17:02:19.689140551Z] /usr/sbin/iptables, [--wait -t nat -D OUTPUT]
DEBU[2021-02-06T17:02:19.690289646Z] /usr/sbin/iptables, [--wait -t nat -F DOCKER]
DEBU[2021-02-06T17:02:19.691497736Z] /usr/sbin/iptables, [--wait -t nat -X DOCKER]
DEBU[2021-02-06T17:02:19.692573093Z] /usr/sbin/iptables, [--wait -t filter -F DOCKER]
DEBU[2021-02-06T17:02:19.693740127Z] /usr/sbin/iptables, [--wait -t filter -X DOCKER]
DEBU[2021-02-06T17:02:19.694831175Z] /usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-1]
DEBU[2021-02-06T17:02:19.695977400Z] /usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-1]
DEBU[2021-02-06T17:02:19.697048537Z] /usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION-STAGE-2]
DEBU[2021-02-06T17:02:19.698363176Z] /usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION-STAGE-2]
DEBU[2021-02-06T17:02:19.699550914Z] /usr/sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION]
DEBU[2021-02-06T17:02:19.700673354Z] /usr/sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION]
DEBU[2021-02-06T17:02:19.701836970Z] /usr/sbin/iptables, [--wait -t nat -n -L DOCKER]
DEBU[2021-02-06T17:02:19.703046499Z] /usr/sbin/iptables, [--wait -t nat -N DOCKER]
DEBU[2021-02-06T17:02:19.704123592Z] /usr/sbin/iptables, [--wait -t filter -n -L DOCKER]
DEBU[2021-02-06T17:02:19.705229681Z] /usr/sbin/iptables, [--wait -t filter -N DOCKER]
DEBU[2021-02-06T17:02:19.706429239Z] /usr/sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-1]
DEBU[2021-02-06T17:02:19.707603975Z] /usr/sbin/iptables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-1]
DEBU[2021-02-06T17:02:19.708809440Z] /usr/sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION-STAGE-2]
DEBU[2021-02-06T17:02:19.710058761Z] /usr/sbin/iptables, [--wait -t filter -N DOCKER-ISOLATION-STAGE-2]
DEBU[2021-02-06T17:02:19.711179101Z] /usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN]
DEBU[2021-02-06T17:02:19.712634930Z] /usr/sbin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN]
DEBU[2021-02-06T17:02:19.713923600Z] /usr/sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -j RETURN]
DEBU[2021-02-06T17:02:19.715070936Z] /usr/sbin/iptables, [--wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN]
WARN[2021-02-06T17:02:19.716417831Z] Enabling IP forwarding failed: open /proc/sys/net/ipv4/ip_forward: read-only file system
DEBU[2021-02-06T17:02:19.716436244Z] daemon configured with a 15 seconds minimum shutdown timeout
DEBU[2021-02-06T17:02:19.716461496Z] start clean shutdown of all containers with a 15 seconds timeout...
DEBU[2021-02-06T17:02:19.716722339Z] Cleaning up old mountid : start.            
INFO[2021-02-06T17:02:19.716762829Z] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
DEBU[2021-02-06T17:02:19.716982633Z] Cleaning up old mountid : done.            
INFO[2021-02-06T17:02:19.717216143Z] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2021-02-06T17:02:19.717217003Z] stopping healthcheck following graceful shutdown  module=libcontainerd
DEBU[2021-02-06T17:02:19.717351187Z] received signal                               signal=terminated
INFO[2021-02-06T17:02:19.717579449Z] pickfirstBalancer: HandleSubConnStateChange: 0xc000163620, TRANSIENT_FAILURE  module=grpc
INFO[2021-02-06T17:02:19.717612311Z] pickfirstBalancer: HandleSubConnStateChange: 0xc000163620, CONNECTING  module=grpc
Error starting daemon: Error initializing network controller: error obtaining controller instance: Enabling IP forwarding failed: open /proc/sys/net/ipv4/ip_forward: read-only file system


pveversion -v output
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-1
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-4
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
Here is some more output from testing. I tried removing daemon.json and restarting with dockerd -D:

Code:
DEBU[2021-02-06T17:33:28.008998835Z] [graphdriver] priority list: [btrfs zfs overlay2 fuse-overlayfs aufs overlay devicemapper vfs]
DEBU[2021-02-06T17:33:28.009180897Z] zfs command is not available: exec: "zfs": executable file not found in $PATH  storage-driver=zfs
DEBU[2021-02-06T17:33:28.009415534Z] processing event stream  module=libcontainerd namespace=plugins.moby
ERRO[2021-02-06T17:33:28.015282093Z] failed to mount overlay: invalid argument  storage-driver=overlay2
ERRO[2021-02-06T17:33:28.015413809Z] exec: "fuse-overlayfs": executable file not found in $PATH  storage-driver=fuse-overlayfs
ERRO[2021-02-06T17:33:28.019433529Z] AUFS cannot be used in non-init user namespace  storage-driver=aufs
ERRO[2021-02-06T17:33:28.131088158Z] failed to mount overlay: invalid argument  storage-driver=overlay
DEBU[2021-02-06T17:33:28.131596090Z] Initialized graph driver vfs
DEBU[2021-02-06T17:33:28.131926725Z] No quota support for local volumes in /var/lib/docker/volumes: Filesystem does not support, or has not enabled quotas
WARN[2021-02-06T17:33:28.136222571Z] Your kernel does not support CPU realtime scheduler
WARN[2021-02-06T17:33:28.136251286Z] Your kernel does not support cgroup blkio weight
WARN[2021-02-06T17:33:28.136267312Z] Your kernel does not support cgroup blkio weight_device
DEBU[2021-02-06T17:33:28.137488670Z] Max Concurrent Downloads: 3
DEBU[2021-02-06T17:33:28.137516708Z] Max Concurrent Uploads: 5
DEBU[2021-02-06T17:33:28.137537506Z] Max Download Attempts: 5
INFO[2021-02-06T17:33:28.137585513Z] Loading containers: start.
 
Containers running on

Code:
# pveversion
pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve)

don't have this mount:

Code:
# mount | grep proc/sys/net
proc on /proc/sys/net type proc (rw,nosuid,nodev,noexec,relatime)

I found this mount point in a container on Proxmox 5.4.

Any idea how to create this mount point in Proxmox 6.3-3?
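
The only approach that comes to mind (untested, plain LXC config syntax rather than anything PVE-specific) would be to override the automatic proc mount in the container config:

Code:
# In /etc/pve/lxc/<vmid>.conf: mount /proc and /sys fully read-write
# instead of the default mixed/read-only handling.
# Untested on PVE 6.3-3, and it weakens container isolation.
lxc.mount.auto: proc:rw sys:rw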
 
I did it as described and my Docker containers work again, but my Pi-hole still doesn't. Does anyone know what else could be causing this?
 
My Pi-hole in Docker on an unprivileged LXC still runs on the current Proxmox. However, I didn't install PVE directly; I installed Debian and then the PVE packages on top.
 
There have also been problems with WireGuard in LXC since the update. Too bad no dev has commented on this here :-(
 
I was having the same issue, and it got solved by adding the lxc.mount.auto option.
My container configuration at /etc/pve/lxc/123.conf is the default (unconfined) plus these four lines at the bottom:

Code:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto:

I can confirm that it works in my setup:
- Hypervisor: PVE version 6.3-3, kernel 5.4.78-2-pve
- Guest: Debian 10 (LXC), Docker version 20.10.3
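
For the changed lxc.* options to take effect, the container has to be fully stopped and started again (using the example ID 123 from above):

Code:
# Apply the new LXC options with a full stop/start of the container
pct stop 123
pct start 123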
 
Regarding the lxc.mount.auto workaround above: I still have a problem with Pi-hole, even without Docker.
 
OK, I just upgraded and got the same errors.

It is caused by lxc-pve 4.0.6-1.

Downgrading helps:
Code:
apt install lxc-pve:amd64=4.0.3-1
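
The installed version can be checked afterwards with dpkg (standard dpkg usage):

Code:
# Should report version 4.0.3-1 after the downgrade
dpkg-query -W -f='${Package} ${Version}\n' lxc-pve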
 
Strange, everything still runs here with "lxc-pve: 4.0.6-1", even after a reboot. No problems so far with my two Docker LXCs.
 
