Proxmox 7 + Ubuntu LXC + Docker - Error ALWAYS

EdzioEd

New Member
Oct 22, 2021
So, I've been following 4-5 different tutorials, for either Debian or Ubuntu, and the end result is always the same:

Code:
Process: 9210 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)


I don't know what else to do.
Here's the main error from journalctl:
Code:
Oct 22 14:08:46 docker systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Oct 22 14:08:46 docker systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 22 14:08:46 docker systemd[1]: Failed to start Docker Application Container Engine.
Oct 22 14:08:49 docker systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Oct 22 14:08:49 docker systemd[1]: Stopped Docker Application Container Engine.
Oct 22 14:08:49 docker systemd[1]: Starting Docker Application Container Engine...
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.147280440Z" level=info msg="Starting up"
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.148512330Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.148531835Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.148550053Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.148563589Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.149604568Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.149627433Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.149644287Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.149653754Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.210134547Z" level=error msg="failed to mount overlay: no such device" storage-driver=overlay2
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.210194448Z" level=error msg="exec: \"fuse-overlayfs\": executable file not found in $PATH" storage-driver=fuse-overlayfs
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.211651626Z" level=error msg="AUFS cannot be used in non-init user namespace" storage-driver=aufs
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.285881461Z" level=error msg="failed to mount overlay: no such device" storage-driver=overlay
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.288060372Z" level=info msg="Loading containers: start."
Oct 22 14:08:49 docker dockerd[9277]: time="2021-10-22T14:08:49.365780778Z" level=warning msg="Running iptables --wait -t nat -L -n failed with message: `iptables v1.8.5 (nf_tables): Could not fetch rule set generation id: Invalid argument`, error: exit status 4"
Oct 22 14:08:51 docker dockerd[9277]: time="2021-10-22T14:08:51.042641924Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Oct 22 14:08:51 docker dockerd[9277]: time="2021-10-22T14:08:51.042924768Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Oct 22 14:08:51 docker dockerd[9277]: failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.8.5 (nf_tables): Could not fetch rule set generation id: Invalid argument
Oct 22 14:08:51 docker dockerd[9277]:  (exit status 4)


And here's my systemctl status output:
Code:
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2021-10-22 14:08:53 UTC; 1min 0s ago
TriggeredBy: * docker.socket
Docs: https://docs.docker.com
Process: 9277 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 9277 (code=exited, status=1/FAILURE)
CPU: 110ms

Oct 22 14:08:53 docker systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Oct 22 14:08:53 docker systemd[1]: Stopped Docker Application Container Engine.
Oct 22 14:08:53 docker systemd[1]: docker.service: Start request repeated too quickly.
Oct 22 14:08:53 docker systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 22 14:08:53 docker systemd[1]: Failed to start Docker Application Container Engine.

The container is a basic one, created from an Ubuntu 20.10 template. Any ideas? Thanks.
 
1. Shut down your LXC container.

2. On the host, uncomment the following line in the "/etc/sysctl.conf" file:
#net.ipv4.ip_forward=1 -> net.ipv4.ip_forward=1

3. Reload sysctl:
sysctl --system

4. Start your LXC container (a consolidated sketch of these steps follows below).
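Putting the host-side steps together, a minimal sketch (CT ID 100 is only a placeholder, use your own):

Code:
# on the Proxmox host; 100 is a placeholder CT ID
pct shutdown 100

# uncomment/enable IPv4 forwarding persistently
sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

# reload all sysctl settings
sysctl --system

pct start 100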
 
Tried the ip_forward change above. The error remains, but now the journalctl output is different in the last part.


Code:
-- The job identifier is 543.
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.078931634Z" level=info msg="Starting up"
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.080242758Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.080266387Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.080290090Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.080299393Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.081284428Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.081304361Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.081328883Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.081345013Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.165796998Z" level=error msg="failed to mount overlay: no such device" storage-driver=overlay2
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.165875350Z" level=error msg="exec: \"fuse-overlayfs\": executable file not found in $PATH" storage-driver=fuse-overlayfs
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.166597214Z" level=error msg="AUFS cannot be used in non-init user namespace" storage-driver=aufs
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.322275165Z" level=error msg="failed to mount overlay: no such device" storage-driver=overlay
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.325042495Z" level=info msg="Loading containers: start."
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.469758380Z" level=warning msg="Running iptables --wait -t nat -L -n failed with message: `iptables v1.8.2 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)\nPerhaps iptables or your kernel needs to be upgraded.`, error: exit status 3"
Oct 22 20:52:24 Docker dockerd[1236]: time="2021-10-22T20:52:24.813985478Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Oct 22 20:52:24 Docker dockerd[1236]: failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables --wait -t nat -N DOCKER: iptables v1.8.2 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Oct 22 20:52:24 Docker dockerd[1236]: Perhaps iptables or your kernel needs to be upgraded.
Oct 22 20:52:24 Docker dockerd[1236]:  (exit status 3)
Oct 22 20:52:24 Docker systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
 
Try the following commands on the host (perhaps the iptable_nat module is missing):
Code:
modprobe iptable_nat
echo 'iptable_nat' >> /etc/modules
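Not part of the original suggestion, but a quick way to verify the module is loaded before restarting the CT:

Code:
# on the Proxmox host
lsmod | grep iptable_nat      # should list the module once loaded
# inside the CT, the nat table should then resolve again:
iptables -t nat -L -n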
 
Code:
Oct 22 20:52:23 Docker dockerd[1236]: time="2021-10-22T20:52:23.165875350Z" level=error msg="exec: \"fuse-overlayfs\": executable file not found in $PATH" storage-driver=fuse-overlayfs
Sounds like you may need to enable the fuse, nesting and keyctl features under "YourLXC -> Options -> Features"? At least I needed to enable keyctl, nesting and net.ipv4.ip_forward=1 to get Docker working on my Debian LXC.
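For reference, a sketch of where those features live in the container config (100 is a placeholder CT ID); the same can be toggled in the GUI or with pct, and takes effect on the next container start:

Code:
# /etc/pve/lxc/100.conf
features: fuse=1,keyctl=1,nesting=1

# or from the host shell:
pct set 100 --features fuse=1,keyctl=1,nesting=1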
 
Similar problem here. After upgrading to Proxmox 7, Docker in a CT no longer starts. I find that with Docker problems, running dockerd without any options gives the most usable debugging output:

Code:
root@docker ~# /usr/sbin/dockerd
INFO[2021-10-24T11:58:17.515590860Z] libcontainerd: started new docker-containerd process  pid=732
INFO[2021-10-24T11:58:17.515819980Z] parsed scheme: "unix"                         module=grpc
INFO[2021-10-24T11:58:17.515850244Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-10-24T11:58:17.515912421Z] ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}]  module=grpc
INFO[2021-10-24T11:58:17.515949636Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-10-24T11:58:17.516022614Z] pickfirstBalancer: HandleSubConnStateChange: 0xc00084c6d0, CONNECTING  module=grpc
INFO[2021-10-24T11:58:17.536228152Z] starting containerd                           revision=9754871865f7fe2f4e74d43e2fc7ccd237edcbce version=18.09.1
INFO[2021-10-24T11:58:17.536711052Z] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2021-10-24T11:58:17.536765658Z] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
WARN[2021-10-24T11:58:17.537018583Z] failed to load plugin io.containerd.snapshotter.v1.btrfs  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
INFO[2021-10-24T11:58:17.537060371Z] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
WARN[2021-10-24T11:58:17.538090536Z] failed to load plugin io.containerd.snapshotter.v1.aufs  error="aufs is not supported"
INFO[2021-10-24T11:58:17.538121956Z] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2021-10-24T11:58:17.538176223Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2021-10-24T11:58:17.538420700Z] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
WARN[2021-10-24T11:58:17.538711526Z] failed to load plugin io.containerd.snapshotter.v1.zfs  error="exec: \"zfs\": executable file not found in $PATH: \"zfs zfs get -Hp all poolz/vmdata/subvol-252-disk-0\" => "
INFO[2021-10-24T11:58:17.538750850Z] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2021-10-24T11:58:17.538781526Z] could not use snapshotter btrfs in metadata plugin  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
WARN[2021-10-24T11:58:17.538803555Z] could not use snapshotter aufs in metadata plugin  error="aufs is not supported"
WARN[2021-10-24T11:58:17.538819058Z] could not use snapshotter zfs in metadata plugin  error="exec: \"zfs\": executable file not found in $PATH: \"zfs zfs get -Hp all poolz/vmdata/subvol-252-disk-0\" => "
INFO[2021-10-24T11:58:17.539126295Z] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2021-10-24T11:58:17.539160651Z] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2021-10-24T11:58:17.539210865Z] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2021-10-24T11:58:17.539239548Z] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2021-10-24T11:58:17.539280580Z] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2021-10-24T11:58:17.539327931Z] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2021-10-24T11:58:17.539373223Z] loading plugin "io.containerd.service.v1.leases-service"...  type=io.containerd.service.v1
INFO[2021-10-24T11:58:17.539419597Z] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2021-10-24T11:58:17.539464400Z] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2021-10-24T11:58:17.539510731Z] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2021-10-24T11:58:17.539679391Z] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
INFO[2021-10-24T11:58:17.539771004Z] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2021-10-24T11:58:17.540292434Z] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2021-10-24T11:58:17.540334124Z] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2021-10-24T11:58:17.540394312Z] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540438579Z] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540499564Z] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540545852Z] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540589869Z] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540632233Z] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540664116Z] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540701746Z] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540747522Z] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2021-10-24T11:58:17.540848817Z] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540897092Z] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540924048Z] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.540945655Z] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2021-10-24T11:58:17.541221859Z] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2021-10-24T11:58:17.541350019Z] serving...                                    address=/var/run/docker/containerd/containerd.sock
INFO[2021-10-24T11:58:17.541394883Z] containerd successfully booted in 0.005794s
INFO[2021-10-24T11:58:17.546758160Z] pickfirstBalancer: HandleSubConnStateChange: 0xc00084c6d0, READY  module=grpc
INFO[2021-10-24T11:58:17.568397152Z] parsed scheme: "unix"                         module=grpc
INFO[2021-10-24T11:58:17.568452385Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-10-24T11:58:17.568534047Z] parsed scheme: "unix"                         module=grpc
INFO[2021-10-24T11:58:17.568554970Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-10-24T11:58:17.568643832Z] ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}]  module=grpc
INFO[2021-10-24T11:58:17.568704892Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-10-24T11:58:17.568730305Z] ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}]  module=grpc
INFO[2021-10-24T11:58:17.568784216Z] pickfirstBalancer: HandleSubConnStateChange: 0xc0008afc50, CONNECTING  module=grpc
INFO[2021-10-24T11:58:17.568789360Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-10-24T11:58:17.568958982Z] pickfirstBalancer: HandleSubConnStateChange: 0xc0008afd20, CONNECTING  module=grpc
INFO[2021-10-24T11:58:17.569018776Z] blockingPicker: the picked transport is not ready, loop back to repick  module=grpc
INFO[2021-10-24T11:58:17.569076764Z] pickfirstBalancer: HandleSubConnStateChange: 0xc0008afc50, READY  module=grpc
ERRO[2021-10-24T11:58:17.570978452Z] 'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.  storage-driver=overlay2
INFO[2021-10-24T11:58:17.572435163Z] pickfirstBalancer: HandleSubConnStateChange: 0xc0008afd20, READY  module=grpc
ERRO[2021-10-24T11:58:17.573146635Z] AUFS was not found in /proc/filesystems       storage-driver=aufs
ERRO[2021-10-24T11:58:17.574155105Z] 'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.  storage-driver=overlay
INFO[2021-10-24T11:58:17.730700771Z] Graph migration to content-addressability took 0.00 seconds
WARN[2021-10-24T11:58:17.730936713Z] Your kernel does not support cgroup memory limit
WARN[2021-10-24T11:58:17.730955400Z] Unable to find cpu cgroup in mounts
WARN[2021-10-24T11:58:17.730967427Z] Unable to find blkio cgroup in mounts
WARN[2021-10-24T11:58:17.730978412Z] Unable to find cpuset cgroup in mounts
WARN[2021-10-24T11:58:17.731022757Z] mountpoint for pids not found
INFO[2021-10-24T11:58:17.731659703Z] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2021-10-24T11:58:17.731714094Z] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2021-10-24T11:58:17.732146012Z] pickfirstBalancer: HandleSubConnStateChange: 0xc0008afd20, TRANSIENT_FAILURE  module=grpc
INFO[2021-10-24T11:58:17.732196669Z] pickfirstBalancer: HandleSubConnStateChange: 0xc0008afd20, CONNECTING  module=grpc
Error starting daemon: Devices cgroup isn't mounted

Edit (1) ---

So in the end it seems the problem has something to do with the new cgroup (v2) setup. A post on https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=939539 ends by saying:

Code:
It appears that even though cgroupfs-mount is required by the docker.io
package, either the cgroupfs-mount package or else the docker.io package
is not setting things up properly with the cgroup system mounts and they
are not getting mounted at boot time.

This is already from 2019, so presumably there is a fix somewhere, but this is a bit above my pay grade. Hopefully someone wiser will chime in with a solution.
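Not from the bug report, but a quick way to check which cgroup layout a host or CT is actually running (a pure cgroup v2 setup reports "cgroup2fs"):

Code:
stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" = unified v2, "tmpfs" = legacy/hybrid v1
mount | grep cgroup           # shows which cgroup controllers are mounted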

Edit (2) ---

Discussion on the subject at https://github.com/docker/cli/issues/2104 suggests installing cgroupfs-mount, then running cgroupfs-umount followed by cgroupfs-mount, but that just throws another bunch of errors. Probably related to it being a CT...

Code:
root@docker ~# cgroupfs-umount
rmdir: failed to remove 'init.scope': Device or resource busy
rmdir: failed to remove 'system.slice': Device or resource busy
root@docker ~# cgroupfs-mount
grep: /etc/fstab: No such file or directory
mount: /sys/fs/cgroup/cpuset: permission denied.
mount: /sys/fs/cgroup/cpu: permission denied.
mount: /sys/fs/cgroup/cpuacct: permission denied.
mount: /sys/fs/cgroup/blkio: permission denied.
mount: /sys/fs/cgroup/memory: permission denied.
mount: /sys/fs/cgroup/devices: permission denied.
mount: /sys/fs/cgroup/freezer: permission denied.
mount: /sys/fs/cgroup/net_cls: permission denied.
mount: /sys/fs/cgroup/perf_event: permission denied.
mount: /sys/fs/cgroup/net_prio: permission denied.
mount: /sys/fs/cgroup/hugetlb: permission denied.
mount: /sys/fs/cgroup/pids: permission denied.
mount: /sys/fs/cgroup/rdma: permission denied.

Again, way above my pay grade - I'm just dabbling in the dark. Hope someone else can suggest a fix, as all my Docker containers are dead in the water...

Edit (3) ---

Problem solved here - https://forum.proxmox.com/threads/d...an-bullseye-cgroup-problem.94596/#post-411329
 

Yeah, I got these problems as well :(
Same here. I have an "old" (pre-PVE 7) LXC Docker CT running, which is doing just fine. I wanted to create another CT for Docker, so I installed docker.io and got this error:

Bash:
Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: process_linux.go:458: setting cgroup config for procHooks process caused: can't load program: operation not permitted: unknown

In another thread I saw something about cgroupfs-mount; its output is:
Bash:
root@:~/# cgroupfs-mount
mount: /sys/fs/cgroup/cpuset: permission denied.
mount: /sys/fs/cgroup/cpu: permission denied.
mount: /sys/fs/cgroup/cpuacct: permission denied.
mount: /sys/fs/cgroup/blkio: permission denied.
mount: /sys/fs/cgroup/memory: permission denied.
mount: /sys/fs/cgroup/devices: permission denied.
mount: /sys/fs/cgroup/freezer: permission denied.
mount: /sys/fs/cgroup/net_cls: permission denied.
mount: /sys/fs/cgroup/perf_event: permission denied.
mount: /sys/fs/cgroup/net_prio: permission denied.
mount: /sys/fs/cgroup/hugetlb: permission denied.
mount: /sys/fs/cgroup/pids: permission denied.
mount: /sys/fs/cgroup/rdma: permission denied.

cgroupfs-mount doesn't even exist in the old container, so I guess it didn't get "migrated" or something to cgroup v2? I also don't want to switch to the old solution (because of the aforementioned problem of upcoming releases not supporting it at all), nor do I want a VM - an LXC is doing perfectly fine (until now) with keyctl+nesting, performance is superb, and a VM would have virtualization overhead I don't want.
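(For context, the "old solution" mentioned above is, as far as I understand it, booting the PVE 7 host back into the legacy cgroup v1 hierarchy, roughly like below for a GRUB-booted host; that is exactly what future releases are expected to drop:)

Code:
# on the Proxmox host, in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
update-grub
reboot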

But I don't know with whom I should file a bug report - I mean, I guess it's a bug and not intended, eh? The LXC team, Docker, Proxmox itself? I'm not entirely sure what exactly the issue is here, or where.


Edit: Okay, it seems I figured it out. Short story: the bug was on Docker's side and they seem to have fixed it somewhere above 20.10.6. Confirmed working in 20.10.7 and 20.10.11. Don't use docker.io on Debian for now, which is still on 20.10.5 - install according to this page using Docker's repository, or use Ubuntu >20 and its docker.io, which has 20.10.7. It doesn't even require keyctl as before; an unprivileged LXC with nesting enabled is enough.
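In case it helps someone, a sketch of the install from Docker's repository inside the CT (assuming Debian bullseye on amd64; the linked page is the authoritative reference):

Bash:
# inside the CT (Debian bullseye, amd64 assumed)
apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io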
 
