Docker in LXC problem after PVE kernel update.

I am getting this error when I try to start my mailserver Docker container.

Code:
Starting mail ... error

ERROR: for mail  Cannot start service mail: b'OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \\"write sysctl key kernel.domainname: open /proc/sys/kernel/domainname: permission denied\\"": unknown'

ERROR: for mail  Cannot start service mail: b'OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \\"write sysctl key kernel.domainname: open /proc/sys/kernel/domainname: permission denied\\"": unknown'
ERROR: Encountered errors while bringing up the project.
I guess this is related to this topic.

pve-manager/6.2-4/9824574a (running kernel: 5.4.34-1-pve)

Any idea how to fix this?
 
I can confirm: setting these features (keyctl and nesting) does fix it.
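For anyone searching: a minimal sketch of setting those features from the PVE host shell (the CTID 101 is just a placeholder, adjust to your setup):

Bash:
# On the PVE host; 101 is a placeholder CTID
pct set 101 --features keyctl=1,nesting=1
# Restart the container so the features take effect
pct stop 101 && pct start 101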

Bash:
~# pveversion
pve-manager/6.1-8/806edfe1 (running kernel: 5.3.18-3-pve)


Bash:
$ docker --version
Docker version 19.03.8, build afacb8b7f0
OMG, thank you!! I checked keyctl and nesting and it worked. I had this error after installing Portainer: "docker: Error response from daemon: OCI runtime create failed...etc." and this fixed it!
 
As of:
- pve-manager/6.4-8
- kernel: 5.4.114-1-pve
- Docker version 20.10.7

I still get this error, even with nesting and keyctl enabled.

In my case, the fix was simply removing/commenting the hostname entry in the docker-compose.yml (presumably because Docker then no longer tries to write the kernel.domainname sysctl shown in the error above), here for GitLab:
Code:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  # hostname: 'gl.local.mytld.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gl.local.mytld.com'
      ...
      registry_external_url 'https://registry.local.mytld.com'
      ...
  ports:
  ...

I have collected best-practice advice for running Docker inside an unprivileged LXC container in a blog post. So far it has worked flawlessly; this was the first issue I saw (and it could be fixed).
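As a reference: the equivalent of the GUI toggles, if you edit the container config directly on the host (a sketch; 101 is again a placeholder CTID):

Code:
# /etc/pve/lxc/101.conf (101 is a placeholder CTID)
unprivileged: 1
features: keyctl=1,nesting=1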
 
Hi Helmut101, all,

It seems you run Docker in LXC without critical issues.

Could you do me a favor? Do you get CPU and memory values in docker stats? For a while now I have only been seeing zeros. It was working earlier. I guess it's related to cgroups, but I haven't found out how to solve the issue.

Code:
$ docker stats
CONTAINER ID   NAME        CPU %     MEM USAGE / LIMIT   MEM %     NET I/O           BLOCK I/O   PIDS
149ca23f9147   fhem        0.00%     0B / 0B             0.00%     482MB / 205MB     0B / 0B     0
64bf92bf228d   renderer    0.00%     0B / 0B             0.00%     6.26MB / 145kB    0B / 0B     0
d9fd0b7d1912   mosquitto   0.00%     0B / 0B             0.00%     21kB / 0B         0B / 0B     0
938bcdd26eeb   grafana     0.00%     0B / 0B             0.00%     36.3MB / 26.4MB   0B / 0B     0

I have tested and played with cgroups; I assume that's the issue. Please let me know your output.

I already tested these cgroup kernel parameters, but neither of them made CPU/memory show up:
Code:
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

Code:
$ docker info | grep -i warn
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support

In case anyone has an idea, let me know.

Thank you!
 
Yes, I can run docker stats with the latest versions (Proxmox, Docker):
Code:
CONTAINER ID   NAME                       CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
60c58da63f55   docker_invidious_1         0.02%     343.9MiB / 1GiB       33.58%    340MB / 12MB      238kB / 0B        8
17dccf2eb8ac   docker_postgres_1          0.05%     39.14MiB / 5.953GiB   0.64%     14.6MB / 12MB     721kB / 241MB     9
9a586b4bc516   docker_miniflux_1          0.00%     19.86MiB / 5.953GiB   0.33%     27MB / 35.7MB     11.8MB / 0B       11
3b21fd4d3547   docker_db_1                0.00%     79.34MiB / 5.953GiB   1.30%     34.3MB / 19.8MB   1.94MB / 973MB    8
fd07c263d82a   dunkel-mopidy              0.00%     114.9MiB / 5.953GiB   1.88%     41.3MB / 3.3MB    46MB / 524kB      55
60cd3f74d0ea   iris_snapserver_1          0.67%     5.402MiB / 5.953GiB   0.09%     53.2MB / 42.7MB   7.59MB / 65.5kB   5
faafdeaf05f5   Solaranzeige               0.29%     183.6MiB / 5.953GiB   3.01%     39.2MB / 52.9MB   192MB / 1.95GB    45
1d01dda1fde1   funkwhale_nginx_1          0.00%     65.31MiB / 5.953GiB   1.07%     5.65MB / 593MB    372MB / 8.19kB    3
1bf30bf2fd7c   funkwhale_celeryworker_1   0.04%     355.5MiB / 5.953GiB   5.83%     316MB / 336MB     21.3MB / 0B       5
695ec9c9c8aa   funkwhale_api_1            0.13%     167.9MiB / 5.953GiB   2.75%     40MB / 69MB       29.6MB / 0B       8
d07a43501e3a   funkwhale_celerybeat_1     0.00%     126.8MiB / 5.953GiB   2.08%     552kB / 1.19MB    18.9MB / 14.2MB   1
d76c4c2fab08   funkwhale_postgres_1       0.00%     84.58MiB / 5.953GiB   1.39%     10.1MB / 16.5MB   64.2MB / 642MB    12
82b2cc6dea51   funkwhale_redis_1          0.13%     6.203MiB / 5.953GiB   0.10%     392MB / 339MB     6.38MB / 2.21MB   4

No output from docker info | grep -i warn

I have not made any modifications to cgroups.
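If it helps for comparison, you can check which cgroup controllers are delegated into the container like this (a sketch; the path assumes a cgroup v2 host, which is the default on PVE 7 / Bullseye):

Bash:
# Inside the LXC container: lists the controllers available to Docker (cgroup v2)
cat /sys/fs/cgroup/cgroup.controllers
# docker stats needs at least the cpu and memory controllers listed here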
 
@Helmut101, thank you. I tested based on your input on a Debian Buster container, and it's working. I run my host on Bullseye. I now have one container with working stats and one without. A starting point, at least.
 