LXC ZFS + docker overlay2 driver

Kiwwiaq

Hi,

I have been updating my LXC template and found some good news.

It looks like the overlay2 storage driver for Docker running in LXC on ZFS now works, and fuse-overlayfs is not needed anymore. The container is unprivileged, with fuse=1 and nesting=1 set to support the fuse-overlayfs driver. I have removed the fuse=1 container option, and Docker with a test container seems to run just fine.
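
The relevant CT options now look like this (a minimal sketch of /etc/pve/lxc/<vmid>.conf reconstructed from my setup; keyctl=1 is commonly added as well for Docker in unprivileged CTs):

Code:
# /etc/pve/lxc/101.conf (excerpt) - unprivileged CT, fuse=1 removed
arch: amd64
features: nesting=1
ostype: ubuntu
rootfs: local-zfs:subvol-101-disk-0,size=10G
unprivileged: 1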

Code:
root@template:~# lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:    22.04
Codename:    jammy
root@template:~# uname -a
Linux template 6.1.6-1-pve #1 SMP PREEMPT_DYNAMIC PVE 6.1.6-1 (2023-01-28T00:00Z) x86_64 x86_64 x86_64 GNU/Linux
root@template:~# df -h
Filesystem                    Size  Used Avail Use% Mounted on
rpool/data/subvol-101-disk-0   10G  998M  9.1G  10% /
none                          492K  4.0K  488K   1% /dev
tmpfs                          32G  8.0K   32G   1% /dev/shm
tmpfs                          13G  232K   13G   1% /run
tmpfs                         5.0M     0  5.0M   0% /run/lock
tmpfs                         6.3G     0  6.3G   0% /run/user/1001
overlay                        10G  998M  9.1G  10% /var/lib/docker/overlay2/eb6067d016023a70ccbc757083bb0d8464958d03cf9910b2556a65f88b3d156a/merged
root@template:~# docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                     NAMES
b7e9a0be6f49   nginxdemos/hello   "/docker-entrypoint.…"   12 minutes ago   Up 12 minutes   0.0.0.0:32768->80/tcp, :::32768->80/tcp   zen_ganguly
root@template:~# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.16.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
  scan: Docker Scan (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-scan

Server:
 Containers: 3
  Running: 1
  Paused: 0
  Stopped: 2
 Images: 2
 Server Version: 23.0.1
 Storage Driver: overlay2
  Backing Filesystem: zfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: false
  userxattr: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 31aa4358a36870b21a992d3ad2bef29e1d693bec
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.6-1-pve
 Operating System: Ubuntu 22.04.1 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 512MiB
 Name: template
 ID: 0263e5c6-91c8-4e94-92dd-a62c1f3e8e2d
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http://proxy:8080/
 HTTPS Proxy: http://proxy:8080/
 Registry: https://index.docker.io/v1/
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Code:
root@czbrqnode01:~# pveversion --verbose
proxmox-ve: 7.3-1 (running kernel: 6.1.6-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-6.1: 7.3-3
pve-kernel-helper: 7.3-3
pve-kernel-5.15: 7.3-2
pve-kernel-5.19: 7.2-15
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-5
pve-kernel-6.1.6-1-pve: 6.1.6-1
pve-kernel-6.1.0-1-pve: 6.1.0-1
pve-kernel-5.19.17-2-pve: 5.19.17-2
pve-kernel-5.19.17-1-pve: 5.19.17-1
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-network-perl: 0.7.2
libpve-storage-perl: 7.3-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-1
lxcfs: 5.0.3-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u2.1
proxmox-backup-client: 2.3.2-1
proxmox-backup-file-restore: 2.3.2-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-2
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

What a nice Sunday!
Ivan
 
I do not see an issue...

I have just spawned a random CT (full clone) from my default Ubuntu 22.04 template, upgraded all packages to the latest, and just called Docker to do the job. Works like a charm. The template only has Salt packages installed, which install Docker from the official repos and handle user creation, sudo, etc...
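
For completeness, installing Docker from the official repo inside the CT follows the standard upstream procedure (a sketch for Ubuntu 22.04; in my template Salt drives these steps):

Code:
# add Docker's official apt repository and install the engine
apt-get update && apt-get install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" > /etc/apt/sources.list.d/docker.list
apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin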

Code:
root@plex-test:~# date
Sun Feb 19 01:54:44 PM CET 2023
root@plex-test:~# docker run -d --name plex-test -v /plex/database:/config -v /plex/transcode:/transcode -v /plex/media:/data linuxserver/plex:latest
Unable to find image 'linuxserver/plex:latest' locally
latest: Pulling from linuxserver/plex
d2f83cd07e8a: Pull complete
665a26860e09: Pull complete
a51681ef853e: Pull complete
94601407af37: Pull complete
73482616b689: Pull complete
6a0903ba30b4: Pull complete
2c6b3a15ced0: Pull complete
Digest: sha256:b7046554d3af664280c6fb2ae76341d8777e1b32a74d6441ec13e67f67773e79
Status: Downloaded newer image for linuxserver/plex:latest
9c53e0795586f61d3f7a689ee54326e843bd971a8be317aa99ee60ec73567246
root@plex-test:~# docker ps
CONTAINER ID   IMAGE                     COMMAND   CREATED          STATUS         PORTS                                                                                      NAMES
9c53e0795586   linuxserver/plex:latest   "/init"   14 seconds ago   Up 3 seconds   1900/udp, 3005/tcp, 8324/tcp, 5353/udp, 32410/udp, 32400/tcp, 32412-32414/udp, 32469/tcp   plex-test

I still have the overlay module loaded on the VM host, but I haven't had a chance to fiddle with it more right now.

Code:
root@node01:~# cat /etc/modules-load.d/modules.conf
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
overlay
root@node01:~# lsmod | grep -i overlay
overlay               155648  2

Other than that, it is just PVE 7.3-6 no-subscription with the latest PVE 6.1 kernel, on ZFS root and data disks.
 
Well, I have found some software that is not working for me as well. It is strange that I could run the Plex image @ericfrol is having issues with, but not the Nextcloud image.

Code:
failed to register layer: ApplyLayer exit status 1 stdout:  stderr: unlinkat /usr/src/nextcloud/apps/updatenotification/templates: invalid argument

As soon as I switched the Docker storage driver back to fuse-overlayfs, the Nextcloud image started to work. Switching the CT to privileged triggers AppArmor permission issues for me that are not convenient to fix in the CT config.
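
For reference, the switch back is just the storage-driver key in Docker's daemon config (a minimal sketch; restart the docker service after editing):

Code:
# /etc/docker/daemon.json
{
    "storage-driver": "fuse-overlayfs"
}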
 
Yes, I noticed this as well. It seems to be a new feature of Docker v23. Very convenient.

I had Docker v23 with a 5.x kernel and it never worked; it always reverted to vfs, so I used fuse-overlayfs.

I upgraded the PVE nodes' kernel to 6.1.10 (it's opt-in for now), removed fuse-overlayfs, and after rebooting the nodes, the Docker LXC container on ZFS finally showed proper support for overlay2 on zfs, without any specific configuration. Seems to be working fine for now.
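
Opting into the 6.1 kernel on PVE 7.x is a single package install on the host (a sketch; the pve-kernel-6.1 package also appears in the pveversion output earlier in the thread):

Code:
apt update
apt install pve-kernel-6.1
reboot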

Code:
Kernel Version: 6.1.10-1-pve
 Operating System: Debian GNU/Linux 11 (bullseye)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 8GiB

Server:
 Containers: 29
  Running: 29
  Paused: 0
  Stopped: 0
 Images: 29
 Server Version: 23.0.1
 Storage Driver: overlay2
  Backing Filesystem: zfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 
I upgraded the PVE nodes' kernel to 6.1.10 (it's opt-in for now), removed fuse-overlayfs, and after rebooting the nodes, the Docker LXC container on ZFS finally showed proper support for overlay2 on zfs, without any specific configuration. Seems to be working fine for now.

How do you remove fuse-overlayfs in your LXC container? Do you have to update your LXC container to a new version?
 
I have ZFS on my storage pool for LXC containers, and I have a few LXC containers for Docker with different storage drivers:

I guess I want overlay2 for all of them?

For example, LXC 126 has these options:
Code:
features: nesting=1,keyctl=1
unprivileged: 1
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

and gives me Storage Driver: overlay2

For container 111, I have
Code:
features: nesting=1,keyctl=1
unprivileged: 1
and
Code:
cat /etc/docker/daemon.json
{
      "storage-driver": "vfs"
}

Should I specify "storage-driver": "overlay2" here?

Should I add:
Code:
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

For 112 I have
Code:
features: fuse=1,keyctl=1,nesting=1
unprivileged: 1
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
Resulting in: Storage Driver: fuse-overlayfs

Should I remove the fuse=1?

Any help would be appreciated!
 
Code:
cat /etc/docker/daemon.json
{
      "storage-driver": "vfs"
}

Should I specify "storage-driver": "overlay2" here?

Don't specify the driver (remove the daemon.json setting) and remove fuse=1. If you use the latest kernels, Docker should finally choose the overlay2 driver instead of the obsolete and inefficient vfs driver.

I use privileged containers; you seem to use both privileged and unprivileged, so you will need to experiment with that.
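
Concretely, something like this should cover both changes (a sketch, reusing CT 111 and 112 from the question above; daemon.json is edited inside the CT, pct runs on the PVE host):

Code:
# inside CT 111: drop the forced vfs storage driver, then restart Docker
rm /etc/docker/daemon.json
systemctl restart docker

# on the PVE host: drop fuse=1 from CT 112's features
pct set 112 --features keyctl=1,nesting=1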
 
Don't specify the driver (remove the daemon.json setting) and remove fuse=1. If you use the latest kernels, Docker should finally choose the overlay2 driver instead of the obsolete and inefficient vfs driver.

I use privileged containers; you seem to use both privileged and unprivileged, so you will need to experiment with that.
For the LXC that had fuse, I cannot get it to work with overlay2. I now have 2 LXCs on overlay2.

The fuse one now will not start, and I don't know what to do next.

This is the error:

Code:
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.024544396Z" level=info msg="Starting up"
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.025545082Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.025557999Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.025588901Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.025601648Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.026776428Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.026791151Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.026809404Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.026826688Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 14 06:18:12 whisper dockerd[508]: time="2023-03-14T06:18:12.027387273Z" level=error msg="Failed to GetDriver graph" driver=overlayf2 error="graphdriver plugins are only supported with experimental mode" home-dir=/var/lib/docker
Mar 14 06:18:12 whisper dockerd[508]: failed to start daemon: error initializing graphdriver: driver not supported
Mar 14 06:18:12 whisper systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
 
Code:
Mar 14 06:18:12 whisper dockerd[508]: failed to start daemon: error initializing graphdriver: driver not supported

Looks like you have a driver configured; remove it. Clean up all the customizations you've done; the Docker installation has to be as clean as possible.

If you can't do that, reinstall Docker.
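
A clean reinstall would look roughly like this (a sketch; note that it wipes all images, containers, and volumes in the CT):

Code:
apt-get purge -y docker-ce docker-ce-cli containerd.io
rm -rf /var/lib/docker /etc/docker
apt-get install -y docker-ce docker-ce-cli containerd.io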
 
Looks like you have a driver configured; remove it. Clean up all the customizations you've done; the Docker installation has to be as clean as possible.

If you can't do that, reinstall Docker.
Thanks, I removed the /var/lib/docker folder, and now Docker starts and reports overlay2.

However, this is causing havoc, as it makes the container file system read-only, which is a problem for running my docker compose build.

This is apparently related to overlay2 and I get this error in dmesg:
Code:
[151200.501665] overlayfs: upper fs does not support RENAME_WHITEOUT.
[151200.501693] overlayfs: fs on '/var/lib/docker/overlay2/l/2KGUFAP2LL53WNVI7FYOQ6P5JR' does not support file handles, falling back to xino=off.

Any ideas on this?
 
I removed the /var/lib/docker folder and now docker starts

You removed the whole folder with its subfolders?? I think you screwed up the Docker installation. :)

You only had to remove the driver from the JSON file, not the entire folder. I don't know how Docker is starting without that folder.
 
Docker happily recreates the directory and then runs containers fresh. It was a recommended action on the Docker docs site, I think. Otherwise the old references to fuse-overlayfs stick around.

I ended up creating a ZFS volume on the Proxmox host, formatting it ext4, and mounting it at /var/lib/docker in the LXC.

Now it works.
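
The steps were roughly these (a sketch; the pool/dataset names, size, mount path, and CT ID are placeholders for my setup):

Code:
# on the PVE host: create a zvol, format it ext4, and mount it
zfs create -V 64G rpool/data/docker-vol
mkfs.ext4 /dev/zvol/rpool/data/docker-vol
mkdir -p /mnt/docker-vol
mount /dev/zvol/rpool/data/docker-vol /mnt/docker-vol

# bind-mount it into the CT as /var/lib/docker
pct set 112 --mp0 /mnt/docker-vol,mp=/var/lib/docker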

Code:
Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: false
  userxattr: true
 
I ended up creating a ZFS volume on the Proxmox host, formatting it ext4, and mounting it at /var/lib/docker in the LXC.

That's one of the old workarounds; it's not what I wanted to achieve. You need to check why it worked with full ZFS on the other two Docker CTs and not on this one.
 
I tested my other LXC and it is the same. Maybe you could test it for me? In my case, Docker on overlay2 fails when building a container as soon as it tries to modify write-protected files, even though it is running as root.

It works fine when pulling a pre-made image.

Could you test building this project? https://codeberg.org/pluja/web-whisper

I have been building with the command docker compose up --build -d, and it fails when trying to remove wget or mv the whisper.cpp file.
 
So does anyone know what enabled this to start working? Something in the 6.x kernel series? LXC itself?
 
I just wasted a good hour with this. I had an LXC container with Docker, configured with the usual nesting and unprivileged options, running on a 'raw' disk. Since the underlying FS is ZFS, I thought I'd go ahead and move it (volume action, move storage), but when I booted the container back up, Docker refused to start. Some digging, and I ended up in this thread. I managed to get the container back on raw and working again, so it's no panic.

https://github.com/docker/for-linux/issues/1357 suggests that the fuse-overlayfs version needs to be >= 0.7, or maybe >= 1.0; the apt version is 0.3. They suggest compiling the latest version from https://github.com/containers/fuse-overlayfs, but I'd rather not go that route.
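
Checking what version you actually have is quick (a sketch; exact output varies by release):

Code:
apt policy fuse-overlayfs
fuse-overlayfs --version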

So what is the best practice method to install docker/zfs/lxc now?
 
