LXC with Docker has issues on Proxmox 7 (aufs failed: driver not supported)

ilia987

Member
Sep 9, 2019
After a long upgrade of Proxmox and Ceph, Docker no longer starts inside my LXC container.

This is the output of dockerd -D:

Code:
DEBU[2021-10-12T12:59:20.229834269Z] [graphdriver] priority list: [btrfs zfs overlay2 aufs overlay devicemapper vfs]
ERRO[2021-10-12T12:59:20.230967397Z] AUFS was not found in /proc/filesystems       storage-driver=aufs
ERRO[2021-10-12T12:59:20.230986711Z] [graphdriver] prior storage driver aufs failed: driver not supported
DEBU[2021-10-12T12:59:20.231296580Z] Cleaning up old mountid : start.

Any idea what I can do?
 

Stefan_R

Proxmox Staff Member
Staff member
Jun 4, 2019
Vienna
Well, is 'aufs' loaded on the host kernel (lsmod)? Have you tried with overlay2?

Also, please post your container config (pct config <vmid>).
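A quick way to answer both questions from inside a shell on the PVE host (standard Linux tools only, nothing Proxmox-specific assumed):

```shell
# On the Proxmox host: is the aufs module loaded?
lsmod | grep -w aufs || echo "aufs is not loaded"

# Which union filesystems does the running kernel actually support?
# (overlay listed here means Docker's overlay2 driver can work)
grep -E 'aufs|overlay' /proc/filesystems || true
```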
 

ilia987

Member
Sep 9, 2019
Well, is 'aufs' loaded on the host kernel (lsmod)? Have you tried with overlay2?
No


Also, please post your container config (pct config <vmid>).
The LXC container runs Ubuntu 18.04.
Code:
arch: amd64
cores: 16
cpulimit: 4
hostname: docker1
memory: 8192
net0: name=eth0,bridge=vmbr0,gw=xxxxxxxxx.254,hwaddr=xxxxxx,ip=xxxxxxxx/22,type=veth
onboot: 1
ostype: ubuntu
rootfs: ceph-lxc:vm-126-disk-0,size=50G
startup: order=4
swap: 0
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:

As a workaround, we created a new Ubuntu container and recreated the Docker containers from an automated script.
 

Stefan_R

Proxmox Staff Member
Staff member
Jun 4, 2019
1,300
275
88
Vienna
You probably needed the nesting feature, which is now the default for new containers.
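For an existing container, the feature can be enabled manually on the PVE host; a sketch (vmid 126 taken from the config above, and keyctl added because Docker commonly needs it too):

```shell
# Enable nesting (and keyctl) on the container,
# then restart it so the change takes effect
pct set 126 -features nesting=1,keyctl=1
pct stop 126 && pct start 126
```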
 

ilia987

Member
Sep 9, 2019
I added nesting=1 and rebooted, but it is still not working:

dockerd -D
Code:
INFO[2021-10-13T13:16:34.123990881Z] Starting up                                 
DEBU[2021-10-13T13:16:34.124551737Z] Listener created for HTTP on unix (/var/run/docker.sock)
DEBU[2021-10-13T13:16:34.125297020Z] Golang's threads limit set to 1855620       
INFO[2021-10-13T13:16:34.125875767Z] parsed scheme: "unix"                         module=grpc
INFO[2021-10-13T13:16:34.125913889Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-10-13T13:16:34.125941868Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}  module=grpc
INFO[2021-10-13T13:16:34.125953643Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-10-13T13:16:34.127279882Z] parsed scheme: "unix"                         module=grpc
INFO[2021-10-13T13:16:34.127298955Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-10-13T13:16:34.127316820Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}  module=grpc
INFO[2021-10-13T13:16:34.127325640Z] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2021-10-13T13:16:34.129028389Z] Using default logging driver json-file       
DEBU[2021-10-13T13:16:34.129157612Z] [graphdriver] priority list: [btrfs zfs overlay2 aufs overlay devicemapper vfs]
DEBU[2021-10-13T13:16:34.129704732Z] processing event stream                       module=libcontainerd namespace=plugins.moby
ERRO[2021-10-13T13:16:34.130595768Z] AUFS was not found in /proc/filesystems       storage-driver=aufs
ERRO[2021-10-13T13:16:34.130645878Z] [graphdriver] prior storage driver aufs failed: driver not supported
DEBU[2021-10-13T13:16:34.130980119Z] Cleaning up old mountid : start.             
failed to start daemon: error initializing graphdriver: driver not supported
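Since the error complains about the "prior storage driver aufs", one common workaround (not from this thread; it assumes overlay2 works in the container, and note that existing image data under /var/lib/docker must be migrated or discarded) is to pin the storage driver explicitly in /etc/docker/daemon.json:

```json
{
  "storage-driver": "overlay2"
}
```

After writing the file, restart the Docker daemon so it initializes overlay2 instead of trying to reuse aufs.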
 

olidal

New Member
Mar 26, 2021
I am stumbling on the same issue here:

It turns out that the Ubuntu kernel on which the PVE 7 kernel is based has stopped distributing the aufs-dkms package, and Debian recommends using the upstream-supported overlayfs instead:

https://www.debian.org/releases/bul...ormation.en.html#noteworthy-obsolete-packages

This is bad because the two are NOT equivalent: in particular, overlayfs does not work on top of ZFS, even though AUFS is still actively maintained.

I am currently trying to recompile a 5.11 kernel with the aufs-dkms module, and I wouldn't mind a bit of help with that.

Is the Proxmox kernel using the stock Ubuntu kernel (Impish)?
If not, where can I find the sources (and patches) of the latest PVE 7 kernel?

thanks for any help.
 

olidal

New Member
Mar 26, 2021
I have made progress on this issue.

It seems that the (great!) Proxmox folks simply did not select the aufs module when they recompiled the Ubuntu kernel their way.

However, my previous statement was partly wrong: even though Debian has announced that it will stop distributing AUFS, the module is still included in the Ubuntu kernel on which the Proxmox kernel is built, and it can apparently be selected as a module with no issue.

To test this, I spent the whole weekend compiling kernels and finally managed to rebuild the Ubuntu-5.11.0-41.45 kernel that is used as the base for the pve-kernel-5.11.22-7 package, using the same config (as found in /boot) but with the addition of the aufs module AND ZFS. The latter is a bit tricky; the only solution I found was to compile ZFS as a separate project after recompiling the Ubuntu kernel, then inject and repackage the ZFS module(s) into the kernel .deb package.

After installing the recompiled kernel on my PVE 7 instance, I still had to create an initrd with update-initramfs and then install the new kernel+initrd using proxmox-boot-tool.

I successfully rebooted my ZFS-rooted PVE 7 on this new kernel and was able to load the AUFS module. I didn't test for long, but it seems to be working perfectly.
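For reference, those install steps might look roughly like this (the .deb file name is hypothetical and the kernel version string follows PVE naming conventions; treat this as a sketch, not a tested recipe):

```shell
# Install the rebuilt kernel package (hypothetical file name)
dpkg -i pve-kernel-5.11.22-7-pve-aufs_amd64.deb

# Generate an initrd for the new kernel version
update-initramfs -u -k 5.11.22-7-pve

# Let proxmox-boot-tool pick up the new kernel + initrd on the ESPs
proxmox-boot-tool refresh
```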

Even though I believe I could automate the process, e.g. using Ansible, this seems awfully complicated.

Hence my question: is there a particular reason why the AUFS module was not selected in the latest kernel package, or is this just an unfortunate omission?

And in the latter case, would you mind re-including this module? (Please :) )

Thanks in advance!
Olivier
 

kamzata

Active Member
Jan 21, 2011
Venezia - Italy
Any news on this? I'm trying to use Docker in an LXC container (Ubuntu 20.04 LTS) on Proxmox 7.0-11 with a ZFS filesystem. It seems to work using the "nesting" and "keyctl" options, but the disk usage grows rapidly and becomes unmanageable. Basically, it's not usable.
 

elBradford

Member
Sep 9, 2016
bradford.la
This is great. I hope a staff member responds, especially since you did all of the troubleshooting for them...
 

Neuer_User

Member
Jan 5, 2016
I stumbled over the same problem when upgrading my Proxmox 6 installation.
I will now try to recompile the kernel to see if I can get this working again.
 

styx-tdo

Member
Mar 28, 2010
Upvote from me. My Vaultwarden did not like 7.1. At all... :/

Please add this back to the official kernel.
 

Neuer_User

Member
Jan 5, 2016
Are you sure about that? The Proxmox kernel builds on top of ubuntu-impish, and looking at the kernel tree of the ubuntu-impish 5.13 kernel, I do not see any aufs sources in the tree:

https://kernel.ubuntu.com/git/ubuntu/ubuntu-impish.git/tree/fs

To me that looks as if the omission is on Ubuntu's side. I do not see a way to "simply reselect the module and have it built". Maybe we need to build the module out of tree, using the module sources and the kernel headers?

P.S.: I switched to the 5.11 kernel branch and there, indeed, the aufs sources are in the tree. So your analysis fits the 5.11 kernel, but unfortunately not 5.13.
 

Neuer_User

Member
Jan 5, 2016
As aufs cannot easily be compiled as a module (it needs several patches to the whole kernel), I gave up on compiling my own kernel module.
I found a workaround that would work but which I don't really like (using ext4 for the Docker volumes). Therefore, I will downgrade back to Proxmox 6.4 and hope there will be a better solution next year, before the EOL of Proxmox 6.
 

Neuer_User

Member
Jan 5, 2016
P.S.: Did anyone test running a 5.4 kernel (from the Proxmox 6 series) on Proxmox 7.1? Maybe that could also be a viable way until there is a final solution?
 

olidal

New Member
Mar 26, 2021
Hi all!

After lots of trial and error, I finally managed to set up a toolchain to recompile our own recent kernels with AUFS and ZFS and install them on PVE 6.x. (And now that I have it working, I guess I can safely upgrade our machines to PVE 7.x without risking service disruption.)

First, what did not work: I tried to compile the same 5.11 kernels as PVE 7.x for PVE 6.4 (a backport). I found the corresponding sources on the Ubuntu PPA: https://launchpad.net/ubuntu/+source/linux/
Unfortunately, the 5.11 kernel that was already backported to PVE 6.4 is now EOL, so it is probably not a good idea to waste time on this version.
With newer kernels, a more general issue is that recompiling Ubuntu kernels for Debian is tricky, because the two distributions have diverged since Ubuntu adopted the zstd compression format for its .deb packages, which Debian does not yet support. Compiling ZFS using the Ubuntu toolchain thus ends with a failed attempt to install an Ubuntu package. We could try to repackage with a supported compression, or maybe cross-compile from an Ubuntu platform, but I decided this was probably a dead end.
(I am curious to know how the Proxmox team is going to deal with that issue, though.)

What ended up working is the excellent pve-edge-kernel project by Fabian Mastenbroek.
After I succeeded in applying the recipe with 5.11, I went straight to the 5.15 kernel, which is the new LTS, so I can easily recompile as new patches and updates come out for quite some time.
The toolchain provided by Fabian already includes ZFS, so it is just a question of adding AUFS using the original project by Junjiro R. Okajima (see here for 5.15+ kernels: https://github.com/sfjro/aufs5-standalone/tree/aufs5.15.5).
I just added the AUFS patches to the debian/patches list, reviewed/adjusted a few parameters, and followed the instructions from the AUFS and pve-edge-kernel projects, and it finally compiled... after a few trials and errors.

One more thing to understand is the crack.bundle file (a git bundle) found at the root of the pve-edge-kernel project. It is tied to the particular upstream kernel version being built, so for a newer/latest version it has to be replaced. The corresponding crack.bundle for a given kernel version can be found on the Ubuntu mainline PPA, e.g. here for the v5.15.24 kernel: https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.15.24/
The linux git submodule in the pve-edge-kernel project also has to be updated/replaced to match that version.
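A rough outline of that recipe as commands (the repository URLs and the aufs patch file names are real; the exact build invocation varies between pve-edge-kernel releases, so the one below is an assumption; see the project's README):

```shell
# Clone the pve-edge-kernel build tooling and its submodules
git clone https://github.com/fabianishere/pve-edge-kernel.git
cd pve-edge-kernel
git submodule update --init --recursive

# Add the AUFS patches from sfjro/aufs5-standalone to the patch queue
cp ../aufs5-standalone/aufs5-kbuild.patch debian/patches/
echo "aufs5-kbuild.patch" >> debian/patches/series

# Build the .deb packages (assumed invocation; check the README)
debuild -b -uc -us
```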

Hope this helps!
Olivier
 

Iznogooood

New Member
Mar 8, 2022
Olivier,
What you describe is not meant for the standard Proxmox enthusiast.
Is there a way to get your modified kernel, with an explanation of how to install it?

That would hugely help.
Jean-Marc
 

fiveangle

New Member
Dec 23, 2020
San Francisco Bay Area
From reviewing the Docker ZFS storage driver documentation (https://docs.docker.com/storage/storagedriver/zfs-driver/), it seemed clear from the error that Docker was not using the ZFS driver for /var/lib/docker when nested in a ZFS-backed LXC, as it should. As a workaround, we force Docker to divorce the /var/lib/docker tree from the default union layer and have it handled by the ZFS driver directly:

  1. set the docker services to disabled in the LXC container (e.g. systemctl stop docker; systemctl disable docker)
  2. rename your existing /var/lib/docker dir to something else like docker_old
  3. stop the LXC container
  4. in Proxmox gui create a new storage mount point at /var/lib/docker for the container from your ZFS thin zpool
  5. start the container and move all contents within your renamed /var/lib/docker_old dir to the new mount point at /var/lib/docker
  6. re-enable and restart docker services (e.g. systemctl enable docker; systemctl start docker)
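The steps above, sketched as commands (<vmid>, the storage name, and the mount-point size are placeholders; step 4 can be done via `pct set` instead of the GUI):

```shell
# Inside the container: stop and disable Docker, move the old data aside
systemctl stop docker && systemctl disable docker
mv /var/lib/docker /var/lib/docker_old

# On the PVE host: stop the container and add a ZFS-backed mount point
pct stop <vmid>
pct set <vmid> -mp0 <zfs-storage>:50,mp=/var/lib/docker
pct start <vmid>

# Inside the container again: move the data back and re-enable Docker
mv /var/lib/docker_old/* /var/lib/docker/
systemctl enable docker && systemctl start docker
```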

Enjoy,

-=dave
 
