LXC with Docker has issues on Proxmox 7 (aufs failed: driver not supported)

ilia987

Member
Sep 9, 2019
After a long upgrade of Proxmox and Ceph, Docker no longer works inside my LXC container.

This is the output of dockerd -D:

Code:
DEBU[2021-10-12T12:59:20.229834269Z] [graphdriver] priority list: [btrfs zfs overlay2 aufs overlay devicemapper vfs]
ERRO[2021-10-12T12:59:20.230967397Z] AUFS was not found in /proc/filesystems       storage-driver=aufs
ERRO[2021-10-12T12:59:20.230986711Z] [graphdriver] prior storage driver aufs failed: driver not supported
DEBU[2021-10-12T12:59:20.231296580Z] Cleaning up old mountid : start.

Any idea what I can do?
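(For what it's worth, the "prior storage driver aufs" part of the message seems to come from leftover aufs data under /var/lib/docker from before the upgrade; a quick way to see what is there, and whether overlayfs is at least available:)

Code:
# inside the container
ls /var/lib/docker              # an 'aufs' directory here makes dockerd try aufs first
grep overlay /proc/filesystems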
 

Stefan_R

Proxmox Staff Member
Staff member
Jun 4, 2019
Well, is 'aufs' loaded on the host kernel (lsmod)? Have you tried with overlay2?

Also, please post your container config (pct config <vmid>).
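A minimal sketch of both checks, plus forcing overlay2 explicitly (assuming Docker's standard /etc/docker/daemon.json config location):

Code:
# on the Proxmox host
lsmod | grep aufs
pct config <vmid>

# inside the container: tell Docker to use overlay2 instead of aufs
cat > /etc/docker/daemon.json <<'EOF'
{ "storage-driver": "overlay2" }
EOF
systemctl restart docker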
 

ilia987

Member
Sep 9, 2019
Well, is 'aufs' loaded on the host kernel (lsmod)? Have you tried with overlay2?
No


Also, please post your container config (pct config <vmid>).
The LXC container runs Ubuntu 18.04.
Code:
arch: amd64
cores: 16
cpulimit: 4
hostname: docker1
memory: 8192
net0: name=eth0,bridge=vmbr0,gw=xxxxxxxxx.254,hwaddr=xxxxxx,ip=xxxxxxxx/22,type=veth
onboot: 1
ostype: ubuntu
rootfs: ceph-lxc:vm-126-disk-0,size=50G
startup: order=4
swap: 0
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:

As a workaround, we created a new Ubuntu container and recreated the Docker containers from an automated script.
 

Stefan_R

Proxmox Staff Member
Staff member
Jun 4, 2019
You probably needed the nesting feature, which is now the default for new containers.
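For an existing container, it can be enabled with something like:

Code:
pct set <vmid> -features nesting=1
# or, if needed for some Docker setups:
pct set <vmid> -features nesting=1,keyctl=1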
 

ilia987

Member
Sep 9, 2019
I added nesting=1 and rebooted, but it is still not working:

dockerd -D
Code:
INFO[2021-10-13T13:16:34.123990881Z] Starting up                                 
DEBU[2021-10-13T13:16:34.124551737Z] Listener created for HTTP on unix (/var/run/docker.sock)
DEBU[2021-10-13T13:16:34.125297020Z] Golang's threads limit set to 1855620       
INFO[2021-10-13T13:16:34.125875767Z] parsed scheme: "unix"                         module=grpc
INFO[2021-10-13T13:16:34.125913889Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-10-13T13:16:34.125941868Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}  module=grpc
INFO[2021-10-13T13:16:34.125953643Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-10-13T13:16:34.127279882Z] parsed scheme: "unix"                         module=grpc
INFO[2021-10-13T13:16:34.127298955Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-10-13T13:16:34.127316820Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}  module=grpc
INFO[2021-10-13T13:16:34.127325640Z] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2021-10-13T13:16:34.129028389Z] Using default logging driver json-file       
DEBU[2021-10-13T13:16:34.129157612Z] [graphdriver] priority list: [btrfs zfs overlay2 aufs overlay devicemapper vfs]
DEBU[2021-10-13T13:16:34.129704732Z] processing event stream                       module=libcontainerd namespace=plugins.moby
ERRO[2021-10-13T13:16:34.130595768Z] AUFS was not found in /proc/filesystems       storage-driver=aufs
ERRO[2021-10-13T13:16:34.130645878Z] [graphdriver] prior storage driver aufs failed: driver not supported
DEBU[2021-10-13T13:16:34.130980119Z] Cleaning up old mountid : start.             
failed to start daemon: error initializing graphdriver: driver not supported
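For completeness, this is roughly how one can check that the feature actually took effect and what the container kernel offers (the VMID and output are examples):

Code:
# on the Proxmox host
pct config 126 | grep features        # should show e.g. features: nesting=1
modprobe overlay                      # make sure the overlay module is loaded

# inside the container: overlay should now be listed, aufs will not be
grep -E 'aufs|overlay' /proc/filesystems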
 

olidal

New Member
Mar 26, 2021
I am stumbling over the same issue here:

It turns out that Debian, on which PVE 7 is based, has stopped distributing the aufs-dkms package and recommends using the upstream-supported overlayfs instead:

https://www.debian.org/releases/bul...ormation.en.html#noteworthy-obsolete-packages

This is bad because the two are NOT equivalent: in particular, overlayfs does not work on top of ZFS. And AUFS is still actively maintained.

I am currently trying to recompile a 5.11 kernel with the aufs-dkms module, and I wouldn't mind a bit of help with that.

Is the Proxmox kernel using the stock Ubuntu kernel (Impish)?
If not, where can I find the sources (and patches) of the latest PVE 7 kernel?

Thanks for any help.
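(For reference: as far as I can tell, the PVE kernel build repository, with the patches applied on top of the Ubuntu tree, lives on the Proxmox git server, e.g.:)

Code:
git clone git://git.proxmox.com/git/pve-kernel.git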
 

olidal

New Member
Mar 26, 2021
I have made progress on this issue.

It seems that the (great!) Proxmox folks simply did not select the aufs module when they recompiled the Ubuntu kernel their way.

However, my previous statement was partly wrong: even though Debian has announced that it will stop distributing AUFS, the module is still included in the Ubuntu kernel on which the Proxmox kernel is built, and it can apparently be selected as a module with no issue.

To test this, I spent the whole weekend compiling kernels and finally managed to recompile the Ubuntu-5.11.0-41.45 kernel that the pve-kernel-5.11.22-7 package is based on, using the same config (as found in /boot) but with the aufs module AND ZFS added. The latter is a bit tricky: the only solution I found was to compile ZFS as a separate project after recompiling the Ubuntu kernel, then inject and repackage the ZFS module(s) into the kernel .deb package.

After installing the recompiled kernel in my PVE7 instance, I still had to create an initrd using update-initramfs and then install the new kernel+initrd using the proxmox-boot-tool.

I successfully rebooted my ZFS-rooted PVE 7 on this new kernel and was able to load the AUFS module. I didn't test for long, but it seems to be working perfectly.
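Roughly, the build steps were as follows (a sketch from memory; version strings and paths are examples, and the ZFS modules were built separately as described above):

Code:
# inside the checked-out Ubuntu 5.11 kernel source tree
cp /boot/config-5.11.22-7-pve .config
scripts/config --module AUFS_FS        # i.e. CONFIG_AUFS_FS=m
make olddefconfig
make -j$(nproc) bindeb-pkg             # produces linux-image/linux-headers .deb packages

# on the PVE host, after installing the resulting linux-image .deb
update-initramfs -u -k <new kernel version>
proxmox-boot-tool refresh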

Even though I believe I could automate the process, e.g. using Ansible, this seems awfully complicated.

Hence my question: is there a particular reason why the AUFS module was not selected in the latest kernel package or is this just an unfortunate mistake?

And in the latter case, would you mind re-including this module? (Please :) )

Thanks in advance!
Olivier
 

kamzata

Active Member
Jan 21, 2011
Any news on this? I'm trying to use Docker in an LXC container (Ubuntu 20.04 LTS) on Proxmox 7.0-11 with a ZFS filesystem. It seems to work using the "nesting" and "keyctl" options, but the disk space grows exponentially and becomes unmanageable. Basically, it's not usable.
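If Docker has silently fallen back to the vfs storage driver (which stores every image layer as a full copy), that alone would explain the exploding disk usage; it is easy to check:

Code:
# inside the container
docker info --format '{{.Driver}}'    # 'vfs' means every layer is duplicated in full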
 

elBradford

Member
Sep 9, 2016
olidal said:
I have made progress on this issue. [...] Would you mind re-including this module? (Please :) )
This is great. Hope a staff member responds, especially since you did all of the troubleshooting for them...
 

Neuer_User

Member
Jan 5, 2016
I stumbled over the same problem when upgrading my Proxmox 6 installation.
I will now try to recompile the kernel to see if I can get this working again.
 

styx-tdo

Member
Mar 28, 2010
Upvote from me. My vaultwarden did not like 7.1. At all... :/

Please add this back in the official kernel
 

Neuer_User

Member
Jan 5, 2016
olidal said:
It seems that the (great!) Proxmox folks simply did not select the aufs module when they recompiled the Ubuntu kernel their way. [...]
Are you sure about that? The Proxmox kernel builds on top of ubuntu-impish, and looking at the kernel tree of the Ubuntu Impish 5.13 kernel, I do not see any aufs sources in the tree:

https://kernel.ubuntu.com/git/ubuntu/ubuntu-impish.git/tree/fs

To me that looks as if the fault lies with Ubuntu. I do not see a way to "simply reselect the module and have it built". Maybe we need to build the module out of tree, using the module source and the kernel headers?

P.S.: I switched to the 5.11 kernel branch and there, indeed, the aufs sources are in the tree. So your analysis fits the 5.11 kernel, but unfortunately not the 5.13.
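A quick way to see what a given PVE kernel was actually built with, without digging through the git trees (exact output differs per kernel series):

Code:
# on the PVE host
grep -i aufs /boot/config-$(uname -r)   # empty or '# CONFIG_AUFS_FS is not set' when aufs was not built
modprobe aufs                           # 'module aufs not found' when it was never built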
 

Neuer_User

Member
Jan 5, 2016
As aufs cannot easily be compiled as a module (it needs several patches across the whole kernel), I have given up on compiling my own kernel module.
I found a workaround that works but that I don't really like (using ext4 for the Docker volumes). Therefore, I will downgrade back to Proxmox 6.4 and hope there will be a better solution next year, before the EOL of Proxmox 6.
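For reference, a sketch of that ext4 workaround (pool name, size, and mount points are just examples): an ext4-formatted zvol mounted over /var/lib/docker inside the container, so overlay2 works even though the pool itself is ZFS.

Code:
# on the Proxmox host
zfs create -V 32G rpool/data/docker-vol
mkfs.ext4 /dev/zvol/rpool/data/docker-vol
mkdir -p /mnt/docker-vol
mount /dev/zvol/rpool/data/docker-vol /mnt/docker-vol

# then bind-mount it into the container, e.g. in /etc/pve/lxc/<vmid>.conf:
# mp0: /mnt/docker-vol,mp=/var/lib/docker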
 

Neuer_User

Member
Jan 5, 2016
P.S.: Did anyone test running a 5.4 kernel (from the Proxmox 6 series) on Proxmox 7.1? Maybe that could also be a viable way until there is a final solution?
 
