docker: failed to register layer: ApplyLayer exit status 1 stdout: stderr: unlinkat /var/log/apt: invalid argument.

ericfrol
Dec 23, 2022
Hello,
Since I updated to Proxmox 7.3-4, I'm getting the following error when pulling some Docker images inside LXC containers.
Unable to find image 'linuxserver/plex:latest' locally
latest: Pulling from linuxserver/plex
274402f9efdb: Pull complete
cbba887b2540: Pull complete
a15ce2a609e0: Pull complete
2f5c2978749a: Extracting [==================================================>] 13.6MB/13.6MB
96464c4b8240: Download complete
a58548592b7a: Download complete
635112d11240: Download complete
docker: failed to register layer: ApplyLayer exit status 1 stdout: stderr: unlinkat /var/log/apt: invalid argument.
See 'docker run --help'.
What should I do? This worked fine before updating.
 
I'm having the exact same problem.

I have Docker installed in a Debian LXC container. I updated Proxmox and rebooted the node, and all of the images and containers in that Docker instance were gone. I've recreated all of them except one, which gives that same error.

docker pull pihole/pihole:latest
latest: Pulling from pihole/pihole
025c56f98b67: Pull complete
09a66d9e4ff9: Pull complete
4f4fb700ef54: Pull complete
e891d03604b7: Pull complete
5b31b5f427ff: Pull complete
56c511b00b04: Pull complete
46f0ee578a6b: Extracting [==================================================>] 29.9MB/29.9MB
0ff0b7d74c26: Download complete
aeb758985c79: Download complete
failed to register layer: ApplyLayer exit status 1 stdout: stderr: unlinkat /var/cache/apt/archives: invalid argument

I've spent hours googling and trying things with no success. I'm close to just blowing away the LXC container and starting again.
 
Good news!
I got it working by using a privileged container, but that’s not a “secure” solution…
 
Hey,
I had the same issue today. My fix works as well!
Switch back to the VFS storage driver:
https://docs.docker.com/storage/storagedriver/vfs-driver/

Stop docker service
Bash:
systemctl stop docker

Create or edit docker daemon config
Bash:
nano /etc/docker/daemon.json

Add storage-driver:
JSON:
{
  "storage-driver": "vfs"
}

And start docker
Bash:
systemctl start docker




Found after some troubleshooting and help with this post:
https://forum.proxmox.com/threads/t...-update-has-really-messed-up-my-boxes.119933/
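One caveat with the nano step: if daemon.json already contains other settings, pasting the snippet over it will discard them. A small sketch that merges the key instead (assumes python3 is available; CONF defaults to a local file here so the sketch is safe to dry-run, point it at /etc/docker/daemon.json for real use):

```shell
#!/bin/sh
# Merge "storage-driver": "vfs" into Docker's daemon.json without
# discarding keys that are already there. Assumes python3.
# CONF defaults to a local file for a safe dry-run; use
# /etc/docker/daemon.json on the real system.
CONF="${CONF:-./daemon.json}"
[ -f "$CONF" ] || echo '{}' > "$CONF"
python3 - "$CONF" <<'PY'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)          # keep whatever is already configured
cfg["storage-driver"] = "vfs"   # only add/override this one key
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
PY
```

Run it between the `systemctl stop docker` and `systemctl start docker` steps above.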
 
Thank you, this fixed it for me as well!
 
Just for completeness' sake: we don't recommend running Docker inside of a container (precisely because it causes issues upon upgrades of kernel, LXC, or storage packages). I would install Docker inside a QEMU VM, as it has fewer interactions with the host system and is known to run far more stably.
 
Do note that by using the VFS driver, your `/var/lib/docker` will start to eat disk space like crazy.

This is true, yet presently I don't know of a workaround. I saw some feedback saying the 6.x kernel fixed the issue; I gave it a try, yet still got the same error on specific containers, for example NGINX Proxy Manager.
 
VFS is just a workaround to test where the issue lies. It is completely unusable for production due to the lack of a union FS (simply put: a kind of layer deduplication). It is described here: How the vfs storage driver works.

When an LXC is created with defaults, it uses the host's filesystem via a bind mount. I.e. for ZFS, Docker detects that the FS is ZFS but cannot use all the magic features due to permissions (unprivileged LXC).
My workaround is to create the LXC's storage on a Proxmox ''Directory'' type storage. Choosing ''Directory'' type storage forces Proxmox to create a .raw file and mount it inside the container via a loop block device with an ext4 filesystem. And Docker works well with ext4.
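For reference, a mount point like that can also be added to an existing CT from the Proxmox host with pct. This is only a sketch; the CT ID (810), the volume size, and the storage name ("dirstore", which must be a Directory-type storage) are placeholders for your own setup:

```shell
# Stop the CT, attach a 49 GB raw volume from a Directory-type storage
# as a mount point at /var/lib/docker, then start the CT again.
# CT ID 810 and storage name "dirstore" are placeholders.
pct shutdown 810
pct set 810 --mp0 dirstore:49,mp=/var/lib/docker,backup=1
pct start 810
```

Proxmox formats the new .raw volume as ext4 by default, which is what lets Docker's overlay2 driver work inside the CT.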

LXC has a lot of advantages over VMs (e.g. RAM allocation). I have a small farm of unprivileged LXCs running the Docker daemon. On each LXC a gitlab-runner Docker container is running and provides a "docker executor" for the GitLab CI system. Everything has worked well for 2 years. Of course it is not possible to execute some tricky things, like running an Ubuntu distro (using systemd-nspawn) during a CI job, but for that I have other dedicated VMs.

Mounts inside LXC with running Docker:
Code:
/nvmpool/runners-dir/images/810/vm-810-disk-1.raw on /var/lib/docker type ext4 (rw,noatime)
And df:
Code:
Filesystem                      Size  Used Avail Use% Mounted on
hddpool/data/subvol-810-disk-0  8,0G  2,6G  5,5G  33% /
/dev/loop0                       49G   16G   32G  33% /var/lib/docker
overlay                          49G   16G   32G  33% /var/lib/docker/overlay2/5bed1ebf26856bbf1f4b6c06a706b2c4d2d4752afadf1043dadd36fd1c196cbc/merged
Docker info:
Code:
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: true
 
Same problem here, just updated from 7.2.7 to 7.3.6.
I have an LXC container dedicated to Frigate, running Docker.
docker-compose gives this error:
"
ERROR: failed to register layer: ApplyLayer exit status 1 stdout: stderr: unlinkat /usr/lib/locale/C.UTF-8/LC_MESSAGES: invalid argument
"
 
I've found a few more things to play around with since the last time I posted in this thread.

1. Use fuse-overlayfs. You need to install the package via apt on the guest first, then alter the storage driver in the daemon config. Shut down the CT and tick the FUSE box in Options. I'm not a huge fan of user-space mods, yet it does seem to work. Verify data is being written to the new /var/lib/docker/fuse-overlayfs directory. When your dockers are back up (you saved your compose files, correct?), you can delete the VFS directory in the same location, and you will see a drastic reduction in space. I haven't measured performance, yet it should be more efficient. The problem I've found here is that backups don't seem to work 100% with this configuration: much of the /var/lib/docker/fuse-overlayfs directory gets skipped. I believe there are ways around this, yet they involve shell scripts and manual invocation of the backups. I prefer to keep it all working in the GUI.

2. Create a mount point on the CT that's formatted as ext4 to hold your Docker files. There are instructions for this in the forum, yet I'm having no luck finding the thread. I never got it to work 100%, and I am unsure what functionality you lose. User niziak posted their solution a few posts back in this thread; it seems like a similar approach and is something to try. In fact, I may try it as my next attempt.

3. Stop using Docker in LXCs. This is what Proxmox proposes, as they say they don't support Docker inside of Linux containers. So you'd either create a VM (probably the best solution) or move Docker to the host (you'll lose pretty much all the backup/cloning in the UI this way). The downside here is that if you're sharing a GPU across LXCs, that will no longer work; you'd need to pass the GPU through to the Docker VM exclusively.

The exception is if you run a datacenter or workstation card. You can pick up some of the older ones rather cheaply on eBay, but then you'd have to purchase the NVIDIA drivers for virtual GPU, or wait until someone makes an open-source solution. There was a hack available for consumer cards, yet I'm not sure it still works; it also still requires purchasing the drivers. You can use the NVIDIA drivers free for 90 days, yet I haven't found a workaround for that time bomb as yet.

So, while I have yet to find the perfect solution, this thread seems to get a lot of views, so I'd invite feedback on what other solutions folks have found to this quandary; it should help the many who land here via Google.
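For option 1 above, the daemon config change mirrors the VFS fix earlier in the thread, just with a different driver value (a sketch; the fuse-overlayfs package must already be installed in the guest and the CT's FUSE feature enabled):

```json
{
  "storage-driver": "fuse-overlayfs"
}
```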
 
I was getting similar errors too, trying to install Frigate in an unprivileged LXC. The LXC is on ZFS; Docker is using overlay2.
Code:
>sudo docker-compose up --detach
[+] Running 12/14
 ⠏ frigate 13 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]   424MB/424MB   Pulling                                                                           59.9s
   ✔ bd159e379b3b Pull complete                                                                                                         4.2s
   ✔ 487eaf932a7c Pull complete                                                                                                        29.4s
   ⠿ 06eb2e1cb05d Extracting      [==================================================>]    424MB/424MB                                 57.0s
   ✔ cd6c397f173b Download complete                                                                                                     4.9s
   ✔ 0ce6904b4918 Download complete                                                                                                     5.9s
   ✔ e2bf4b8c99f4 Download complete                                                                                                     7.9s
   ✔ 1ebe9be94477 Download complete                                                                                                     9.1s
   ✔ f6056943b5e0 Download complete                                                                                                     9.9s
   ✔ 7a2a03e8f2f8 Download complete                                                                                                    10.8s
   ✔ 61a6cbb91f3e Download complete                                                                                                    11.6s
   ✔ 8ebd1adf58b6 Download complete                                                                                                    12.5s
   ✔ e9b2937738f3 Download complete                                                                                                    13.5s
   ✔ f746b6f69bdd Download complete                                                                                                    14.8s
failed to register layer: ApplyLayer exit status 1 stdout:  stderr: unlinkat //wheels: invalid argument

Found this post, which indicates the error seems to be caused by uid/gids being too high in the image Docker is pulling down: https://github.com/nextcloud/all-in-one/discussions/1490#discussioncomment-5383931
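That diagnosis can be checked directly on a saved image. A sketch (assumes GNU tar and the classic `docker save` layout with per-layer layer.tar files; `find_high_ids` is just a name for this snippet) that lists files owned by numeric IDs above the 65535 that an unprivileged CT maps by default:

```shell
#!/bin/sh
# List files inside a saved image's layer tars whose numeric uid or
# gid exceeds 65535, i.e. files that fall outside the default
# unprivileged-CT ID mapping. Assumes GNU tar and the classic
# `docker save` layout (one layer.tar per layer).
find_high_ids() {
    image_tar="$1"
    for layer in $(tar -tf "$image_tar" | grep 'layer\.tar$'); do
        # Stream each inner layer tar to a verbose listing; field 2
        # is "owner/group", $NF is the file name.
        tar -xOf "$image_tar" "$layer" | tar -tvf - 2>/dev/null |
            awk '{ split($2, o, "/");
                   if (o[1] + 0 > 65535 || o[2] + 0 > 65535) print $NF }'
    done
}
```

Usage would look like `docker save pihole/pihole:latest -o image.tar` followed by `find_high_ids image.tar`; any paths printed are the ones the unprivileged mapping cannot represent.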

I ended up just building the docker image locally in the LXC itself and was able to run frigate that way.

Another solution might be to change the number of mapped uids from the default to something really high, but I haven't tested it yet.
 
By default, only user IDs 0 to 65535 are mapped (a range of 65,536 IDs). You can see those values in /etc/subuid and /etc/subgid:

>cat /etc/subuid
root:100000:65536

However, I believe the uid/gids in the problematic Docker images were in the 9-digit range (e.g. https://gitlab.com/Shinobi-Systems/Shinobi/-/issues/452), so I'm not sure whether setting such a large range would work or is even valid.
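For reference, widening the mapping would mean touching both the host's subordinate ID files and the container config. An untested sketch (the count 1000000000 and the CT ID are placeholders, and the host must actually have that many subordinate IDs free):

```
# /etc/subuid and /etc/subgid on the host:
root:100000:1000000000

# /etc/pve/lxc/<CTID>.conf:
lxc.idmap: u 0 100000 1000000000
lxc.idmap: g 0 100000 1000000000
```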
 
Hitting the same issue with a Home Assistant image pull, but now I am a little bit confused.

With docker info I can see that:
Code:
 Storage Driver: overlay2
  Backing Filesystem: zfs

And according to https://github.com/docker/for-linux/issues/1410 and https://github.com/openzfs/zfs/pull/9414 it seems that overlay2 is supported on zfs now?
Yea, home-assistant is problematic with an unprivileged container using zfs, but the lsio home-assistant image works. In fact, I've never had a problem with any lsio image.
 
Apparently it errors out on pull, but once you've pulled (using the ext4 mount approach), you can rsync to a ZFS-backed /var/lib/docker and it works just fine.
 
Although I appreciate the suggestion, that’s just not really an ideal scenario, especially if using something like watchtower to update your containers. At that point, it just seems like one too many hoops to jump through.
 
I wasn't trying to suggest anything; it's just an interesting finding.
 
Thank you very much!! <3
 
