Restore of LXC fails

encountered unexpected error during extraction: error at entry "e_spyw.i12": failed to extract file: encountered size mismatch: expected 337336, got 337268

I tried it from 2 different PBS instances; on both, the verified status is green. I also did this twice now, creating the same Docker container inside it, and both restores fail. The only things they have in common are the IP and the name, which is axigen.

The strange thing is that this LXC was copied (with the unique button ticked) from another LXC that I can restore without problems, so the only differences between those 2 LXCs are the static IP that is used and the container that runs inside it.

Any help appreciated.
 


Hi,
encountered unexpected error during extraction: error at entry "e_spyw.i12": failed to extract file: encountered size mismatch: expected 337336, got 337268
this indicates that the metadata archive and the payload archive disagree on the file's payload size. So the archives are not consistent, and the restore therefore fails. This is, however, an issue on the pxar level, not the chunk level, so verification on the PBS cannot detect it. The question is how it got corrupted to begin with. Could you post the task log for the corresponding backup task?

What about the previous backup snapshot which is used as reference for the change detection mode metadata? Does that one restore? Also, what is the content you are backing up here? e_spyw.i12 is related to Windows Bitdefender, according to a quick online search.
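
If you want to test that from the CLI, something along these lines should work; the storage name, CT IDs, and snapshot timestamp below are only placeholders for illustration:

# list the snapshots PBS holds for this container
pvesm list PBS1 --vmid 116

# restore the previous (reference) snapshot to a new, unused CT ID for testing
pct restore 999 PBS1:backup/ct/116/2024-12-01T00:00:00Z --storage local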

I also did this twice now, creating the same Docker container inside it, and both restores fail.
What do you mean here exactly? Could you elaborate on which Docker container you mean and how it is related to the LXC?
 
Hi Chris, I run Docker on top of the LXC container.

So I have a master LXC container which I duplicate (with the unique button ticked) and then change the name, the IP, and the docker-compose file to run the container I want to run.

From that master I deployed the same docker-compose file twice, and both times the restore fails. For another application where I used the master, it works just fine.
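
Roughly what I do, expressed as CLI commands (the IDs, bridge, and IP are only placeholders; in reality I do this via the GUI):

# full clone of the master container to a new ID with a new hostname
pct clone 100 120 --full --hostname axigen

# give the clone its own static IP
pct set 120 --net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1

# inside the container, swap in the application's docker-compose.yml and start it
pct exec 120 -- docker compose -f /opt/app/docker-compose.yml up -d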

But OK, thank you for the hint; I think now I understand:

What about the previous backup snapshot which is used as reference for the change detection mode metadata? Does that one restore? Also, what is the content you are backing up here? e_spyw.i12 is related to Windows Bitdefender, according to a quick online search.

Axigen is a Linux mail solution that uses Bitdefender to scan mails, so it seems that Bitdefender (which runs inside the app, which runs inside the Docker container) breaks the backup?

That would explain why it consistently fails, as I deploy the same application. Is it a known problem that an AV solution can't be run inside an LXC?

About the backup log - regarding "Could you post the task log for the corresponding backup task": that backup runs inside a backup job for all VMs and LXCs; how can I retrieve the log when I do an individual backup?
 
So I have a master LXC container which I duplicate (with the unique button ticked) and then change the name, the IP, and the docker-compose file to run the container I want to run.
We do recommend against such setups (Docker inside LXC), as they are rather error prone. Nevertheless, this does not explain why your archive got corrupted.

From that master I deployed the same docker-compose file twice, and both times the restore fails. For another application where I used the master, it works just fine.
This I do not understand: how is restoring the LXC on PVE related to running docker-compose? Or do you mean you clone the container, then run docker compose, then back up the LXC, and then the restore fails?

Please provide the exact steps you performed so that we can try to reproduce this issue, thanks!

Please provide the PVE backup task log with which the backup snapshot was created (you can select the task and click on Download to get it), and also try restoring the snapshot in the same group just before the one that fails. Does that fail as well? Was this backup group empty before, or did it already contain backup snapshots?
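
If you prefer the CLI, you can also fetch task logs there; this is just a sketch, the UPID is a placeholder you would copy from the task list:

# list recent tasks on the node that ran the backup
pvenode task list

# print the full log of the vzdump task in question
pvenode task log 'UPID:pve6:...'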

Also, please provide the output of pveversion -v.

Axigen is a Linux mail solution that uses Bitdefender to scan mails, so it seems that Bitdefender (which runs inside the app, which runs inside the Docker container) breaks the backup?

That would explain why it consistently fails, as I deploy the same application. Is it a known problem that an AV solution can't be run inside an LXC?
So far there have been no reports regarding this that I'm aware of.
 
"We do recommend against such setups (Docker inside LXC), as they are rather error prone. Nevertheless, this does not explain why your archive got corrupted. " --> you mean here you do rather !not! recommend? well generally i am on a 6 node cluster with 20 LXC Containers and so far really havent had any problem, running productive since Feb24 coming from 8.1. now on 8.3. But i get your general point, thinking of k8s cluster now but not sure if is worth the extra complexity.

-->coming back to the backup problem, so i suspected here the new version 8.3 / 3.3 (or new metadata backup) but also on the other lxcs restores works fine so with your hint i really suspect to be a very specific problem in conjunction with bitdefender inside the app
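
As a test for that suspicion, one could back up just this CT once with the old change detection behaviour; this is only a sketch, the storage name and VMID are placeholders, and I have not verified that it avoids the problem:

# one-off backup of the CT to PBS using the old (legacy) change detection mode
vzdump 120 --storage PBS1 --pbs-change-detection-mode legacy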

The log of the dump is attached.

root@pve6:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-4-pve)
pve-manager: 8.3.0 (running version: 8.3.0/c1689ccb1065a83b)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-4
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.8.12-2-pve-signed: 6.8.12-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph: 19.2.0-pve2
ceph-fuse: 19.2.0-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.0
libpve-storage-perl: 8.2.9
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.2.9-1
proxmox-backup-file-restore: 3.2.9-1
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.1
pve-cluster: 8.0.10
pve-container: 5.2.2
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-1
pve-ha-manager: 4.0.6
pve-i18n: 3.3.1
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.0
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1
 


For comparison I disabled the Bitdefender service inside the app and made a backup - here is the log. I also tried a restore, and now it works.



Task viewer: CT 120 - Restore

recovering backed-up configuration from 'PBS1:backup/ct/116/2024-12-08T17:12:33Z'
/dev/rbd13
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: 6360fe0b-4658-4632-be2c-7dccbf6e4d6f
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
restoring 'PBS1:backup/ct/116/2024-12-08T17:12:33Z' now..
merging backed-up and given configuration..
TASK OK
 


Cool - thank you very much for the fast help - much appreciated. Proxmox is great :) I am glad I did the switch from vSphere.

And business from Austria to Austria/EU is not bad either... it doesn't need to be US software all the time!
 
proxmox-backup-client in version 3.3.2-1 includes the patch and is available on the no-subscription repo at the time of writing. Please check if this fixes your issue. To be on the safe side, make sure to prune any backup snapshot which fails a restore so that you do not use such a previous snapshot as reference.
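
For reference, updating the client on the PVE node and removing a broken snapshot could look roughly like this; the repository and snapshot path are placeholders, adjust them to the snapshot that actually fails:

# pull in the fixed client from the configured repositories
apt update
apt install proxmox-backup-client proxmox-backup-file-restore

# check the installed version
proxmox-backup-client version

# forget the corrupted snapshot (alternatively use Forget/Remove in the GUI)
proxmox-backup-client snapshot forget ct/120/2024-12-01T00:00:00Z --repository user@pbs@pbs-host:datastore
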
 
"We do recommend against such setups (Docker inside LXC), as they are rather error prone. Nevertheless, this does not explain why your archive got corrupted. " --> you mean here you do rather !not! recommend? well generally i am on a 6 node cluster with 20 LXC Containers and so far really havent had any problem, running productive since Feb24 coming from 8.1. now on 8.3. But i get your general point, thinking of k8s cluster now but not sure if is worth the extra complexity.

This subject has been discussed several times; the trouble with Docker inside containers is that such setups are more likely to have problems (like breaking after an update) compared to Docker inside a VM. You don't need k8s for that; any Linux VM with Docker or Podman and maybe a management interface like Portainer would do. One example from the German section:
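
Just to illustrate the Docker-in-a-VM route (independent of the linked example): Portainer can be started with a single docker command inside such a VM; the ports and volumes below follow the upstream defaults, but treat this as a sketch:

# run Portainer CE inside a plain Docker VM, web UI on https://<vm-ip>:9443
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest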
 
