CT backups fail

Kaboom

Hi There!

Some container backups always fail. I also ran an fsck, but that doesn't seem to help.

Here is the error:

INFO: starting new backup job: vzdump 176 --compress zstd --mode snapshot --storage local --node node003 --remove 0
INFO: filesystem type on dumpdir is 'zfs' -using /var/tmp/vzdumptmp2821969 for temporary files
INFO: Starting Backup of VM 176 (lxc)
INFO: Backup started at 2020-05-26 13:26:54
INFO: status = running
INFO: CT Name: ivo
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
/dev/rbd14
INFO: creating archive '/var/lib/vz/dump/vzdump-lxc-176-2020_05_26-13_26_54.tar.zst'
INFO: tar: ./usr/lib/systemd/system/container-getty@.service: Cannot stat: Structure needs cleaning
INFO: Total bytes written: 47884267520 (45GiB, 86MiB/s)
INFO: tar: Exiting with failure status due to previous errors
INFO: remove vzdump snapshot
2020-05-26 13:35:51.157 7f71e1ffb700 -1 librbd::object_map::InvalidateRequest: 0x7f71ec0032b0 should_complete: r=0
Removing snap: 100% complete...done.
ERROR: Backup of VM 176 failed - command 'set -o pipefail && tar cpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' --one-file-system '--warning=no-file-ignored' '--directory=/var/tmp/vzdumptmp2821969' ./etc/vzdump/pct.conf ./etc/vzdump/pct.fw '--directory=/mnt/vzsnap0' --no-anchored '--exclude=lost+found' --anchored '--exclude=./tmp/?*' '--exclude=./var/tmp/?*' '--exclude=./var/run/?*.pid' ./ | zstd --rsyncable '--threads=1' >/var/lib/vz/dump/vzdump-lxc-176-2020_05_26-13_26_54.tar.dat' failed: exit code 2
INFO: Failed at 2020-05-26 13:35:51
INFO: Backup job finished with errors
TASK ERROR: job errors

What does this mean?

Thanks
 
hi,

INFO: tar: ./usr/lib/systemd/system/container-getty@.service: Cannot stat: Structure needs cleaning

it is most likely a filesystem/disk problem

have you shut down the container before trying pct fsck CTID? (fsck gives better results when the container isn't running)
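
for reference, a minimal sketch of that sequence (176 is your CTID, adjust as needed):

pct shutdown 176
pct fsck 176
pct start 176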

if that still doesn't solve the issue, you can temporarily work around it with the exclude-path option in /etc/vzdump.conf (to ignore certain paths during backup)
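
for example, something along these lines in /etc/vzdump.conf should skip the path from your log (adjust it to whichever file(s) actually fail; if I remember right, paths starting with '/' are anchored to the container's root):

exclude-path: /usr/lib/systemd/system/container-getty@.service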

is this the only affected container? can you post the config?
 
Hi Oguz,

Yes, I shut this container down first before running fsck, and I have this problem with more than one container.

Config: Ceph, Proxmox 6.2, all containers running CentOS 7, about 10 nodes with SSDs. Do you need more info?

Maybe it's a coincidence, but it looks like backups tend to fail when the server is busier; that's just a feeling, though.

Thanks
 
Hi,

I have the same problem: zstd --rsyncable --threads=1 failed - wrong exit status 1

It's a single server with

pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

And the KVM dump is made on an NFS share.
 
What happens if you do a 'suspend' backup? In most cases this works for me, but it is not a good solution of course (because the server will be offline for minutes).
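
Roughly something like this on the CLI (using CT 176 and the local storage from my first post as an example):

vzdump 176 --mode suspend --compress zstd --storage local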
 
I just tried to back up the VM with the suspend option activated and got the same error message:
ERROR: Backup of VM 400 failed - zstd --rsyncable --threads=1 failed - wrong exit status 1
 
does the workaround with exclude-path get it working?

is it always the same file(s) which cannot be backed up?
 
Hello,

My problem concerns VMs stored on ZFS, or on XFS with raw disks, on a single server with no cluster. On my cluster + Ceph I have no problem with the new compression. On the two other servers with no cluster, zstd fails.
 
Hello,

Now, with the latest update that includes qemu-server, on the server formatted with XFS the backup file is created on the NAS, but I have a new error message:

Warning: unable to close filehandle GEN1566 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 598.
INFO: stopping kvm after backup task
INFO: archive file size: 62.73GB
INFO: Finished Backup of VM 416 (00:15:26)
INFO: Backup finished at 2020-06-09 16:38:52
INFO: Backup job finished successfully
TASK OK
 
but does the backup work? can you restore it normally?
 
To answer your question: the restore freezes the server, caused by the NFS storage, which I have to force-unmount with umount -l.
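
Roughly what I mean (assuming the share is mounted under the usual /mnt/pve path; <storage-id> is a placeholder for the NFS storage name):

umount -l /mnt/pve/<storage-id>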
 

Attachments

  • Capture d’écran de 2020-06-11 12-10-22.png (329.5 KB)
