Backup/Restore failing

Dietmar / All,

My issue seems to be resolved. The culprit: memory. I was using Mushkin quad-channel memory on an ASRock Extreme4 motherboard. As I started to notice more inconsistency problems when trying to install new VMs, I ran extensive memory tests (Memtest86+). The memory failed at different tests (primarily test 6). So I ordered some quad-channel Kingston memory, tested it with Memtest86+, and all appears to be resolved. I can successfully back up and restore with the LZO format. Thanks, and good luck with your resolutions.
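For anyone who wants a quick in-OS sanity check before rebooting into a full Memtest86+ run, the memtester package can stress-test a chunk of RAM from the running system (the size and pass count below are only examples):
Code:
apt install memtester
# lock and repeatedly test 2 GiB of RAM for 3 passes
memtester 2G 3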
 
So I finally managed to get the backup working; we had to do a BIOS upgrade. Maybe the previous BIOS had some memory issues.
 
Hi All,
I am experiencing this issue, or something similar, with LZO backups on an SMB share for VM backups > 25 GB.
I backed up 6 VMs to the same SMB share.
They all backed up successfully, both in the log and in the GUI.
I also successfully restored one of the smaller VMs before wiping the server and installing PVE 6.
The issue now is that I can only restore the three VM backups of < 25 GB each.
All other VM backups above 25 GB in size fail with a checksum error.
I have reinstalled 5.4, but the same issue persists.
I don't think it's an issue with the SMB share, because I can restore 50% of my VMs from it.
The only thing I can think of is the size difference.
I have also attempted the restore on four different server configurations and copied the backup files (.vma.lzo) locally, but no luck.
The error I've been getting:
Code:
progress 30% (read 322122547200 bytes, duration 65 sec)
progress 31% (read 332859965440 bytes, duration 398 sec)
progress 32% (read 343597383680 bytes, duration 418 sec)
lzop: /var/lib/vz/dump/vzdump-qemu-102-2019_08_04-12_20_52.vma.lzo: Checksum error

** (process:7622): ERROR **: 23:22:25.189: restore failed - short vma extent (1236480 < 3801600)
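For anyone hitting this: the archive itself can be checked independently of the restore with lzop's built-in test mode (path taken from the log above); it exits with status 0 only if all checksums are intact:
Code:
lzop -t /var/lib/vz/dump/vzdump-qemu-102-2019_08_04-12_20_52.vma.lzo && echo "archive OK"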
 
I am experiencing this issue, or something similar, with LZO backups on an SMB share for VM backups > 25 GB.

Next time, please create a new thread and do not resurrect a thread that is older than 6 years!

I don't think it's an issue with the SMB share, because I can restore 50% of my VMs from it.

50% is still a very bad success rate. What about disk issues on the server that is hosting the share?

The issue now is that I can only restore the three VM backups of < 25 GB each.

That is very sad. Do you have older backups?

In general:
A successful backup is one that has been tested, which means a restore or an extraction of the backup file.
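In practice that test can be as simple as decompressing the archive and feeding it to the vma tool (file name from the log above; the target directory is just an example):
Code:
# decompress the LZO archive, keeping the original file
lzop -d -k vzdump-qemu-102-2019_08_04-12_20_52.vma.lzo
# check the VMA container's internal checksums
vma verify vzdump-qemu-102-2019_08_04-12_20_52.vma -v
# or extract it completely without touching any VM storage
vma extract vzdump-qemu-102-2019_08_04-12_20_52.vma /tmp/testextract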
 
Next time, please create a new thread and do not resurrect a thread that is older than 6 years!

50% is still a very bad success rate. What about disk issues on the server that is hosting the share?

That is very sad. Do you have older backups?

In general:
A successful backup is one that has been tested, which means a restore or an extraction of the backup file.

Newbie here. I don't know the rules yet.
I didn't want to create a duplicate, but I will definitely create a new thread next time.

No known disk issues, although it's a Windows server, so it doesn't report problems the way a ZFS pool would.
I just wanted to know that I am not doing anything wrong with the restore process.
These were the only backups that I have.

Yes. I will always test all my backups from now on, and potentially not use any compression.
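For reference, compression can be turned off per backup run; something like the following, where the VMID and storage name are placeholders:
Code:
vzdump 102 --compress 0 --storage backup-smb --mode snapshot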

Thanks,
Ola
 
I just wanted to know that I am not doing anything wrong with the restore process.

Normally it is dead simple, but if you get a checksum error, you generally cannot recover from it.

I have made literally thousands of backups over the years and cannot recall any problems restoring them, so I do not think the problem lies in the code itself. As a test, I restore every backup to a ZFS-based backup server.
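Such a test restore can be done with qmrestore into a spare VMID (the archive path, VMID, and storage below are placeholders):
Code:
# restore into an unused VMID purely as a verification step
qmrestore /mnt/backup/vzdump-qemu-102-2019_08_04-12_20_52.vma.lzo 9999 --storage local-zfs
# remove the test VM again afterwards
qm destroy 9999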

These were the only backups that I have.

That's really a pity.
 
Normally it is dead simple, but if you get a checksum error, you generally cannot recover from it.

I have made literally thousands of backups over the years and cannot recall any problems restoring them, so I do not think the problem lies in the code itself. As a test, I restore every backup to a ZFS-based backup server.

That's really a pity.

I am rebuilding the VMs.
Thanks,
Ola
 
restore vma archive: zcat /media/usb/dump/vzdump-qemu-100-2019_10_24-13_06_34.vma.gz | vma extract -v -r /var/tmp/vzdumptmp3761.fifo - /var/tmp/vzdumptmp3761
CFG: size: 419 name: qemu-server.conf
DEV: dev_id=1 size: 34359738368 devname: drive-sata0
CTIME: Thu Oct 24 13:06:36 2019
new volume ID is 'local-zfs:vm-100-disk-0'
map 'drive-sata0' to '/dev/zvol/rpool/data/vm-100-disk-0' (write zeros = 0)
progress 1% (read 343605248 bytes, duration 21 sec)
progress 2% (read 687210496 bytes, duration 51 sec)
progress 3% (read 1030815744 bytes, duration 90 sec)
progress 4% (read 1374420992 bytes, duration 124 sec)
progress 5% (read 1718026240 bytes, duration 157 sec)
progress 6% (read 2061631488 bytes, duration 192 sec)
progress 7% (read 2405236736 bytes, duration 239 sec)
progress 8% (read 2748841984 bytes, duration 286 sec)
progress 9% (read 3092381696 bytes, duration 329 sec)
progress 10% (read 3435986944 bytes, duration 395 sec)
progress 11% (read 3779592192 bytes, duration 456 sec)
progress 12% (read 4123197440 bytes, duration 504 sec)
progress 13% (read 4466802688 bytes, duration 540 sec)

** (process:3765): ERROR **: 20:19:44.103: restore failed - wrong vma extent header chechsum
/bin/bash: line 1: 3764 Broken pipe zcat /media/usb/dump/vzdump-qemu-100-2019_10_24-13_06_34.vma.gz
3765 Trace/breakpoint trap | vma extract -v -r /var/tmp/vzdumptmp3761.fifo - /var/tmp/vzdumptmp3761
temporary volume 'local-zfs:vm-100-disk-0' sucessfuly removed
no lock found trying to remove 'create' lock
TASK ERROR: command 'set -o pipefail && zcat /media/usb/dump/vzdump-qemu-100-2019_10_24-13_06_34.vma.gz | vma extract -v -r /var/tmp/vzdumptmp3761.fifo - /var/tmp/vzdumptmp3761' failed: exit code 133
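For what it's worth, a gzip archive can be pre-checked the same way; gzip's test mode reads the whole file and validates its CRC without writing anything (path taken from the log above):
Code:
gzip -t /media/usb/dump/vzdump-qemu-100-2019_10_24-13_06_34.vma.gz && echo "archive OK"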

Help: I am currently running Proxmox version 6.04.