error 133 failed backup - unreliable backups?

lew-adv-sol

I am running into a restoration issue that I have not seen a full fix for, other than manual restoration every time.

Code:
no lock found trying to remove 'create' lock
error before or during data restore, some or all disks were not completely restored. VM 102 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/SYN1/dump/vzdump-qemu-104-2025_05_06-14_25_56.vma.zst | vma extract -v -r /var/tmp/vzdumptmp27230.fifo - /var/tmp/vzdumptmp27230' failed: exit code 133

This is a fresh install: 3 nodes, with 5 VMs backed up to a Synology (all pretty fresh, so thankfully no major loss) along with 2 templates. Once I was back up and running, I attempted to restore the templates with no issue; the VMs refused with the error shown above. I have ruled out the nodes and the Synology, which seems to leave the backup process as the culprit. What can I possibly do? The idea of unreliable backups is a deal breaker, and that makes me sad since I really wanted to work with Proxmox.

Apologies for the short post; I'm short on time. I will check in to answer any follow-up questions as soon as I can. Thank you.
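Edit: for anyone hitting the same error, zstd can test an archive in place without restoring anything; a minimal check, using the archive path from the task error above:

Code:
# test the compressed stream only -- no restore, no temporary disks;
# a clean archive exits 0, a damaged one reports the same
# "Data corruption detected" error seen during restore
zstd -t /mnt/pve/SYN1/dump/vzdump-qemu-104-2025_05_06-14_25_56.vma.zst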
 
Please post the full restore log (and ideally, also the corresponding backup task log).
 
OK, I will. I will also centralize my replies here since I see that you're active on my other posts. Thank you.
I am on my way to work and will send them once I have a moment.
 
Backup task log:
Code:
2025-05-06 14:25:56 INFO: Starting Backup of VM 104 (qemu)
2025-05-06 14:25:56 INFO: status = stopped
2025-05-06 14:25:56 INFO: backup mode: stop
2025-05-06 14:25:56 INFO: ionice priority: 7
2025-05-06 14:25:56 INFO: VM Name: NTL-PC
2025-05-06 14:25:56 INFO: include disk 'scsi0' 'local-lvm:vm-104-disk-1' 100G
2025-05-06 14:25:56 INFO: include disk 'efidisk0' 'local-lvm:vm-104-disk-0' 4M
2025-05-06 14:25:56 INFO: include disk 'tpmstate0' 'local-lvm:vm-104-disk-2' 4M
2025-05-06 14:25:56 INFO: snapshots found (not included into backup)
2025-05-06 14:25:56 INFO: creating vzdump archive '/mnt/pve/SYN1/dump/vzdump-qemu-104-2025_05_06-14_25_56.vma.zst'
2025-05-06 14:25:56 INFO: starting kvm to execute backup task
2025-05-06 14:25:58 INFO: attaching TPM drive to QEMU for backup
2025-05-06 14:25:58 INFO: started backup task 'e5a8328b-6867-495a-9030-1fccec765fba'
2025-05-06 14:26:01 INFO:   0% (554.8 MiB of 100.0 GiB) in 3s, read: 184.9 MiB/s, write: 162.4 MiB/s
2025-05-06 14:26:04 INFO:   1% (1.2 GiB of 100.0 GiB) in 6s, read: 220.8 MiB/s, write: 220.7 MiB/s
2025-05-06 14:26:11 INFO:   2% (2.1 GiB of 100.0 GiB) in 13s, read: 135.3 MiB/s, write: 135.3 MiB/s
2025-05-06 14:26:16 INFO:   3% (3.0 GiB of 100.0 GiB) in 18s, read: 184.2 MiB/s, write: 184.2 MiB/s
2025-05-06 14:26:23 INFO:   4% (4.1 GiB of 100.0 GiB) in 25s, read: 158.1 MiB/s, write: 158.1 MiB/s
2025-05-06 14:26:30 INFO:   5% (5.1 GiB of 100.0 GiB) in 32s, read: 152.3 MiB/s, write: 152.3 MiB/s
2025-05-06 14:26:36 INFO:   6% (6.0 GiB of 100.0 GiB) in 38s, read: 151.2 MiB/s, write: 151.2 MiB/s
2025-05-06 14:26:43 INFO:   7% (7.1 GiB of 100.0 GiB) in 45s, read: 161.1 MiB/s, write: 161.1 MiB/s
2025-05-06 14:26:49 INFO:   8% (8.0 GiB of 100.0 GiB) in 51s, read: 152.3 MiB/s, write: 152.3 MiB/s
2025-05-06 14:26:56 INFO:   9% (9.1 GiB of 100.0 GiB) in 58s, read: 153.0 MiB/s, write: 153.0 MiB/s
2025-05-06 14:27:03 INFO:  10% (10.2 GiB of 100.0 GiB) in 1m 5s, read: 171.7 MiB/s, write: 171.7 MiB/s
2025-05-06 14:27:09 INFO:  11% (11.2 GiB of 100.0 GiB) in 1m 11s, read: 158.7 MiB/s, write: 158.7 MiB/s
2025-05-06 14:27:15 INFO:  12% (12.0 GiB of 100.0 GiB) in 1m 17s, read: 150.5 MiB/s, write: 150.5 MiB/s
2025-05-06 14:27:21 INFO:  13% (13.1 GiB of 100.0 GiB) in 1m 23s, read: 179.5 MiB/s, write: 179.5 MiB/s
2025-05-06 14:27:27 INFO:  14% (14.0 GiB of 100.0 GiB) in 1m 29s, read: 157.7 MiB/s, write: 157.7 MiB/s
2025-05-06 14:27:33 INFO:  15% (15.1 GiB of 100.0 GiB) in 1m 35s, read: 189.2 MiB/s, write: 189.2 MiB/s
2025-05-06 14:27:37 INFO:  16% (16.3 GiB of 100.0 GiB) in 1m 39s, read: 293.7 MiB/s, write: 293.7 MiB/s
2025-05-06 14:27:40 INFO:  17% (17.2 GiB of 100.0 GiB) in 1m 42s, read: 313.1 MiB/s, write: 313.1 MiB/s
2025-05-06 14:27:43 INFO:  18% (18.0 GiB of 100.0 GiB) in 1m 45s, read: 275.7 MiB/s, write: 275.7 MiB/s
2025-05-06 14:27:49 INFO:  19% (19.2 GiB of 100.0 GiB) in 1m 51s, read: 209.5 MiB/s, write: 209.5 MiB/s
2025-05-06 14:27:53 INFO:  20% (20.1 GiB of 100.0 GiB) in 1m 55s, read: 220.8 MiB/s, write: 214.6 MiB/s
2025-05-06 14:27:59 INFO:  21% (21.1 GiB of 100.0 GiB) in 2m 1s, read: 180.2 MiB/s, write: 180.1 MiB/s
2025-05-06 14:28:03 INFO:  22% (22.1 GiB of 100.0 GiB) in 2m 5s, read: 239.2 MiB/s, write: 239.2 MiB/s
2025-05-06 14:28:08 INFO:  23% (23.1 GiB of 100.0 GiB) in 2m 10s, read: 209.6 MiB/s, write: 189.5 MiB/s
2025-05-06 14:28:13 INFO:  24% (24.1 GiB of 100.0 GiB) in 2m 15s, read: 210.4 MiB/s, write: 190.8 MiB/s
2025-05-06 14:28:18 INFO:  25% (25.3 GiB of 100.0 GiB) in 2m 20s, read: 238.4 MiB/s, write: 159.8 MiB/s
2025-05-06 14:28:21 INFO:  26% (26.2 GiB of 100.0 GiB) in 2m 23s, read: 291.7 MiB/s, write: 157.2 MiB/s
2025-05-06 14:28:24 INFO:  27% (27.0 GiB of 100.0 GiB) in 2m 26s, read: 291.7 MiB/s, write: 184.0 MiB/s
2025-05-06 14:28:29 INFO:  28% (28.0 GiB of 100.0 GiB) in 2m 31s, read: 209.0 MiB/s, write: 179.3 MiB/s
2025-05-06 14:28:35 INFO:  29% (29.1 GiB of 100.0 GiB) in 2m 37s, read: 176.3 MiB/s, write: 151.5 MiB/s
2025-05-06 14:28:39 INFO:  30% (30.1 GiB of 100.0 GiB) in 2m 41s, read: 277.0 MiB/s, write: 190.5 MiB/s
2025-05-06 14:28:43 INFO:  31% (31.0 GiB of 100.0 GiB) in 2m 45s, read: 221.8 MiB/s, write: 216.2 MiB/s
2025-05-06 14:28:46 INFO:  38% (38.8 GiB of 100.0 GiB) in 2m 48s, read: 2.6 GiB/s, write: 21.3 MiB/s
2025-05-06 14:28:49 INFO:  46% (46.5 GiB of 100.0 GiB) in 2m 51s, read: 2.6 GiB/s, write: 0 B/s
2025-05-06 14:28:52 INFO:  54% (54.4 GiB of 100.0 GiB) in 2m 54s, read: 2.6 GiB/s, write: 0 B/s
2025-05-06 14:28:55 INFO:  62% (62.1 GiB of 100.0 GiB) in 2m 57s, read: 2.6 GiB/s, write: 0 B/s
2025-05-06 14:28:58 INFO:  69% (69.9 GiB of 100.0 GiB) in 3m, read: 2.6 GiB/s, write: 0 B/s
2025-05-06 14:29:01 INFO:  77% (77.8 GiB of 100.0 GiB) in 3m 3s, read: 2.6 GiB/s, write: 0 B/s
2025-05-06 14:29:04 INFO:  85% (85.6 GiB of 100.0 GiB) in 3m 6s, read: 2.6 GiB/s, write: 0 B/s
2025-05-06 14:29:07 INFO:  93% (93.6 GiB of 100.0 GiB) in 3m 9s, read: 2.7 GiB/s, write: 0 B/s
2025-05-06 14:29:10 INFO:  99% (99.5 GiB of 100.0 GiB) in 3m 12s, read: 2.0 GiB/s, write: 57.9 MiB/s
2025-05-06 14:29:12 INFO: 100% (100.0 GiB of 100.0 GiB) in 3m 14s, read: 235.8 MiB/s, write: 195.2 MiB/s
2025-05-06 14:29:12 INFO: backup is sparse: 70.41 GiB (70%) total zero data
2025-05-06 14:29:12 INFO: transferred 100.00 GiB in 194 seconds (527.9 MiB/s)
2025-05-06 14:31:07 INFO: stopping kvm after backup task
2025-05-06 14:31:11 INFO: archive file size: 29.54GB
2025-05-06 14:31:11 INFO: adding notes to backup
2025-05-06 14:31:12 INFO: Finished Backup of VM 104 (00:05:16)


Restore task log:
Code:
restore vma archive: zstd -q -d -c /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_06-14_33_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp24747.fifo - /var/tmp/vzdumptmp24747
CFG: size: 1189 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 137438953472 devname: drive-scsi0
DEV: dev_id=3 size: 274877906944 devname: drive-scsi1
DEV: dev_id=4 size: 4194304 devname: drive-tpmstate0-backup
CTIME: Tue May  6 14:33:41 2025
  Rounding up size to full physical extent 4.00 MiB
  Logical volume "vm-104-disk-0" created.
new volume ID is 'local-lvm:vm-104-disk-0'
  Logical volume "vm-104-disk-1" created.
new volume ID is 'local-lvm:vm-104-disk-1'
  Logical volume "vm-104-disk-2" created.
new volume ID is 'local-lvm:vm-104-disk-2'
  Logical volume "vm-104-disk-3" created.
new volume ID is 'local-lvm:vm-104-disk-3'
map 'drive-efidisk0' to '/dev/pve/vm-104-disk-0' (write zeros = 0)
map 'drive-scsi0' to '/dev/pve/vm-104-disk-1' (write zeros = 0)
map 'drive-scsi1' to '/dev/pve/vm-104-disk-2' (write zeros = 0)
map 'drive-tpmstate0-backup' to '/dev/pve/vm-104-disk-3' (write zeros = 0)
progress 1% (read 4123262976 bytes, duration 0 sec)
progress 2% (read 8246460416 bytes, duration 1 sec)
progress 3% (read 12369657856 bytes, duration 1 sec)
progress 4% (read 16492920832 bytes, duration 1 sec)
progress 5% (read 20616118272 bytes, duration 1 sec)
progress 6% (read 24739315712 bytes, duration 1 sec)
progress 7% (read 28862513152 bytes, duration 2 sec)
progress 8% (read 32985776128 bytes, duration 2 sec)
progress 9% (read 37108973568 bytes, duration 2 sec)
progress 10% (read 41232171008 bytes, duration 2 sec)
progress 11% (read 45355433984 bytes, duration 3 sec)
progress 12% (read 49478631424 bytes, duration 3 sec)
progress 13% (read 53601828864 bytes, duration 3 sec)
progress 14% (read 57725026304 bytes, duration 3 sec)
progress 15% (read 61848289280 bytes, duration 4 sec)
progress 16% (read 65971486720 bytes, duration 4 sec)
progress 17% (read 70094684160 bytes, duration 4 sec)
progress 18% (read 74217947136 bytes, duration 4 sec)
progress 19% (read 78341144576 bytes, duration 5 sec)
progress 20% (read 82464342016 bytes, duration 5 sec)
progress 21% (read 86587539456 bytes, duration 5 sec)
progress 22% (read 90710802432 bytes, duration 5 sec)
progress 23% (read 94833999872 bytes, duration 5 sec)
progress 24% (read 98957197312 bytes, duration 6 sec)
progress 25% (read 103080394752 bytes, duration 6 sec)
progress 26% (read 107203657728 bytes, duration 6 sec)
progress 27% (read 111326855168 bytes, duration 6 sec)
progress 28% (read 115450052608 bytes, duration 7 sec)
progress 29% (read 119573315584 bytes, duration 7 sec)
progress 30% (read 123696513024 bytes, duration 7 sec)
progress 31% (read 127819710464 bytes, duration 7 sec)
progress 32% (read 131942907904 bytes, duration 8 sec)
progress 33% (read 136066170880 bytes, duration 8 sec)
progress 34% (read 140189368320 bytes, duration 8 sec)
progress 35% (read 144312565760 bytes, duration 8 sec)
progress 36% (read 148435828736 bytes, duration 8 sec)
progress 37% (read 152559026176 bytes, duration 9 sec)
progress 38% (read 156682223616 bytes, duration 9 sec)
progress 39% (read 160805421056 bytes, duration 9 sec)
progress 40% (read 164928684032 bytes, duration 9 sec)
progress 41% (read 169051881472 bytes, duration 10 sec)
progress 42% (read 173175078912 bytes, duration 10 sec)
progress 43% (read 177298341888 bytes, duration 10 sec)
progress 44% (read 181421539328 bytes, duration 10 sec)
progress 45% (read 185544736768 bytes, duration 11 sec)
progress 46% (read 189667934208 bytes, duration 11 sec)
progress 47% (read 193791197184 bytes, duration 11 sec)
progress 48% (read 197914394624 bytes, duration 11 sec)
progress 49% (read 202037592064 bytes, duration 12 sec)
progress 50% (read 206160789504 bytes, duration 12 sec)
progress 51% (read 210284052480 bytes, duration 12 sec)
progress 52% (read 214407249920 bytes, duration 12 sec)
progress 53% (read 218530447360 bytes, duration 12 sec)
progress 54% (read 222653710336 bytes, duration 13 sec)
progress 55% (read 226776907776 bytes, duration 13 sec)
progress 56% (read 230900105216 bytes, duration 13 sec)
progress 57% (read 235023302656 bytes, duration 13 sec)
progress 58% (read 239146565632 bytes, duration 14 sec)
progress 59% (read 243269763072 bytes, duration 14 sec)
progress 60% (read 247392960512 bytes, duration 14 sec)
progress 61% (read 251516223488 bytes, duration 14 sec)
progress 62% (read 255639420928 bytes, duration 15 sec)
progress 63% (read 259762618368 bytes, duration 15 sec)
progress 64% (read 263885815808 bytes, duration 15 sec)
progress 65% (read 268009078784 bytes, duration 15 sec)
progress 66% (read 272132276224 bytes, duration 15 sec)
progress 67% (read 276255473664 bytes, duration 64 sec)
progress 68% (read 280378736640 bytes, duration 158 sec)
progress 69% (read 284501934080 bytes, duration 225 sec)
_06-14_33_40.vma.zst : Decoding error (36) : Data corruption detected 
vma: restore failed - short vma extent (2201600 < 3801600)
/bin/bash: line 1: 24784 Exit 1                  zstd -q -d -c /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_06-14_33_40.vma.zst
     24785 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp24747.fifo - /var/tmp/vzdumptmp24747
  Logical volume "vm-104-disk-2" successfully removed.
temporary volume 'local-lvm:vm-104-disk-2' successfully removed
  Logical volume "vm-104-disk-0" successfully removed.
temporary volume 'local-lvm:vm-104-disk-0' successfully removed
  Logical volume "vm-104-disk-3" successfully removed.
temporary volume 'local-lvm:vm-104-disk-3' successfully removed
  Logical volume "vm-104-disk-1" successfully removed.
temporary volume 'local-lvm:vm-104-disk-1' successfully removed
no lock found trying to remove 'create'  lock
error before or during data restore, some or all disks were not completely restored. VM 104 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_06-14_33_40.vma.zst | vma extract -v -r /var/tmp/vzdumptmp24747.fifo - /var/tmp/vzdumptmp24747' failed: exit code 133


pveversion -v:
Code:
root@zulu:~# pveversion -v
proxmox-ve: 8.4.0 (running kernel: 6.8.12-10-pve)
pve-manager: 8.4.1 (running version: 8.4.1/2a5fa54a8503f96d)
proxmox-kernel-helper: 8.1.1
proxmox-kernel-6.8.12-10-pve-signed: 6.8.12-10
proxmox-kernel-6.8: 6.8.12-10
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
ceph-fuse: 17.2.8-pve2
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
frr-pythontools: 10.2.2-1+pve1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.0
libpve-cluster-perl: 8.1.0
libpve-common-perl: 8.3.1
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.6
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.4.1-1
proxmox-backup-file-restore: 3.4.1-1
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.1
proxmox-mail-forward: 0.3.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.10
pve-cluster: 8.1.0
pve-container: 5.2.6
pve-docs: 8.4.0
pve-edk2-firmware: 4.2025.02-3
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.1
pve-firmware: 3.15-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.2
pve-qemu-kvm: 9.2.0-5
pve-xtermjs: 5.5.0-2
qemu-server: 8.3.12
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve2

I see that the checksum failed, but how is it that all 5 VM backups are corrupt while the templates are fine and all my other files are good? The storage is still showing good health and is in use for other purposes. Is there any other way to check this further? I will test some backups and restores this evening.
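One way I plan to check this further (a sketch; /var/tmp is just an example local destination with enough free space) is to copy an archive off the share and test both copies, which separates a bad file from a bad read path:

Code:
# test in place over SMB
zstd -t /mnt/pve/SYN1/dump/vzdump-qemu-104-2025_05_06-14_25_56.vma.zst

# copy to local disk and test again; if the local copy passes while the
# in-place test fails, the SMB read path is suspect rather than the file
cp /mnt/pve/SYN1/dump/vzdump-qemu-104-2025_05_06-14_25_56.vma.zst /var/tmp/
zstd -t /var/tmp/vzdump-qemu-104-2025_05_06-14_25_56.vma.zst

# repeated reads hashing to different values would point at flaky transport
sha256sum /mnt/pve/SYN1/dump/vzdump-qemu-104-2025_05_06-14_25_56.vma.zst
sha256sum /mnt/pve/SYN1/dump/vzdump-qemu-104-2025_05_06-14_25_56.vma.zst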
 
The backstory might be important. I originally had only the two servers, both T630s with 512 GB RAM and 8x 1 TB HDDs in RAID 60; one has 2x 2687 CPUs, the other 2x 2698. I decided to change my RAID level and switched to RAID 10, backing up to an external drive attached to the Synology in the meantime. No compression anywhere along the way. The only thing that changed was the RAID type.
 
I just created a fresh VM and did a backup/restore with the same outcome. The decoding/checksum step seems to be what is failing. I am not sure what would cause it; I am researching it and looking forward to the forum's input on the matter. Thanks.

I will be running hardware tests this evening to ensure no memory issues are present, per this post.
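For the memory side, a quick in-OS check is possible with the memtester package (a sketch; 4G is an arbitrary test size, and booting memtest86+ is the more thorough option since it can also test memory the running system occupies):

Code:
# userspace RAM test: locks 4 GiB and runs one pass of test patterns
apt install memtester
memtester 4G 1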
@fiona, this was the post referencing the manual restoration.

I am attempting gzip compression to see if that works... Looks like I will find out in two days at this rate.

The outcome of the attempted gzip restore: failed.
Code:
restore vma archive: zcat /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_13-15_22_34.vma.gz | vma extract -v -r /var/tmp/vzdumptmp106510.fifo - /var/tmp/vzdumptmp106510
CFG: size: 923 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 107374182400 devname: drive-scsi0
DEV: dev_id=3 size: 4194304 devname: drive-tpmstate0-backup
CTIME: Tue May 13 15:22:47 2025
  Rounding up size to full physical extent 4.00 MiB
  Logical volume "vm-103-disk-0" created.
new volume ID is 'local-lvm:vm-103-disk-0'
  Logical volume "vm-103-disk-1" created.
new volume ID is 'local-lvm:vm-103-disk-1'
  Logical volume "vm-103-disk-2" created.
new volume ID is 'local-lvm:vm-103-disk-2'
map 'drive-efidisk0' to '/dev/pve/vm-103-disk-0' (write zeros = 0)
map 'drive-scsi0' to '/dev/pve/vm-103-disk-1' (write zeros = 0)
map 'drive-tpmstate0-backup' to '/dev/pve/vm-103-disk-2' (write zeros = 0)
progress 1% (read 1073807360 bytes, duration 10 sec)
progress 2% (read 2147614720 bytes, duration 21 sec)
progress 3% (read 3221422080 bytes, duration 31 sec)
progress 4% (read 4295163904 bytes, duration 41 sec)
progress 5% (read 5368971264 bytes, duration 52 sec)
progress 6% (read 6442778624 bytes, duration 62 sec)
progress 7% (read 7516585984 bytes, duration 72 sec)
progress 8% (read 8590327808 bytes, duration 83 sec)
progress 9% (read 9664135168 bytes, duration 94 sec)
progress 10% (read 10737942528 bytes, duration 104 sec)
progress 11% (read 11811684352 bytes, duration 114 sec)
progress 12% (read 12885491712 bytes, duration 124 sec)
progress 13% (read 13959299072 bytes, duration 135 sec)
progress 14% (read 15033106432 bytes, duration 145 sec)
progress 15% (read 16106848256 bytes, duration 156 sec)
progress 16% (read 17180655616 bytes, duration 167 sec)
progress 17% (read 18254462976 bytes, duration 177 sec)
progress 18% (read 19328204800 bytes, duration 188 sec)
progress 19% (read 20402012160 bytes, duration 198 sec)
progress 20% (read 21475819520 bytes, duration 206 sec)
progress 21% (read 22549626880 bytes, duration 206 sec)
progress 22% (read 23623368704 bytes, duration 206 sec)
progress 23% (read 24697176064 bytes, duration 206 sec)
progress 24% (read 25770983424 bytes, duration 206 sec)
progress 25% (read 26844725248 bytes, duration 206 sec)
progress 26% (read 27918532608 bytes, duration 206 sec)
progress 27% (read 28992339968 bytes, duration 207 sec)
progress 28% (read 30066147328 bytes, duration 207 sec)
progress 29% (read 31139889152 bytes, duration 207 sec)
progress 30% (read 32213696512 bytes, duration 207 sec)
progress 31% (read 33287503872 bytes, duration 207 sec)
progress 32% (read 34361311232 bytes, duration 207 sec)
progress 33% (read 35435053056 bytes, duration 207 sec)
progress 34% (read 36508860416 bytes, duration 207 sec)
progress 35% (read 37582667776 bytes, duration 207 sec)
progress 36% (read 38656409600 bytes, duration 207 sec)
progress 37% (read 39730216960 bytes, duration 207 sec)
progress 38% (read 40804024320 bytes, duration 207 sec)
progress 39% (read 41877831680 bytes, duration 207 sec)
progress 40% (read 42951573504 bytes, duration 208 sec)
progress 41% (read 44025380864 bytes, duration 208 sec)
progress 42% (read 45099188224 bytes, duration 208 sec)
progress 43% (read 46172930048 bytes, duration 208 sec)
progress 44% (read 47246737408 bytes, duration 208 sec)
progress 45% (read 48320544768 bytes, duration 208 sec)
progress 46% (read 49394352128 bytes, duration 208 sec)
progress 47% (read 50468093952 bytes, duration 208 sec)
progress 48% (read 51541901312 bytes, duration 208 sec)
progress 49% (read 52615708672 bytes, duration 208 sec)
progress 50% (read 53689450496 bytes, duration 208 sec)
progress 51% (read 54763257856 bytes, duration 208 sec)
progress 52% (read 55837065216 bytes, duration 208 sec)
progress 53% (read 56910872576 bytes, duration 208 sec)
progress 54% (read 57984614400 bytes, duration 208 sec)
progress 55% (read 59058421760 bytes, duration 208 sec)
progress 56% (read 60132229120 bytes, duration 208 sec)
progress 57% (read 61206036480 bytes, duration 209 sec)
progress 58% (read 62279778304 bytes, duration 209 sec)
progress 59% (read 63353585664 bytes, duration 209 sec)
progress 60% (read 64427393024 bytes, duration 209 sec)
progress 61% (read 65501134848 bytes, duration 209 sec)
progress 62% (read 66574942208 bytes, duration 209 sec)
progress 63% (read 67648749568 bytes, duration 209 sec)
progress 64% (read 68722556928 bytes, duration 209 sec)
progress 65% (read 69796298752 bytes, duration 209 sec)
progress 66% (read 70870106112 bytes, duration 209 sec)
progress 67% (read 71943913472 bytes, duration 209 sec)
progress 68% (read 73017655296 bytes, duration 209 sec)
progress 69% (read 74091462656 bytes, duration 209 sec)
progress 70% (read 75165270016 bytes, duration 209 sec)
progress 71% (read 76239077376 bytes, duration 209 sec)
progress 72% (read 77312819200 bytes, duration 209 sec)
progress 73% (read 78386626560 bytes, duration 209 sec)
progress 74% (read 79460433920 bytes, duration 209 sec)
progress 75% (read 80534175744 bytes, duration 209 sec)
progress 76% (read 81607983104 bytes, duration 209 sec)
progress 77% (read 82681790464 bytes, duration 210 sec)
progress 78% (read 83755597824 bytes, duration 210 sec)
progress 79% (read 84829339648 bytes, duration 210 sec)
progress 80% (read 85903147008 bytes, duration 210 sec)
progress 81% (read 86976954368 bytes, duration 210 sec)
progress 82% (read 88050761728 bytes, duration 210 sec)
progress 83% (read 89124503552 bytes, duration 210 sec)
progress 84% (read 90198310912 bytes, duration 210 sec)
progress 85% (read 91272118272 bytes, duration 210 sec)
progress 86% (read 92345860096 bytes, duration 210 sec)
progress 87% (read 93419667456 bytes, duration 210 sec)
progress 88% (read 94493474816 bytes, duration 210 sec)
progress 89% (read 95567282176 bytes, duration 210 sec)
progress 90% (read 96641024000 bytes, duration 210 sec)
progress 91% (read 97714831360 bytes, duration 210 sec)
progress 92% (read 98788638720 bytes, duration 210 sec)
progress 93% (read 99862380544 bytes, duration 210 sec)
progress 94% (read 100936187904 bytes, duration 210 sec)
progress 95% (read 102009995264 bytes, duration 210 sec)
progress 96% (read 103083802624 bytes, duration 211 sec)
progress 97% (read 104157544448 bytes, duration 211 sec)
progress 98% (read 105231351808 bytes, duration 211 sec)
progress 99% (read 106305159168 bytes, duration 211 sec)

gzip: /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_13-15_22_34.vma.gz: invalid compressed data--crc error
progress 100% (read 107378900992 bytes, duration 217 sec)
total bytes read 107378966528, sparse bytes 85740261376 (79.8%)
space reduction due to 4K zero blocks 0.00895%
  Logical volume "vm-103-disk-0" successfully removed.
temporary volume 'local-lvm:vm-103-disk-0' successfully removed
  Logical volume "vm-103-disk-2" successfully removed.
temporary volume 'local-lvm:vm-103-disk-2' successfully removed
  Logical volume "vm-103-disk-1" successfully removed.
temporary volume 'local-lvm:vm-103-disk-1' successfully removed
no lock found trying to remove 'create'  lock
error before or during data restore, some or all disks were not completely restored. VM 103 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zcat /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_13-15_22_34.vma.gz | vma extract -v -r /var/tmp/vzdumptmp106510.fifo - /var/tmp/vzdumptmp106510' failed: exit code 1

I am attempting the same location with no compression.
No compression seems to yield a successful restoration from the SMB location. I am going to test the last compression option, LZO:

Code:
restore vma archive: lzop -d -c /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_13-18_40_03.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp136458.fifo - /var/tmp/vzdumptmp136458
CFG: size: 923 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 107374182400 devname: drive-scsi0
DEV: dev_id=3 size: 4194304 devname: drive-tpmstate0-backup
CTIME: Tue May 13 18:40:11 2025
  Rounding up size to full physical extent 4.00 MiB
  Logical volume "vm-103-disk-0" created.
new volume ID is 'local-lvm:vm-103-disk-0'
  Logical volume "vm-103-disk-1" created.
new volume ID is 'local-lvm:vm-103-disk-1'
  Logical volume "vm-103-disk-2" created.
new volume ID is 'local-lvm:vm-103-disk-2'
map 'drive-efidisk0' to '/dev/pve/vm-103-disk-0' (write zeros = 0)
map 'drive-scsi0' to '/dev/pve/vm-103-disk-1' (write zeros = 0)
map 'drive-tpmstate0-backup' to '/dev/pve/vm-103-disk-2' (write zeros = 0)
progress 1% (read 1073807360 bytes, duration 10 sec)
progress 2% (read 2147614720 bytes, duration 20 sec)
progress 3% (read 3221422080 bytes, duration 31 sec)
progress 4% (read 4295163904 bytes, duration 41 sec)
progress 5% (read 5368971264 bytes, duration 51 sec)
progress 6% (read 6442778624 bytes, duration 62 sec)
progress 7% (read 7516585984 bytes, duration 72 sec)
progress 8% (read 8590327808 bytes, duration 82 sec)
lzop: /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_13-18_40_03.vma.lzo: Checksum error
vma: restore failed - short vma extent (3722752 < 3801600)
/bin/bash: line 1: 136468 Exit 1                  lzop -d -c /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_13-18_40_03.vma.lzo
     136469 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp136458.fifo - /var/tmp/vzdumptmp136458
  Logical volume "vm-103-disk-0" successfully removed.
temporary volume 'local-lvm:vm-103-disk-0' successfully removed
  Logical volume "vm-103-disk-2" successfully removed.
temporary volume 'local-lvm:vm-103-disk-2' successfully removed
  Logical volume "vm-103-disk-1" successfully removed.
temporary volume 'local-lvm:vm-103-disk-1' successfully removed
no lock found trying to remove 'create'  lock
error before or during data restore, some or all disks were not completely restored. VM 103 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && lzop -d -c /mnt/pve/SYN1/dump/vzdump-qemu-102-2025_05_13-18_40_03.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp136458.fifo - /var/tmp/vzdumptmp136458' failed: exit code 133

I am comfortable saying it has to do with the compression, but why is still a mystery. I am attempting the PBS setup; maybe I can work around this issue that way.
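If anyone wants to reproduce this without vzdump in the loop, one way (a sketch; the test file name is made up) is to push a large stream through a compression pipeline onto the share and compare checksums going in and coming back out:

Code:
# hash 2 GiB of random data while piping it through zstd onto the share
dd if=/dev/urandom bs=1M count=2048 status=none \
  | tee >(sha256sum > /tmp/pipe-test.sum) \
  | zstd -q > /mnt/pve/SYN1/pipe-test.zst

# read it back and compare; a mismatch implicates the share/pipeline
# combination rather than vzdump itself
zstd -q -d -c /mnt/pve/SYN1/pipe-test.zst | sha256sum
cat /tmp/pipe-test.sum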

Final edit: PBS works great, zstd works fine there, and backups function better than before. Why standalone backups are a nightmare is beyond me. I would still love to get your feedback and possibly test further on it. Thanks.
 
Could you try a VMA backup to a local disk with compression? I suspect some bad interaction between your Samba share and writing via a pipeline (without compression, the backup is written directly; with compression, it's written to STDOUT and piped to the compression program).
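In vzdump terms, that test would look something like this (a sketch; 'local' is the default directory storage, 999 an unused VMID, and the timestamp placeholder has to be filled in from the actual file name):

Code:
# compressed VMA backup, written to local disk instead of the SMB share
vzdump 102 --mode stop --compress zstd --storage local

# restore the resulting archive to a spare VMID to confirm it round-trips
qmrestore /var/lib/vz/dump/vzdump-qemu-102-<timestamp>.vma.zst 999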