[SOLVED] Defective backup: partially restore disks from a broken backup

Maddes

Member
Mar 26, 2020
Unfortunately I have a broken backup of a VM with 2 disks (the error is within the zstd stream).
If I try to restore it, the restore runs up to 40% (40 of 100 GB) and I can see that the 2 disks are built up to that point.
But when the error occurs, the 2 disks are deleted.

Is there a way to keep these partly restored disks and mount them somewhere (host, VM, wherever)? And how?
If that is possible, I could at least recover some of the files in their latest state.

Code:
# zstd -t /var/lib/vz/dump/vzdump-qemu-101-2020_11_28-13_53_35.vma.zst
_28-13_53_35.vma.zst : 34242 MB...     _28-13_53_35.vma.zst : Decoding error (36) : Destination buffer is too small
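
Since qmrestore deletes the target disks on failure, one idea would be to bypass it: the restore pipeline drives the vma tool under the hood (it logs "restore vma archive: ..."), so the raw images could also be extracted manually. A sketch only, assuming vma accepts the stream on stdin the way the restore pipeline uses it, and with a hypothetical scratch directory /tmp/extract; extraction will presumably abort at the same corrupted offset, and I have not verified whether vma keeps the partial images:

Code:
# decompress the archive and feed it to vma, extracting the raw disk
# images to /tmp/extract (hypothetical directory, needs enough space)
zstd -q -d -c /var/lib/vz/dump/vzdump-qemu-101-2020_11_28-13_53_35.vma.zst | vma extract - /tmp/extract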

Code:
# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
Thanks Fabian for pointing out the correct location inside the script (see the grep output below).
Note that you have to reboot the host/node to apply the changes. Maybe there is a simpler way.
P.S.: "sucessfuly" in the source is a typo and should be "successfully".

I then followed the Wiki page "Moving disk image from one KVM machine to another" to move the disk to another VM.
The wiki page should be enhanced for ZFS: use zfs list to get the correct dataset path for zfs rename (a sketch follows below).
I got access to all my files.
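
A minimal sketch of the ZFS part, with hypothetical pool and dataset names; take the real ones from the zfs list output:

Code:
# list the zvols to find the dataset backing the restored disk
zfs list -t volume
# rename the zvol so it matches the target VMID (example: 101 -> 102)
zfs rename rpool/data/vm-101-disk-0 rpool/data/vm-102-disk-0
# then move the matching diskX: line from /etc/pve/qemu-server/101.conf
# to /etc/pve/qemu-server/102.conf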

Code:
root@h001:~# grep -Rn -e 'restore vma archive:' -e 'sucessfuly removed' /usr/share/perl5/PVE
/usr/share/perl5/PVE/QemuServer.pm:5650:                    print STDERR "temporary volume '$volid' sucessfuly removed\n";
>>> 5635: sub tar_restore_cleanup < 6439 sub restore_tar_archive
/usr/share/perl5/PVE/QemuServer.pm:5907:            print STDERR "temporary volume '$volid' sucessfuly removed\n";
>>> 5895: sub restore_destroy_volumes < 6054 sub restore_proxmox_backup_archive & 6227 sub restore_vma_archive
/usr/share/perl5/PVE/QemuServer.pm:6411:        print "restore vma archive: $dbg_cmdstring\n";
>>> 6227: sub restore_vma_archive > sub restore_destroy_volumes
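
For completeness: the change itself just disables the cleanup call inside restore_vma_archive, so the temporary volumes survive a failed restore. A sketch only; the exact argument names in your version of QemuServer.pm may differ:

Code:
# /usr/share/perl5/PVE/QemuServer.pm, inside restore_vma_archive's
# error handling: keep the partly restored disks by commenting out
# the cleanup call (argument names here are illustrative):
#restore_destroy_volumes($storecfg, $devinfo);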
 
In case you ever need to do something like this again: restarting pveproxy and pvedaemon is enough to reload all the API stuff. When using qm/pct/pvesm/.., not even that is necessary; those use the Perl modules directly and load them anew on each invocation.
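
In shell terms:

Code:
# pick up the edited Perl module without a reboot
systemctl restart pveproxy pvedaemon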
 
