Restore a v5 backup to v4: wrong vma extent header chechsum

Oct 6, 2016
Hi guys

I tested the v5 beta and created a VM which I still need. So I backed it up to my external NFS storage and then reinstalled the stable v4 (latest version, with subscription). When I try to restore this VM, it gives me this error:

restore vma archive: lzop -d -c /mnt/pve/backup/dump/vzdump-qemu-100-2017_04_26-03_00_02.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp2758.fifo - /var/tmp/vzdumptmp2758
CFG: size: 339 name: qemu-server.conf
DEV: dev_id=1 size: 214748364800 devname: drive-scsi0
CTIME: Wed Apr 26 03:00:02 2017
new volume ID is 'raid1:vm-100-disk-1'
map 'drive-scsi0' to '/dev/zvol/storage/vm-100-disk-1' (write zeros = 0)

** (process:2761): ERROR **: restore failed - wrong vma extent header chechsum
/bin/bash: line 1: 2760 Broken pipe lzop -d -c /mnt/pve/backup/dump/vzdump-qemu-100-2017_04_26-03_00_02.vma.lzo
2761 Trace/breakpoint trap | vma extract -v -r /var/tmp/vzdumptmp2758.fifo - /var/tmp/vzdumptmp2758
temporary volume 'raid1:vm-100-disk-1' sucessfuly removed
TASK ERROR: command 'lzop -d -c /mnt/pve/backup/dump/vzdump-qemu-100-2017_04_26-03_00_02.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp2758.fifo - /var/tmp/vzdumptmp2758' failed: exit code 133
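A side note on the "exit code 133" at the end of the log: by the usual shell convention this is 128 plus signal number 5 (SIGTRAP), which matches the "Trace/breakpoint trap" line above; GLib's `g_error()` (the `** ERROR **` line) aborts via a breakpoint trap after printing its message. A quick illustration of the exit-status convention:

```shell
# Exit status 133 = 128 + 5, i.e. the process died on signal 5 (SIGTRAP),
# matching the "Trace/breakpoint trap" line in the restore log.
sh -c 'kill -TRAP $$'
echo "exit status: $?"   # prints: exit status: 133
```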

Any idea what the reason is (a version mismatch because of the downgrade?) and how I can get the VM back on v4 from my backup?

Thanks
Daniel
 
There was an issue in the beta with qemu <2.9.0-1~rc2+4 and the way we wrote the backups (mostly affected compressed backups). Do you have other/older backups?
 

No, I have no older backups....

In the meantime I have set up a new server again with the v5 beta, and it's the same:

restore vma archive: lzop -d -c /mnt/pve/backup/dump/vzdump-qemu-100-2017_04_26-09_00_01.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp1918.fifo - /var/tmp/vzdumptmp1918
CFG: size: 339 name: qemu-server.conf
DEV: dev_id=1 size: 214748364800 devname: drive-scsi0
CTIME: Wed Apr 26 09:00:02 2017
Formatting '/mnt/pve/backup/images/100/vm-100-disk-2.raw', fmt=raw size=214748364800
new volume ID is 'backup:100/vm-100-disk-2.raw'
map 'drive-scsi0' to '/mnt/pve/backup/images/100/vm-100-disk-2.raw' (write zeros = 0)

** (process:1921): ERROR **: restore failed - wrong vma extent header chechsum
/bin/bash: line 1: 1920 Broken pipe lzop -d -c /mnt/pve/backup/dump/vzdump-qemu-100-2017_04_26-09_00_01.vma.lzo
1921 Trace/breakpoint trap | vma extract -v -r /var/tmp/vzdumptmp1918.fifo - /var/tmp/vzdumptmp1918
temporary volume 'backup:100/vm-100-disk-2.raw' sucessfuly removed
TASK ERROR: command 'lzop -d -c /mnt/pve/backup/dump/vzdump-qemu-100-2017_04_26-09_00_01.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp1918.fifo - /var/tmp/vzdumptmp1918' failed: exit code 133

any idea how to fix this?
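One way to narrow down where the corruption sits (hypothetical commands using the paths from the log above): `lzop -t` only verifies the compression layer, so if it passes while `vma extract` still fails, the damage is inside the vma stream itself, which fits the known beta backup bug.

```shell
# 1) Test only the compressed stream. If this fails, the .lzo file itself
#    is damaged (storage/transfer problem, not the vma writer bug):
lzop -t /mnt/pve/backup/dump/vzdump-qemu-100-2017_04_26-09_00_01.vma.lzo

# 2) Decompress the whole archive to /dev/null to exercise the full stream:
lzop -d -c /mnt/pve/backup/dump/vzdump-qemu-100-2017_04_26-09_00_01.vma.lzo > /dev/null
```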
 
I did the backup yesterday. And the installation after updates (on this server, without subscription) on v5 is now:

proxmox-ve: 5.0-2 (running kernel: 4.10.1-2-pve)
pve-manager: 5.0-5 (running version: 5.0-5/c155b5bc)
pve-kernel-4.10.1-2-pve: 4.10.1-2
libpve-http-server-perl: 2.0-1
lvm2: 2.02.168-pve1
corosync: 2.4.2-pve2
libqb0: 1.0.1-1
pve-cluster: 5.0-3
qemu-server: 5.0-1
pve-firmware: 2.0-1
libpve-common-perl: 5.0-3
libpve-guest-common-perl: 2.0-1
libpve-access-control: 5.0-1
libpve-storage-perl: 5.0-2
pve-libspice-server1: 0.12.8-3
vncterm: 1.4-1
pve-docs: 5.0-1
pve-qemu-kvm: 2.7.1-500
pve-container: 2.0-4
pve-firewall: 3.0-1
pve-ha-manager: 2.0-1
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.7-500
lxcfs: 2.0.6-pve500
criu: 2.11.1-1~bpo90
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
 
those are not the current beta packages - please enable the pvetest repository and upgrade again!
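A sketch of enabling pvetest and upgrading (the repository URL and the Debian suite name "stretch" are assumptions based on the PVE 5.0 beta timeframe; check the official beta announcement for the exact line):

```shell
# Hypothetical repo line for the PVE 5.x beta on Debian stretch -- verify
# against the official announcement before using:
echo "deb http://download.proxmox.com/debian stretch pvetest" \
    > /etc/apt/sources.list.d/pvetest.list

# Refresh package lists and pull in the current beta packages:
apt update
apt dist-upgrade
```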
 
oh sh..! Now upgraded:

proxmox-ve: 5.0-6 (running kernel: 4.10.8-1-pve)
pve-manager: 5.0-9 (running version: 5.0-9/c7bdd872)
pve-kernel-4.10.1-2-pve: 4.10.1-2
pve-kernel-4.10.8-1-pve: 4.10.8-6
libpve-http-server-perl: 2.0-2
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve2
libqb0: 1.0.1-1
pve-cluster: 5.0-4
qemu-server: 5.0-4
pve-firmware: 2.0-2
libpve-common-perl: 5.0-8
libpve-guest-common-perl: 2.0-1
libpve-access-control: 5.0-3
libpve-storage-perl: 5.0-3
pve-libspice-server1: 0.12.8-3
vncterm: 1.4-1
pve-docs: 5.0-1
pve-qemu-kvm: 2.9.0-1
pve-container: 2.0-6
pve-firewall: 3.0-1
pve-ha-manager: 2.0-1
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.7-500
lxcfs: 2.0.6-pve500
criu: 2.11.1-1~bpo90
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90

Is this now the latest beta?

But it's still the same problem... any chance to get my backup restored and up and running?
 
The bug affected creating the backups, not restoring them; any backup affected by the bug is corrupt. Upgrading unfortunately doesn't magically fix this, it just means that future backups will work. Or do you actually mean that a new backup created after the upgrade (and a restart of the VM) is also not restorable? That would be a new bug.