Cannot restore VM backup image at 5.0beta1(5.0-5) : ERROR **: restore failed - wrong vma extent

tukiyo3

  • Restoring a VM backup image fails with the error message "ERROR **: restore failed - wrong vma extent header chechsum".
  • Restoring a CT backup image succeeds.

## 1. Backup VM image at 5.0-5/c155b5bc

```
INFO: starting new backup job: vzdump 1053 --node pve50b1 --compress lzo --storage g1000 --mode snapshot --remove 0
INFO: Starting Backup of VM 1053 (qemu)
INFO: status = running
INFO: update VM 1053: -lock backup
INFO: VM Name: dev53.local
INFO: include disk 'virtio0' 'g3000:1053/vm-1053-disk-1.qcow2' 16G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/g1000/dump/vzdump-qemu-1053-2017_03_27-05_01_24.vma.lzo'
INFO: started backup task 'ed7a1227-0185-45a8-be5b-c190f93cb2b3'
INFO: status: 16% (2892627968/17179869184), sparse 15% (2741231616), duration 3, 964/50 MB/s
..
INFO: status: 100% (17179869184/17179869184), sparse 78% (13547585536), duration 39, 950/55 MB/s
INFO: transferred 17179 MB in 39 seconds (440 MB/s)
INFO: archive file size: 2.09GB
INFO: Finished Backup of VM 1053 (00:00:41)
INFO: Backup job finished successfully
TASK OK
```
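The failure only shows up at restore time, so it would be worth spot-checking the archive right after the backup finishes. A minimal check, using the archive path from the log above:

```
# decompress on the fly and let vma validate the extent checksums
lzop -d -c /mnt/g1000/dump/vzdump-qemu-1053-2017_03_27-05_01_24.vma.lzo | vma verify -
```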

## 2. Try to restore

  • Restoring the 5.0-5 VM backup image fails on 5.0-5, 5.0-4, and 4.4-13/7ea56165.
  • Restoring a 4.4 VM backup image on 5.0-5 succeeds.

```
restore vma archive: lzop -d -c /mnt/gb465_1/dump/vzdump-qemu-1053-2017_03_27-05_01_24.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp21195.fifo - /var/tmp/vzdumptmp21195
CFG: size: 366 name: qemu-server.conf
DEV: dev_id=1 size: 17179869184 devname: drive-virtio0
CTIME: Mon Mar 27 05:01:25 2017
Formatting '/mnt/gb230/images/1053/vm-1053-disk-1.qcow2', fmt=qcow2 size=17179869184 encryption=off cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'gb230:1053/vm-1053-disk-1.qcow2'
map 'drive-virtio0' to '/mnt/gb230/images/1053/vm-1053-disk-1.qcow2' (write zeros = 0)

** (process:21198): ERROR **: restore failed - wrong vma extent header chechsum
/bin/bash: line 1: 21197 Broken pipe lzop -d -c /mnt/gb465_1/dump/vzdump-qemu-1053-2017_03_27-05_01_24.vma.lzo
21198 Trace/breakpoint trap | vma extract -v -r /var/tmp/vzdumptmp21195.fifo - /var/tmp/vzdumptmp21195
temporary volume 'gb230:1053/vm-1053-disk-1.qcow2' sucessfuly removed
TASK ERROR: command 'lzop -d -c /mnt/gb465_1/dump/vzdump-qemu-1053-2017_03_27-05_01_24.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp21195.fifo - /var/tmp/vzdumptmp21195' failed: exit code 133
```
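To narrow down whether the lzop pipe or the vma reader is at fault, one option (a sketch, not taken from the task log) is to decompress the archive to a plain .vma file once and then run the vma tool directly on that file:

```
# decompress once to a temporary file (needs enough free space for the uncompressed stream)
lzop -d -c /mnt/gb465_1/dump/vzdump-qemu-1053-2017_03_27-05_01_24.vma.lzo > /var/tmp/1053.vma

# check the extent checksums directly on the file
vma verify /var/tmp/1053.vma

# optionally extract the config and disks into a fresh directory (must not exist yet)
vma extract /var/tmp/1053.vma /var/tmp/1053-extract
```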
 
Just tested, same here. Please fix this soon and update the packages, because I need a reliable backup/restore to keep testing the 5 beta.
Here is my output:
```
restore vma archive: lzop -d -c /mnt/raid1backup/proxmox/dump/vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp6269.fifo - /var/tmp/vzdumptmp6269
CFG: size: 337 name: qemu-server.conf
DEV: dev_id=1 size: 53687091200 devname: drive-scsi0
CTIME: Wed Mar 29 17:25:00 2017
  Using default stripesize 64.00 KiB.
  Logical volume "vm-999-disk-1" created.
new volume ID is 'local-lvm:vm-999-disk-1'
map 'drive-scsi0' to '/dev/pve/vm-999-disk-1' (write zeros = 0)

** (process:6272): ERROR **: restore failed - wrong vma extent header chechsum
/bin/bash: line 1:  6271 Broken pipe             lzop -d -c /mnt/raid1backup/proxmox/dump/vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo
      6272 Trace/breakpoint trap   | vma extract -v -r /var/tmp/vzdumptmp6269.fifo - /var/tmp/vzdumptmp6269
  Logical volume "vm-999-disk-1" successfully removed
temporary volume 'local-lvm:vm-999-disk-1' sucessfuly removed
TASK ERROR: command 'lzop -d -c /mnt/raid1backup/proxmox/dump/vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp6269.fifo - /var/tmp/vzdumptmp6269' failed: exit code 133
```

The backup file is about 6 GB, which looks reasonable compared to the space actually used on the original disk.
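As a rough sanity check of that size, the archive can be compared against the data actually allocated in the thin LV backing the disk (names below are taken from the logs; adjust if your volume group differs):

```
# size of the compressed archive on the backup storage
ls -lh /mnt/raid1backup/proxmox/dump/vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo

# allocated data of the thin-provisioned LV behind local-lvm:vm-100-disk-1
lvs --units g -o lv_name,lv_size,data_percent pve/vm-100-disk-1
```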

Backup was:
```
INFO: starting new backup job: vzdump 100 --mode snapshot --storage raid1 --remove 0 --compress lzo --node prox01
INFO: Starting Backup of VM 100 (qemu)
INFO: status = stopped
INFO: update VM 100: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: win2012r2-DC
INFO: include disk 'scsi0' 'local-lvm:vm-100-disk-1' 50G
INFO: creating archive '/mnt/raid1backup/proxmox/dump/vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task 'a3a882de-79e4-414c-8c23-e13bc12ccc97'
INFO: status: 1% (676593664/53687091200), sparse 0% (93036544), duration 3, 225/194 MB/s
INFO: status: 2% (1269563392/53687091200), sparse 0% (227606528), duration 6, 197/152 MB/s
INFO: status: 3% (1801715712/53687091200), sparse 0% (228749312), duration 9, 177/177 MB/s
[...]
INFO: status: 17% (9510322176/53687091200), sparse 0% (503578624), duration 56, 267/267 MB/s
INFO: status: 23% (12520259584/53687091200), sparse 5% (2877243392), duration 59, 1003/212 MB/s
INFO: status: 36% (19620298752/53687091200), sparse 18% (9977282560), duration 62, 2366/0 MB/s
INFO: status: 49% (26641629184/53687091200), sparse 31% (16998612992), duration 65, 2340/0 MB/s
INFO: status: 62% (33671938048/53687091200), sparse 44% (24028921856), duration 68, 2343/0 MB/s
INFO: status: 75% (40720793600/53687091200), sparse 57% (31077777408), duration 71, 2349/0 MB/s
INFO: status: 88% (47704965120/53687091200), sparse 70% (38061948928), duration 74, 2328/0 MB/s
INFO: status: 100% (53687091200/53687091200), sparse 82% (44044070912), duration 77, 1994/0 MB/s
INFO: transferred 53687 MB in 77 seconds (697 MB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 5.95GB
INFO: Finished Backup of VM 100 (00:01:24)
INFO: Backup job finished successfully
TASK OK
```
 
I should add that I transferred the backup to a Proxmox 4.3 host, and it seems broken there too:
```
lzop -d -c vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo |vma verify -

** (process:10046): ERROR **: verify failed - wrong vma extent header chechsum
Trace/breakpoint trap
```
So it seems that it is the backup step that fails in some way.
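One extra data point that would support this: lzop stores its own checksums, so testing the compressed container on its own can show whether the file was damaged after it was written or whether bad data already went into the stream during the backup. A quick check with the path from above:

```
# test lzop's internal checksums; if this passes while 'vma verify' still fails,
# the corruption most likely happened on the backup side, before compression
lzop -t /mnt/raid1backup/proxmox/dump/vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo
```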
 
I've done some further tests:
- changing the scheduler back to the older default, deadline, does not solve the problem
- without compression the backup works fine (vma verify is OK, and the restored VM also seems fine)
- with gzip we have the same issue
- with lzo we have the same issue
- all of the above via the GUI or the vzdump command (both produce the same OK/broken result)

So, so far, this looks like a bug related to compression.
Btw, gzip compression saturates the CPU (100%) and yields around 15 MB/s versus the 200 MB/s I get with lzo or no compression (Xeon E5-2620); maybe that option should be removed altogether.
Also, it would be great to have an optional flag to run a verification (vma verify) after backups; a rough idea for a hook script is sketched below.
I hope someone will have a look at this bug, which so far seems to have been ignored in the forum by the Proxmox team (not complaining, it is just strange since they are usually very responsive and also have an interest in supporting testers of the 5 beta - and I hope they push updated packages to the pvetest repo frequently to include recent fixes for 5).
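Until something like that exists, a verification step can be bolted on with a vzdump hook script. This is only a sketch: it assumes the 'backup-end' phase and the TARFILE environment variable behave as in the example hook script shipped with pve-manager (vzdump-hook-script.pl), and the script path is made up for the example:

```
#!/bin/bash
# /usr/local/bin/verify-backup.sh  (hypothetical path)
# Run 'vma verify' on each QEMU archive right after vzdump has written it.

phase="$1"

if [ "$phase" = "backup-end" ] && [ -n "$TARFILE" ]; then
    case "$TARFILE" in
        *.vma.lzo) lzop -d -c "$TARFILE" | vma verify - ;;
        *.vma.gz)  zcat "$TARFILE" | vma verify - ;;
        *.vma)     vma verify "$TARFILE" ;;
        *)         exit 0 ;;   # container archives (.tar.*) are not handled here
    esac || { echo "vma verify FAILED for $TARFILE" >&2; exit 1; }
fi

exit 0
```

It should be possible to enable it per job with 'vzdump 100 ... --script /usr/local/bin/verify-backup.sh' or, as far as I can tell, globally via the script: option in /etc/vzdump.conf. In the meantime the workaround that matches the tests above is simply to back up without compression (--compress 0).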
 

Sorry for the silence - we are currently debugging the issue and can reproduce it in some situations. As soon as a fix is available, it will be released via the usual channels (git -> internal testing -> external testing).
 
The problem seems solved with yesterday's updates from the 5.x test repo, thanks a lot.
 
