cifs: restore from lzo backup failed, from gzip succeeded

yarick123

Member
Mar 14, 2013
Hello,

We have a 3-node PVE 5.2-5 cluster. There is a VM whose lzo backups (on CIFS storage) cannot be restored. The error message is:

...
progress 55% (read 37205180416 bytes, duration 388 sec)
lzop: /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo: Checksum error

** (process:6537): ERROR **: restore failed - short vma extent (3646464 < 3801600)
/bin/bash: line 1: 6536 Exit 1 lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo
...
failed: exit code 133

Interestingly, there is a suspicious warning during the backup, yet the backup is reported as successful:
...
INFO: status: 100% (67645734912/67645734912), sparse 25% (17109598208), duration 846, read/write 66/65 MB/s
INFO: transferred 67645 MB in 846 seconds (79 MB/s)
Warning: unable to close filehandle GEN664 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504.
...
INFO: Backup job finished successfully
TASK OK

There are no such problems with gzip backups of the same machine. The problem is now 100% reproducible; we have tried backup/restore at least 4 times.

Lzo backup/restore succeeds if local backup storage is used.
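Since the corruption only surfaces at restore time, a quick integrity check right after each backup would catch it early. A minimal sketch (the `verify_archive` helper is hypothetical, not part of PVE; `lzop -t` and `gzip -t` only read the archive and verify its embedded checksums, nothing is restored):

```shell
#!/bin/sh
# Hypothetical helper: test a vzdump archive's checksums without restoring it.
verify_archive() {
    case "$1" in
        *.lzo) lzop -t "$1" ;;   # lzop verifies its per-block checksums
        *.gz)  gzip -t "$1" ;;   # gzip verifies the CRC32 trailer
        *)     echo "unknown archive type: $1" >&2; return 2 ;;
    esac
}

# Example, using the path from the logs above:
# verify_archive /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo \
#     || echo "backup is corrupt -- do not rely on it" >&2
```

The restore pipeline already fails on a bad archive thanks to `set -o pipefail`, but testing directly after the backup avoids discovering the corruption only when the restore is actually needed.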

@PVE-team, maybe it would be better to qualify "unable to close filehandle GEN664 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504." as an error rather than as a warning?

P.S. All backups were made after shutting down the VM.

proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve)
pve-manager: 5.2-5 (running version: 5.2-5/eb24855a)
pve-kernel-4.15: 5.2-3
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-35
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-1
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-27
pve-container: 2.0-24
pve-docs: 5.2-4
pve-firewall: 3.0-12
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-29
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3

INFO: starting new backup job: vzdump 9114 --mode stop --remove 0 --compress lzo --storage netback --node prox5mox03
INFO: Starting Backup of VM 9114 (qemu)
INFO: status = stopped
INFO: update VM 9114: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: frickelfix2.intern.netmedia.de
INFO: include disk 'virtio1' 'vmimages:9114/vm-9114-disk-1.qcow2' 16G
INFO: include disk 'virtio2' 'vmimages:9114/vm-9114-disk-2.qcow2' 15G
INFO: include disk 'virtio3' 'vmimages:9114/vm-9114-disk-3.qcow2' 32G
INFO: skip unused drive 'containers:vm-9114-disk-1' (not included into backup)
INFO: snapshots found (not included into backup)
INFO: creating archive '/mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task '4f945329-0849-479b-a1ba-6507301fd7fb'
INFO: status: 0% (345243648/67645734912), sparse 0% (155860992), duration 3, read/write 115/63 MB/s
INFO: status: 1% (682688512/67645734912), sparse 0% (254701568), duration 8, read/write 67/47 MB/s
INFO: status: 2% (1445593088/67645734912), sparse 0% (481198080), duration 17, read/write 84/59 MB/s
INFO: status: 3% (2075394048/67645734912), sparse 1% (734281728), duration 23, read/write 104/62 MB/s
...
INFO: status: 98% (66304737280/67645734912), sparse 25% (17106046976), duration 827, read/write 45/45 MB/s
INFO: status: 99% (66983624704/67645734912), sparse 25% (17106046976), duration 836, read/write 75/75 MB/s
INFO: status: 100% (67645734912/67645734912), sparse 25% (17109598208), duration 846, read/write 66/65 MB/s
INFO: transferred 67645 MB in 846 seconds (79 MB/s)
Warning: unable to close filehandle GEN664 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504.
INFO: stopping kvm after backup task
INFO: archive file size: 40.66GB
INFO: Finished Backup of VM 9114 (00:15:10)
INFO: Backup job finished successfully
TASK OK
restore vma archive: lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp6534.fifo - /var/tmp/vzdumptmp6534
CFG: size: 577 name: qemu-server.conf
DEV: dev_id=1 size: 17179869184 devname: drive-virtio1
DEV: dev_id=2 size: 16106127360 devname: drive-virtio2
DEV: dev_id=3 size: 34359738368 devname: drive-virtio3
CTIME: Tue Aug 7 11:44:47 2018
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2', fmt=qcow2 size=17179869184 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-1.qcow2'
map 'drive-virtio1' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2', fmt=qcow2 size=16106127360 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-2.qcow2'
map 'drive-virtio2' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2', fmt=qcow2 size=34359738368 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-3.qcow2'
map 'drive-virtio3' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2' (write zeros = 0)
progress 1% (read 676462592 bytes, duration 3 sec)
progress 2% (read 1352925184 bytes, duration 9 sec)
progress 3% (read 2029387776 bytes, duration 12 sec)
...
progress 53% (read 35852255232 bytes, duration 370 sec)
progress 54% (read 36528717824 bytes, duration 382 sec)
progress 55% (read 37205180416 bytes, duration 388 sec)
lzop: /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo: Checksum error

** (process:6537): ERROR **: restore failed - short vma extent (3646464 < 3801600)
/bin/bash: line 1: 6536 Exit 1 lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo
6537 Trace/breakpoint trap | vma extract -v -r /var/tmp/vzdumptmp6534.fifo - /var/tmp/vzdumptmp6534
temporary volume 'vmimages:9115/vm-9115-disk-1.qcow2' sucessfuly removed
temporary volume 'vmimages:9115/vm-9115-disk-3.qcow2' sucessfuly removed
temporary volume 'vmimages:9115/vm-9115-disk-2.qcow2' sucessfuly removed
no lock found trying to remove 'create' lock
TASK ERROR: command 'set -o pipefail && lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp6534.fifo - /var/tmp/vzdumptmp6534' failed: exit code 133

INFO: starting new backup job: vzdump 9114 --mode stop --storage netback --node prox5mox03 --compress gzip --remove 0
INFO: Starting Backup of VM 9114 (qemu)
INFO: status = stopped
INFO: update VM 9114: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: frickelfix2.intern.netmedia.de
INFO: include disk 'virtio1' 'vmimages:9114/vm-9114-disk-1.qcow2' 16G
INFO: include disk 'virtio2' 'vmimages:9114/vm-9114-disk-2.qcow2' 15G
INFO: include disk 'virtio3' 'vmimages:9114/vm-9114-disk-3.qcow2' 32G
INFO: skip unused drive 'containers:vm-9114-disk-1' (not included into backup)
INFO: snapshots found (not included into backup)
INFO: creating archive '/mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-12_17_28.vma.gz'
INFO: starting kvm to execute backup task
INFO: started backup task '034c8592-6566-434c-851d-3d0bb9dfe7b9'
INFO: status: 0% (167247872/67645734912), sparse 0% (112992256), duration 3, read/write 55/18 MB/s
INFO: status: 1% (680394752/67645734912), sparse 0% (254701568), duration 23, read/write 25/18 MB/s
INFO: status: 2% (1360789504/67645734912), sparse 0% (464650240), duration 50, read/write 25/17 MB/s
INFO: status: 3% (2033582080/67645734912), sparse 1% (732831744), duration 77, read/write 24/14 MB/s
...
INFO: status: 98% (66309980160/67645734912), sparse 25% (17106046976), duration 2659, read/write 18/18 MB/s
INFO: status: 99% (66975170560/67645734912), sparse 25% (17106046976), duration 2696, read/write 17/17 MB/s
INFO: status: 100% (67645734912/67645734912), sparse 25% (17109598208), duration 2733, read/write 18/18 MB/s
INFO: transferred 67645 MB in 2733 seconds (24 MB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 39.51GB
INFO: Finished Backup of VM 9114 (00:45:40)
INFO: Backup job finished successfully
TASK OK
restore vma archive: zcat /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-12_17_28.vma.gz | vma extract -v -r /var/tmp/vzdumptmp16129.fifo - /var/tmp/vzdumptmp16129
CFG: size: 577 name: qemu-server.conf
DEV: dev_id=1 size: 17179869184 devname: drive-virtio1
DEV: dev_id=2 size: 16106127360 devname: drive-virtio2
DEV: dev_id=3 size: 34359738368 devname: drive-virtio3
CTIME: Tue Aug 7 12:17:32 2018
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2', fmt=qcow2 size=17179869184 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-1.qcow2'
map 'drive-virtio1' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2', fmt=qcow2 size=16106127360 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-2.qcow2'
map 'drive-virtio2' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2', fmt=qcow2 size=34359738368 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-3.qcow2'
map 'drive-virtio3' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2' (write zeros = 0)
progress 1% (read 676462592 bytes, duration 5 sec)
progress 2% (read 1352925184 bytes, duration 10 sec)
progress 3% (read 2029387776 bytes, duration 16 sec)
...
progress 98% (read 66292875264 bytes, duration 838 sec)
progress 99% (read 66969337856 bytes, duration 846 sec)
progress 100% (read 67645734912 bytes, duration 854 sec)
total bytes read 67645734912, sparse bytes 17109598208 (25.3%)
space reduction due to 4K zero blocks 0.988%
TASK OK

We plan to upgrade the node to the latest PVE version and try again.

Best regards
yarick123
 
Just tried an lzo backup of another big VM on CIFS storage. The same problem occurs:
...
INFO: status: 100% (73014444032/73014444032), sparse 47% (34366181376), duration 541, read/write 34/33 MB/s
INFO: transferred 73014 MB in 541 seconds (134 MB/s)
Warning: unable to close filehandle GEN10127 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504.
...
INFO: Backup job finished successfully
TASK OK

For lzo backups of small servers we did not see the problem.
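The Input/output error on close() points at the CIFS layer rather than at lzop itself, so the node's kernel log may show the underlying SMB error, and the mount options (cache mode, wsize) are worth including in a report. A quick check (the share path is the one from the logs; `dmesg` may require root):

```shell
# Look for CIFS/SMB errors logged around the time of the backup.
dmesg 2>/dev/null | grep -iE 'cifs|smb' || echo "no CIFS messages in the ring buffer"

# Show how the share is mounted -- cache= and wsize= influence large writes.
mount | grep -F '/mnt/pve/netback' || echo "share not mounted on this node"
```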
 
Unfortunately, the same problem exists in the latest version.

@PVE-team, maybe it would be better to qualify "unable to close filehandle GEN132 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504." as an error rather than as a warning?

Here are the results.

backup:
...
INFO: status: 100% (67645734912/67645734912), sparse 25% (17098604544), duration 673, read/write 70/69 MB/s
INFO: transferred 67645 MB in 673 seconds (100 MB/s)
Warning: unable to close filehandle GEN132 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504.
...
INFO: Backup job finished successfully
TASK OK

restore:
...
progress 98% (read 66292875264 bytes, duration 766 sec)
lzop: /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_08-10_26_45.vma.lzo: Checksum error
** (process:6867): ERROR **: restore failed - short vma extent (2706944 < 3801600)
/bin/bash: line 1: 6866 Exit 1 lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_08-10_26_45.vma.lzo
...
TASK ERROR: command 'set -o pipefail && lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_08-10_26_45.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp6864.fifo - /var/tmp/vzdumptmp6864' failed: exit code 133

proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve)
pve-manager: 5.2-6 (running version: 5.2-6/bcd5f008)
pve-kernel-4.15: 5.2-4
pve-kernel-4.15.18-1-pve: 4.15.18-17
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-37
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-29
pve-container: 2.0-24
pve-docs: 5.2-5
pve-firewall: 3.0-13
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-30
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3

INFO: starting new backup job: vzdump 9114 --storage netback --node prox5mox03 --remove 0 --compress lzo --mode stop
INFO: Starting Backup of VM 9114 (qemu)
INFO: status = stopped
INFO: update VM 9114: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: frickelfix2.intern.netmedia.de
INFO: include disk 'virtio1' 'vmimages:9114/vm-9114-disk-1.qcow2' 16G
INFO: include disk 'virtio2' 'vmimages:9114/vm-9114-disk-2.qcow2' 15G
INFO: include disk 'virtio3' 'vmimages:9114/vm-9114-disk-3.qcow2' 32G
INFO: skip unused drive 'containers:vm-9114-disk-1' (not included into backup)
INFO: snapshots found (not included into backup)
INFO: creating archive '/mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_08-10_26_45.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task '35c30384-6203-4c3c-ab6a-0589f0db2977'
INFO: status: 0% (379125760/67645734912), sparse 0% (157835264), duration 3, read/write 126/73 MB/s
INFO: status: 1% (691077120/67645734912), sparse 0% (254361600), duration 8, read/write 62/43 MB/s
INFO: status: 2% (1424031744/67645734912), sparse 0% (479653888), duration 15, read/write 104/72 MB/s
INFO: status: 3% (2094530560/67645734912), sparse 1% (735191040), duration 21, read/write 111/69 MB/s
...
INFO: status: 98% (66336129024/67645734912), sparse 25% (17095053312), duration 655, read/write 65/65 MB/s
INFO: status: 99% (67013181440/67645734912), sparse 25% (17095053312), duration 664, read/write 75/75 MB/s
INFO: status: 100% (67645734912/67645734912), sparse 25% (17098604544), duration 673, read/write 70/69 MB/s
INFO: transferred 67645 MB in 673 seconds (100 MB/s)
Warning: unable to close filehandle GEN132 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504.
INFO: stopping kvm after backup task
INFO: archive file size: 40.66GB
INFO: Finished Backup of VM 9114 (00:12:03)
INFO: Backup job finished successfully
TASK OK
restore vma archive: lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_08-10_26_45.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp6864.fifo - /var/tmp/vzdumptmp6864
CFG: size: 577 name: qemu-server.conf
DEV: dev_id=1 size: 17179869184 devname: drive-virtio1
DEV: dev_id=2 size: 16106127360 devname: drive-virtio2
DEV: dev_id=3 size: 34359738368 devname: drive-virtio3
CTIME: Wed Aug 8 10:26:49 2018
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2', fmt=qcow2 size=17179869184 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-1.qcow2'
map 'drive-virtio1' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2', fmt=qcow2 size=16106127360 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-2.qcow2'
map 'drive-virtio2' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2', fmt=qcow2 size=34359738368 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-3.qcow2'
map 'drive-virtio3' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2' (write zeros = 0)
progress 1% (read 676462592 bytes, duration 2 sec)
progress 2% (read 1352925184 bytes, duration 6 sec)
progress 3% (read 2029387776 bytes, duration 9 sec)
...
progress 96% (read 64939950080 bytes, duration 735 sec)
progress 97% (read 65616412672 bytes, duration 743 sec)
progress 98% (read 66292875264 bytes, duration 766 sec)
lzop: /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_08-10_26_45.vma.lzo: Checksum error

** (process:6867): ERROR **: restore failed - short vma extent (2706944 < 3801600)
/bin/bash: line 1: 6866 Exit 1 lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_08-10_26_45.vma.lzo
6867 Trace/breakpoint trap | vma extract -v -r /var/tmp/vzdumptmp6864.fifo - /var/tmp/vzdumptmp6864
temporary volume 'vmimages:9115/vm-9115-disk-1.qcow2' sucessfuly removed
temporary volume 'vmimages:9115/vm-9115-disk-3.qcow2' sucessfuly removed
temporary volume 'vmimages:9115/vm-9115-disk-2.qcow2' sucessfuly removed
no lock found trying to remove 'create' lock
TASK ERROR: command 'set -o pipefail && lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_08-10_26_45.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp6864.fifo - /var/tmp/vzdumptmp6864' failed: exit code 133
 
