Hello,
We have a 3-node PVE 5.2-5 cluster. There is a VM whose LZO backups (to CIFS storage) cannot be restored. The error message is:
...
progress 55% (read 37205180416 bytes, duration 388 sec)
lzop: /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo: Checksum error
** (process:6537): ERROR **: restore failed - short vma extent (3646464 < 3801600)
/bin/bash: line 1: 6536 Exit 1 lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo
...
failed: exit code 133
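To check whether the archive itself is already damaged on the share (rather than something going wrong in the restore pipeline), the LZO checksums can be tested directly with lzop, and a copy on local disk can be compared against the file on the CIFS mount. This is only a diagnostic sketch; the paths are taken from the log above and /tmp is just an example target with enough free space:

# test all LZO checksums without extracting anything
lzop -t /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo

# copy the archive to local disk and compare checksums,
# to see whether reading it back from the CIFS mount is stable
cp /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo /tmp/
md5sum /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo \
       /tmp/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo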
Interestingly, there is a suspicious warning during the backup, yet the backup is still reported as successful:
...
INFO: status: 100% (67645734912/67645734912), sparse 25% (17109598208), duration 846, read/write 66/65 MB/s
INFO: transferred 67645 MB in 846 seconds (79 MB/s)
Warning: unable to close filehandle GEN664 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504.
...
INFO: Backup job finished successfully
TASK OK
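An "Input/output error" on close over CIFS usually means the kernel's CIFS client failed to flush data to the server, so the kernel log on the node should contain the underlying error. A quick, generic check right after such a backup (standard tools, nothing PVE-specific) would be something like:

# kernel messages mentioning CIFS/SMB around the backup window
dmesg -T | grep -i -E 'cifs|smb' | tail -n 50

# or via the systemd journal, limited to kernel messages of the last hour
journalctl -k --since "1 hour ago" | grep -i -E 'cifs|i/o error'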
There are no such problems with gzip backups of the same machine. The problem is 100% reproducible; we have tried the backup/restore at least four times.
LZO backup/restore succeeds if a local backup storage is used.
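As a possible workaround until this is understood, the same job could be pointed at a local directory storage and the archive copied to the share manually and verified afterwards. A rough sketch only; the storage name 'local' and its default dump path /var/lib/vz/dump are assumptions and may differ on other setups:

# back up to a local directory storage instead of the CIFS storage
vzdump 9114 --mode stop --compress lzo --remove 0 --storage local

# copy the archive to the CIFS share and verify the copy before relying on it
cp /var/lib/vz/dump/vzdump-qemu-9114-*.vma.lzo /mnt/pve/netback/dump/
lzop -t /mnt/pve/netback/dump/vzdump-qemu-9114-*.vma.lzo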
@PVE-team: maybe it would be better to treat "unable to close filehandle GEN664 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504." as an error rather than a warning?
P.S. All backups were made after shutting down the VM.
pveversion -v:
proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve)
pve-manager: 5.2-5 (running version: 5.2-5/eb24855a)
pve-kernel-4.15: 5.2-3
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-35
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-1
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-27
pve-container: 2.0-24
pve-docs: 5.2-4
pve-firewall: 3.0-12
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-29
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
Full LZO backup log:
INFO: starting new backup job: vzdump 9114 --mode stop --remove 0 --compress lzo --storage netback --node prox5mox03
INFO: Starting Backup of VM 9114 (qemu)
INFO: status = stopped
INFO: update VM 9114: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: frickelfix2.intern.netmedia.de
INFO: include disk 'virtio1' 'vmimages:9114/vm-9114-disk-1.qcow2' 16G
INFO: include disk 'virtio2' 'vmimages:9114/vm-9114-disk-2.qcow2' 15G
INFO: include disk 'virtio3' 'vmimages:9114/vm-9114-disk-3.qcow2' 32G
INFO: skip unused drive 'containers:vm-9114-disk-1' (not included into backup)
INFO: snapshots found (not included into backup)
INFO: creating archive '/mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task '4f945329-0849-479b-a1ba-6507301fd7fb'
INFO: status: 0% (345243648/67645734912), sparse 0% (155860992), duration 3, read/write 115/63 MB/s
INFO: status: 1% (682688512/67645734912), sparse 0% (254701568), duration 8, read/write 67/47 MB/s
INFO: status: 2% (1445593088/67645734912), sparse 0% (481198080), duration 17, read/write 84/59 MB/s
INFO: status: 3% (2075394048/67645734912), sparse 1% (734281728), duration 23, read/write 104/62 MB/s
...
INFO: status: 98% (66304737280/67645734912), sparse 25% (17106046976), duration 827, read/write 45/45 MB/s
INFO: status: 99% (66983624704/67645734912), sparse 25% (17106046976), duration 836, read/write 75/75 MB/s
INFO: status: 100% (67645734912/67645734912), sparse 25% (17109598208), duration 846, read/write 66/65 MB/s
INFO: transferred 67645 MB in 846 seconds (79 MB/s)
Warning: unable to close filehandle GEN664 properly: Input/output error at /usr/share/perl5/PVE/VZDump/QemuServer.pm line 504.
INFO: stopping kvm after backup task
INFO: archive file size: 40.66GB
INFO: Finished Backup of VM 9114 (00:15:10)
INFO: Backup job finished successfully
TASK OK
LZO restore log:
restore vma archive: lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp6534.fifo - /var/tmp/vzdumptmp6534
CFG: size: 577 name: qemu-server.conf
DEV: dev_id=1 size: 17179869184 devname: drive-virtio1
DEV: dev_id=2 size: 16106127360 devname: drive-virtio2
DEV: dev_id=3 size: 34359738368 devname: drive-virtio3
CTIME: Tue Aug 7 11:44:47 2018
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2', fmt=qcow2 size=17179869184 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-1.qcow2'
map 'drive-virtio1' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2', fmt=qcow2 size=16106127360 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-2.qcow2'
map 'drive-virtio2' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2', fmt=qcow2 size=34359738368 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-3.qcow2'
map 'drive-virtio3' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2' (write zeros = 0)
progress 1% (read 676462592 bytes, duration 3 sec)
progress 2% (read 1352925184 bytes, duration 9 sec)
progress 3% (read 2029387776 bytes, duration 12 sec)
...
progress 53% (read 35852255232 bytes, duration 370 sec)
progress 54% (read 36528717824 bytes, duration 382 sec)
progress 55% (read 37205180416 bytes, duration 388 sec)
lzop: /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo: Checksum error
** (process:6537): ERROR **: restore failed - short vma extent (3646464 < 3801600)
/bin/bash: line 1: 6536 Exit 1 lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo
6537 Trace/breakpoint trap | vma extract -v -r /var/tmp/vzdumptmp6534.fifo - /var/tmp/vzdumptmp6534
temporary volume 'vmimages:9115/vm-9115-disk-1.qcow2' sucessfuly removed
temporary volume 'vmimages:9115/vm-9115-disk-3.qcow2' sucessfuly removed
temporary volume 'vmimages:9115/vm-9115-disk-2.qcow2' sucessfuly removed
no lock found trying to remove 'create' lock
TASK ERROR: command 'set -o pipefail && lzop -d -c /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-11_44_43.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp6534.fifo - /var/tmp/vzdumptmp6534' failed: exit code 133
Gzip backup log:
INFO: starting new backup job: vzdump 9114 --mode stop --storage netback --node prox5mox03 --compress gzip --remove 0
INFO: Starting Backup of VM 9114 (qemu)
INFO: status = stopped
INFO: update VM 9114: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: frickelfix2.intern.netmedia.de
INFO: include disk 'virtio1' 'vmimages:9114/vm-9114-disk-1.qcow2' 16G
INFO: include disk 'virtio2' 'vmimages:9114/vm-9114-disk-2.qcow2' 15G
INFO: include disk 'virtio3' 'vmimages:9114/vm-9114-disk-3.qcow2' 32G
INFO: skip unused drive 'containers:vm-9114-disk-1' (not included into backup)
INFO: snapshots found (not included into backup)
INFO: creating archive '/mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-12_17_28.vma.gz'
INFO: starting kvm to execute backup task
INFO: started backup task '034c8592-6566-434c-851d-3d0bb9dfe7b9'
INFO: status: 0% (167247872/67645734912), sparse 0% (112992256), duration 3, read/write 55/18 MB/s
INFO: status: 1% (680394752/67645734912), sparse 0% (254701568), duration 23, read/write 25/18 MB/s
INFO: status: 2% (1360789504/67645734912), sparse 0% (464650240), duration 50, read/write 25/17 MB/s
INFO: status: 3% (2033582080/67645734912), sparse 1% (732831744), duration 77, read/write 24/14 MB/s
...
INFO: status: 98% (66309980160/67645734912), sparse 25% (17106046976), duration 2659, read/write 18/18 MB/s
INFO: status: 99% (66975170560/67645734912), sparse 25% (17106046976), duration 2696, read/write 17/17 MB/s
INFO: status: 100% (67645734912/67645734912), sparse 25% (17109598208), duration 2733, read/write 18/18 MB/s
INFO: transferred 67645 MB in 2733 seconds (24 MB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 39.51GB
INFO: Finished Backup of VM 9114 (00:45:40)
INFO: Backup job finished successfully
TASK OK
Gzip restore log:
restore vma archive: zcat /mnt/pve/netback/dump/vzdump-qemu-9114-2018_08_07-12_17_28.vma.gz | vma extract -v -r /var/tmp/vzdumptmp16129.fifo - /var/tmp/vzdumptmp16129
CFG: size: 577 name: qemu-server.conf
DEV: dev_id=1 size: 17179869184 devname: drive-virtio1
DEV: dev_id=2 size: 16106127360 devname: drive-virtio2
DEV: dev_id=3 size: 34359738368 devname: drive-virtio3
CTIME: Tue Aug 7 12:17:32 2018
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2', fmt=qcow2 size=17179869184 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-1.qcow2'
map 'drive-virtio1' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-1.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2', fmt=qcow2 size=16106127360 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-2.qcow2'
map 'drive-virtio2' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-2.qcow2' (write zeros = 0)
Formatting '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2', fmt=qcow2 size=34359738368 cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'vmimages:9115/vm-9115-disk-3.qcow2'
map 'drive-virtio3' to '/srv/pve/storages/lv_vmimages/images/9115/vm-9115-disk-3.qcow2' (write zeros = 0)
progress 1% (read 676462592 bytes, duration 5 sec)
progress 2% (read 1352925184 bytes, duration 10 sec)
progress 3% (read 2029387776 bytes, duration 16 sec)
...
progress 98% (read 66292875264 bytes, duration 838 sec)
progress 99% (read 66969337856 bytes, duration 846 sec)
progress 100% (read 67645734912 bytes, duration 854 sec)
total bytes read 67645734912, sparse bytes 17109598208 (25.3%)
space reduction due to 4K zero blocks 0.988%
TASK OK
We are planning to upgrade the node to the latest PVE version and try again.
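Before that, it might also be worth double-checking how the share is mounted and how the storage is defined; a couple of generic checks (the storage name 'netback' is taken from the logs above):

# show the CIFS mounts and their options
mount -t cifs

# show the PVE storage definition for the backup share
grep -A 10 'netback' /etc/pve/storage.cfg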
Best regards
yarick123