Failed to extract config from VMA archive: zstd: error 70 : Write error : cannot write decoded block : Broken pipe (500)

werter
Hi.
I'm trying to use 'Show Configuration' on any of my VM backups (.zst) and I get 'Failed to extract config from VMA archive: zstd: error 70 : Write error : cannot write decoded block : Broken pipe (500)'. But if I restore the VM, everything works fine.

'Show Configuration' works fine with .lzo and .gz backups.


pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.78-1-pve: 5.4.78-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
ceph-fuse: 14.2.16-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 8.0-2~bpo10+1
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksmtuned: 4.20150325+b1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

Upd1. Tried the latest zstd version from https://github.com/facebook/zstd. Same error.
Code:
zstd -V
*** zstd command line interface 64-bits v1.4.8, by Yann Collet ***
 
I'm trying to use 'Show Configuration' on any of my VM backups (.zst) and I get 'Failed to extract config from VMA archive: zstd: error 70 : Write error : cannot write decoded block : Broken pipe (500)'. But if I restore the VM, everything works fine.
Can you please post a pveversion -v?

Upd1. Tried the latest zstd version from https://github.com/facebook/zstd. Same error.
And please stick with the Debian-installed version; that's also the one we test against.
 
Can you please post a pveversion -v?
The 'pveversion -v' listing is in my previous post.

And please stick with the Debian-installed version; that's also the one we test against.

'Show Configuration' doesn't work with any version of zstd in my case.

P.S. Which command does 'Show Configuration' run?
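A hedged guess at the mechanism (the exact command PVE runs is an assumption here, not confirmed in this thread): 'Show Configuration' likely decompresses the archive and pipes it into a reader that consumes only the small embedded config and then closes the pipe; zstd's next write then fails with EPIPE, which it reports as error 70. The early-reader-exit pattern can be reproduced with generic tools:

```shell
# A writer producing lots of output is piped into a reader that
# stops after the first line. Once the reader exits, the writer's
# next write hits a closed pipe and the writer dies with SIGPIPE.
seq 1 10000000 | head -n 1

# In bash, PIPESTATUS holds each pipeline stage's exit code; a
# SIGPIPE death shows up as 128 + 13 = 141 for the writer (seq).
echo "writer exit code: ${PIPESTATUS[0]}"
```

The pipeline itself reports head's exit code (0), which is why such errors only surface when the writer's status is inspected explicitly.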
 
Maybe your zstd file is broken?
Try 'zstd --check <FILE>'.
 
I'll take the liberty of re-using this thread. Same problem (error message when trying to show the configuration); the zstd check says all is OK:

Code:
# zstd --check vzdump-qemu-113-2021_08_16-18_45_55.vma.zst
vzdump-qemu-113-2021_08_16-18_45_55.vma.zst : 99.55%   (5273936590 => 5250305290 bytes, vzdump-qemu-113-2021_08_16-18_45_55.vma.zst.zst)

Interestingly, I now have 2 files in /var/lib/vzdump:

Code:
-rw-r--r-- 1 root root 5273936590 Aug 16 22:32 vzdump-qemu-113-2021_08_16-18_45_55.vma.zst
-rw-r--r-- 1 root root 5250305290 Aug 16 22:32 vzdump-qemu-113-2021_08_16-18_45_55.vma.zst.zst

of which only the first was put there by me. And - sigh - that second file evidently comes from the zstd --check operation itself. So, checking on the source machine (Proxmox 6.4) instead: 'Show Configuration' works there (on the 5273936590-byte file), and zstd --check also "works" (no error) but again produces a
Code:
5250302644 Aug 16 18:47 vzdump-qemu-113-2021_08_16-18_45_55.vma.zst.zst




Target machine:
Code:
# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-8 (running version: 7.0-8/b1dbf562)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-7
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-2
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.1-1
proxmox-backup-file-restore: 2.0.1-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.2-4
pve-cluster: 7.0-3
pve-container: 4.0-5
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-7
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1
 
Looks like dcsapak misread the documentation ;-)

Code:
$ zstd --help
(...)
--[no-]check : during compression, add XXH64 integrity checksum to frame (default: enabled). If specified with -d, decompressor will ignore/validate checksums in compressed frame (default: validate).
(...)

So the command you used just compressed the file again (additionally adding checksums, which are the default anyway); it doesn't actually validate the input.

So to check the file, AFAICT you have to decompress it by also adding the -d switch. If you do not want to write out the decompressed file, also pass -o /dev/null, so:
zstd -d --check -o /dev/null thefile.zst
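A round-trip sketch of the difference (paths are placeholders):

```shell
# Create a small test file and compress it.
printf 'hello zstd\n' > /tmp/sample.txt
zstd -q -f /tmp/sample.txt -o /tmp/sample.txt.zst

# Decompress-and-verify, discarding the decompressed output
# (-f allows writing to the already-existing /dev/null):
zstd -q -d --check -f -o /dev/null /tmp/sample.txt.zst && echo "verify OK"

# Equivalent shorthand: -t/--test decompresses and validates the
# checksums without writing any output file at all.
zstd -q -t /tmp/sample.txt.zst && echo "test OK"
```

Without -d, zstd's default mode is compression, which is exactly how the stray .zst.zst files above came to be.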
 
So to check the file, AFAICT you have to decompress it by also adding the -d switch. If you do not want to write out the decompressed file, also pass -o /dev/null, so:
zstd -d --check -o /dev/null thefile.zst

The file seems OK on both the source and target machines:

Code:
# zstd -d --check -o /dev/null vzdump-qemu-113-2021_08_16-18_45_55.vma.zst
vzdump-qemu-113-2021_08_16-18_45_55.vma.zst: 9667888640 bytes

'Show Configuration' works on PVE 6.4; on 7.0 it errors - see the thread title.

Restore fails completely

Code:
restore vma archive: zstd -q -d -c /var/lib/vz/dump/vzdump-qemu-113-2021_08_16-18_45_55.vma.zst | vma extract -v -r /var/tmp/vzdumptmp4148708.fifo - /var/tmp/vzdumptmp4148708
CFG: size: 601 name: qemu-server.conf
DEV: dev_id=1 size: 34359738368 devname: drive-sata0
CTIME: Mon Aug 16 18:45:55 2021
error before or during data restore, some or all disks were not completely restored. VM 101 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /var/lib/vz/dump/vzdump-qemu-113-2021_08_16-18_45_55.vma.zst | vma extract -v -r /var/tmp/vzdumptmp4148708.fifo - /var/tmp/vzdumptmp4148708' failed: storage 'data' does not exist

Yeah, of course there is no "data" storage on the target machine, but IMHO that should not lead to such a complete failure.

The end of the story - so far - is a created VM that has nothing to do with the desired parameters.
(screenshots attached: the VM configuration on 6.4 vs. on 7.0)

:p
 
That looks like a different error than the thread title, though. You can just pass a storage name to the restore command to tell it to restore onto an existing storage.
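On the command line that looks like the following; qmrestore is PVE's CLI restore tool, and `local-lvm` here is only a placeholder for a storage that actually exists on the target node:

```shell
# Restore the archive as VM 101, explicitly targeting an existing
# storage instead of the one recorded in the backup.
qmrestore /var/lib/vz/dump/vzdump-qemu-113-2021_08_16-18_45_55.vma.zst 101 --storage local-lvm
```

This can only run on a PVE node, so it is shown here as an illustrative fragment rather than a tested example.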
 
You are right, giving it a different local storage worked.

Ok, sort of. It transformed a qcow2 into a raw, but that's definitely a different issue than the topic of this thread, so feel free to carve that out from here.
 
So, back to the original topic: did anyone figure this out? I'm getting the same error, and I'm trying to keep my backups from filling up all the storage space.
 
Newb here (so please excuse my total ignorance of these technologies) - same issue.
The drive I installed to is an old RAID volume in an old HP G7 server.
I should mention that the RAID volume is a single drive that came with the server.
The storage disk is SCSI connected via USB, but I get the same error wherever I back up.
The containers back up using tar.
The VMs use zst.
I had success with a VM using gz.
Attached are some screenshots.
Ofc, I am way, way over my head with all this techie stuff...
Glad I can move on to the next slew of things I break, and start trying to use this awesome platform.

I also saw a timesync error show up; I rebooted, but it's still there.
I presume this won't mean much until I get another server running.
See the attached screenshots:


 
Ensure you have the latest software updates, as there was a fix for the 'Show Configuration' function for an error like this shortly after the initial 7.0 release.
libpve-storage-perl needs to be at least version 7.0-9; you can check with pveversion -v or the "Package Version" button in the "Node -> Summary" web-interface panel.
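From a shell on the node, either of these shows the installed version (standard PVE/Debian commands):

```shell
# Via the PVE version summary:
pveversion -v | grep libpve-storage-perl

# Or directly from dpkg's package database:
dpkg-query -W -f='${Version}\n' libpve-storage-perl
```

Both require a PVE/Debian node, so they are shown as fragments rather than tested examples.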
 
Thanks,
I installed less than 24 hours ago and updated immediately.
I did, however, create that ISO USB about a week ago...
I just clicked the 'Update' tab; it says everything is up to date.
Here is what it shows:

libpve-storage-perl: 7.0-7

proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-8 (running version: 7.0-8/b1dbf562)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
 
That's outdated. Ensure you set up a subscription for the enterprise repository (Node -> Subscription) or configure a publicly available non-production repository (Node -> Repositories), then check again for updates.
 
It will take some time before a subscription is a reasonable move.
First things first...

Where can I find a tutorial on setting up the non-production repositories that actually works?
 
Where can I find a tutorial on setting up the non-production repositories that actually works?
With Proxmox VE 7.0 you really do not need a tutorial; repository management is built into the web interface.
Just select the node, then the Repositories panel; the Add button should help.

Still, for the record, the package repository documentation can be found at https://pve.proxmox.com/wiki/Package_Repositories
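For the record, on PVE 7 (Debian 11 "Bullseye") the no-subscription repository is a single apt source line, as documented at the URL above:

```
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
```

After adding it (and disabling the enterprise repository if there is no subscription), an `apt update` picks up the new package lists.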
 
Hey thanks!

That is what I get for watching YouTube! But then again, I'd be using something else by now.
I'm no techie; deleting the root on my main drive proves it.
It will take me months to understand enough and make enough progress to get this somewhat functional for what will be required of it.
Some day in the future a subscription will be warranted.
:)
 
root@PVE:~# /etc/apt/sources.list
-bash: /etc/apt/sources.list: Permission denied


root@PVE:~# apt-get /etc/apt/sources.list
E: Invalid operation /etc/apt/sources.list
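For anyone following along: /etc/apt/sources.list is a plain text file, not a command, so it is viewed or edited rather than executed. A minimal sequence (run as root, or via the web UI's Repositories panel instead):

```shell
cat /etc/apt/sources.list     # view the current apt sources
nano /etc/apt/sources.list    # edit them in a terminal editor
apt update                    # refresh package lists after any change
```

These touch system files and are environment-specific, so they are shown as a fragment rather than a tested example.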

Thanks to YouTube:
cat
nano


My bad, I should have done it the easy way first.
IMO, the more info you give out, the more you will get...


Now, if I could only wrap my head around all this LVM / lvm-thin stuff and all the other options for drives, and why drives don't show their full capacity, explained by a non-techie. Let alone clusters!
Oh, and the easiest secure way for a developer to gain access to the server/platform itself from another location.
NVIDIA graphics card drivers, and not to forget network bridging, etc...
 
