Restoring from backup results in "got timeout"

The bug is already fixed, and I will update the repository next week.

How exactly was the problem solved? I ran into the same problem. When restoring, it does not matter whether the archive is gzip, lzo, or uncompressed vma; I keep getting these errors:

command 'vma extract -v -r /var/tmp/vzdumptmp232758.fifo /var/lib/vz/on-13/dump/vzdump-qemu-114-2013_11_08-02_35_28.vma /var/tmp/vzdumptmp232758' failed: got timeout
command 'vma extract -v -r /var/tmp/vzdumptmp232768.fifo /var/lib/vz/on-13/dump/vzdump-qemu-112-2013_11_08-02_20_21.vma /var/tmp/vzdumptmp232768' failed: got timeout
command 'vma extract -v -r /var/tmp/vzdumptmp232784.fifo /var/lib/vz/on-13/dump/vzdump-qemu-111-2013_11_08-02_10_38.vma /var/tmp/vzdumptmp232784' failed: got timeout
command 'vma extract -v -r /var/tmp/vzdumptmp232800.fifo /var/lib/vz/on-13/dump/vzdump-qemu-105-2013_11_08-00_54_07.vma /var/tmp/vzdumptmp232800' failed: got timeout
command 'vma extract -v -r /var/tmp/vzdumptmp232828.fifo /var/lib/vz/on-13/dump/vzdump-qemu-113-2013_11_08-02_28_05.vma /var/tmp/vzdumptmp232828' failed: got timeout
command 'vma extract -v -r /var/tmp/vzdumptmp232844.fifo /var/lib/vz/on-13/dump/vzdump-qemu-109-2013_11_08-01_48_34.vma /var/tmp/vzdumptmp232844' failed: got timeout
command 'vma extract -v -r /var/tmp/vzdumptmp232869.fifo /var/lib/vz/on-13/dump/vzdump-qemu-106-2013_11_08-01_15_36.vma /var/tmp/vzdumptmp232869' failed: got timeout
command 'vma extract -v -r /var/tmp/vzdumptmp232903.fifo /var/lib/vz/on-13/dump/vzdump-qemu-110-2013_11_08-02_04_26.vma /var/tmp/vzdumptmp232903' failed: got timeout
command 'vma extract -v -r /var/tmp/vzdumptmp233463.fifo /var/lib/vz/on-13/dump/vzdump-qemu-104-2013_11_08-00_36_53.vma /var/tmp/vzdumptmp233463' failed: got timeout
command 'vma extract -v -r /var/tmp/vzdumptmp233482.fifo /var/lib/vz/on-13/dump/vzdump-qemu-117-2013_11_08-02_38_04.vma /var/tmp/vzdumptmp233482' failed: got timeout

This is even though I updated to version 3.1 and ran aptitude -y update && aptitude -y full-upgrade:

pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
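
If it is only the task timeout that makes the restore fail, one possible workaround (a hedged sketch, not an official fix; /var/tmp/extract-114 is just an example target directory) is to run the same extraction by hand from a shell, where the management task timeout does not apply:

Code:
# hedged workaround sketch: run the extract manually so the restore task
# timeout does not apply; the target directory is an example name
mkdir -p /var/tmp/extract-114
vma extract -v /var/lib/vz/on-13/dump/vzdump-qemu-114-2013_11_08-02_35_28.vma /var/tmp/extract-114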
 
Same problem here :-( Do more people have this problem? The backup came from a 2.3-13 version :confused: I am trying to restore it on Proxmox version 3.1-21.
 
Similar problem here, backing up from a 2.3-13 server and trying to restore on a 3.2-1. Additionally, my problem seems limited to the VMs using qcow2 format hard drives, as I successfully migrated about 15 other VMs that were using raw.
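
If only the qcow2 guests fail, a hedged sketch of a manual route is to extract the archive by hand and convert the disk afterwards. The extracted file name below is an assumption (check the directory listing for the actual name, which depends on the drive ID stored in the backup), and the target path is just an example:

Code:
# assumption: the extracted disk is named disk-drive-virtio0.raw; the actual
# name depends on the drive ID stored in the backup
vma extract -v /var/lib/vz/on-13/dump/vzdump-qemu-111-2013_11_08-02_10_38.vma /var/tmp/extract-111
qemu-img convert -f raw -O qcow2 \
    /var/tmp/extract-111/disk-drive-virtio0.raw \
    /var/lib/vz/images/111/vm-111-disk-0.qcow2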
 
Is this a bug?
 
I'm having the same problem trying to restore a QM backup with a .lzo extension. Is there a fix for this? I need to restore the backup :-(
 
This was about restoring a KVM guest, right? As a workaround, I believe I did the following steps (see the sketch below) ...

Copy your_KVM_number.conf (for example, 100.conf) to /etc/pve/nodes/host235/qemu-server.

I did this a little while ago; as I remember, it gave a write error, so you may have to stop some PVE services first. Alternatively, create a new VM with the same VMID, so that 100.conf already exists, and edit that file afterwards to match what is in your backup.

Then copy your VM disks to /home/images and it should work. As far as I remember, that was my workaround.
Hope you can fix it like this :) Good luck!
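
Roughly, as a shell sketch of the steps above (hedged: /path/to/backup is a placeholder for wherever the extracted backup lives, and the disk file names are assumptions; host235 and VMID 100 are the examples from this post):

Code:
# rough sketch of the workaround described above; /path/to/backup and the
# disk file names are placeholders, adjust to your node and VM
cp /path/to/backup/100.conf /etc/pve/nodes/host235/qemu-server/100.conf
# if /etc/pve rejects the write, some PVE services may need to be stopped first
mkdir -p /home/images/100
cp /path/to/backup/vm-100-disk-*.qcow2 /home/images/100/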
 
This bug is still present in Proxmox VE 6.

Code:
qmrestore vzdump-qemu-102_2020-07-28-14_54_58.vma.zst 100
restore vma archive: zstd -q -d -c /var/lib/vz/dump/vzdump-qemu-102_2020-07-28-14_54_58.vma.zst | vma extract -v -r /var/tmp/vzdumptmp30350.fifo - /var/tmp/vzdumptmp30350
command 'set -o pipefail && zstd -q -d -c /var/lib/vz/dump/vzdump-qemu-102_2020-07-28-14_54_58.vma.zst | vma extract -v -r /var/tmp/vzdumptmp30350.fifo - /var/tmp/vzdumptmp30350' failed: got timeout

Please fix it!
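
A hedged workaround sketch for the zstd case, again not an official fix (the /var/tmp paths are example names, and the uncompressed archive needs enough free space there): decompress and extract by hand, where the task timeout does not apply:

Code:
# decompress first, then extract; run by hand, neither step is subject to
# the restore task timeout (needs space for the uncompressed archive)
zstd -q -d -c /var/lib/vz/dump/vzdump-qemu-102_2020-07-28-14_54_58.vma.zst > /var/tmp/vzdump-qemu-102.vma
vma extract -v /var/tmp/vzdump-qemu-102.vma /var/tmp/extract-102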
 
Hello,

Same problem here now with a 6 TB disk:

Code:
applying read rate limit: 102400
restore vma archive: cstream -t 104857600 -- /var/lib/vz/dump/vzdump-qemu-140-2023_04_06-10_06_36.vma.zst | zstd -q -d -c - | vma extract -v -r /var/tmp/vzdumptmp881017.fifo - /var/tmp/vzdumptmp881017
CFG: size: 501 name: qemu-server.conf
DEV: dev_id=1 size: 6442450944000 devname: drive-virtio0
CTIME: Thu Apr  6 10:07:12 2023
rate limit for storage local: 102400 KiB/s
Formatting '/var/lib/vz/images/140/vm-140-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=6442450944000 lazy_refcounts=off refcount_bits=16
no lock found trying to remove 'create'  lock
error before or during data restore, some or all disks were not completely restored. VM 140 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && cstream -t 104857600 -- /var/lib/vz/dump/vzdump-qemu-140-2023_04_06-10_06_36.vma.zst | zstd -q -d -c - | vma extract -v -r /var/tmp/vzdumptmp881017.fifo - /var/tmp/vzdumptmp881017' failed: command '/usr/bin/qemu-img create -o 'preallocation=metadata' -f qcow2 /var/lib/vz/images/140/vm-140-disk-0.qcow2 6291456000K' failed: got timeout

The restore fails at exactly 10 minutes.


All packages were up to date when I backed up, and I restored onto a freshly installed server.

Please help.
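
One way to check whether the 10-minute limit on the inner qemu-img call is the culprit (a hedged diagnostic sketch, reusing the exact command from the log above) is to run it by hand and time it; metadata preallocation for a 6 TB qcow2 can easily take longer than the timeout:

Code:
# run the inner command from the failing task by hand and time it; if it
# exceeds 10 minutes, the task timeout is what kills the restore
time qemu-img create -o 'preallocation=metadata' -f qcow2 \
    /var/lib/vz/images/140/vm-140-disk-0.qcow2 6291456000K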
 
