Timeout on restoring VM

Jan B

Dec 19, 2023
When restoring a VM using the WebUI, we get a timeout error after about 30 seconds:

Log file of the corresponding restore task:
Code:
root@pve:/var/log/pve/tasks/9# cat *125*
restore vma archive: zstd -q -d -c /mnt/pve/hetzner-storagebox/dump/vzdump-qemu-125-2023_11_20-19_31_07.vma.zst | vma extract -v -r /var/tmp/vzdumptmp946688.fifo - /var/tmp/vzdumptmp946688
error before or during data restore, some or all disks were not completely restored. VM 125 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/hetzner-storagebox/dump/vzdump-qemu-125-2023_11_20-19_31_07.vma.zst | vma extract -v -r /var/tmp/vzdumptmp946688.fifo - /var/tmp/vzdumptmp946688' failed: got timeout

The same error occurs when using the CLI for the restore:

Code:
root@pve:~# qmrestore /mnt/pve/hetzner-storagebox/dump/vzdump-qemu-125-2023_11_20-19_31_07.vma.zst 666
restore vma archive: zstd -q -d -c /mnt/pve/hetzner-storagebox/dump/vzdump-qemu-125-2023_11_20-19_31_07.vma.zst | vma extract -v -r /var/tmp/vzdumptmp522559.fifo - /var/tmp/vzdumptmp522559
error before or during data restore, some or all disks were not completely restored. VM 666 state is NOT cleaned up.
command 'set -o pipefail && zstd -q -d -c /mnt/pve/hetzner-storagebox/dump/vzdump-qemu-125-2023_11_20-19_31_07.vma.zst | vma extract -v -r /var/tmp/vzdumptmp522559.fifo - /var/tmp/vzdumptmp522559' failed: got timeout
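
As a rough check of the read path alone, streaming the archive through zstd and discarding the output should show whether decompressing straight from the storage box is already too slow (just a sketch, using the same archive path as above):

Code:
# decompress the archive from the remote storage, discard the output
time zstd -q -d -c /mnt/pve/hetzner-storagebox/dump/vzdump-qemu-125-2023_11_20-19_31_07.vma.zst > /dev/null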

If we copy our backup from the remote backup storage to the local host, the data transfer takes about a minute:

Code:
root@pve:~# time cp /mnt/pve/hetzner-storagebox/dump/vzdump-qemu-123-2023_09_29-16_41_20.vma.zst ./
real    1m1.282s
user    0m0.009s
sys     0m2.133s
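
(For context, the effective throughput of that copy follows from the archive size, which can be checked with a plain ls:)

Code:
ls -lh /mnt/pve/hetzner-storagebox/dump/vzdump-qemu-123-2023_09_29-16_41_20.vma.zst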

A CLI restore from the local copy works without any problems:

Code:
root@pve:~# qmrestore ./vzdump-qemu-123-2023_09_29-16_41_20.vma.zst 127
restore vma archive: zstd -q -d -c /root/vzdump-qemu-123-2023_09_29-16_41_20.vma.zst | vma extract -v -r /var/tmp/vzdumptmp329770.fifo - /var/tmp/vzdumptmp329770
CFG: size: 566 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 34359738368 devname: drive-scsi0
CTIME: Fri Sep 29 16:41:21 2023
Formatting '/var/lib/vz/images/127/vm-127-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=540672 lazy_refcounts=off refcount_bits=16
new volume ID is 'local:127/vm-127-disk-0.qcow2'
Formatting '/var/lib/vz/images/127/vm-127-disk-1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=34359738368 lazy_refcounts=off refcount_bits=16
new volume ID is 'local:127/vm-127-disk-1.qcow2'
map 'drive-efidisk0' to '/var/lib/vz/images/127/vm-127-disk-0.qcow2' (write zeros = 0)
map 'drive-scsi0' to '/var/lib/vz/images/127/vm-127-disk-1.qcow2' (write zeros = 0)
progress 1% (read 343605248 bytes, duration 0 sec)
progress 2% (read 687210496 bytes, duration 0 sec)
...
progress 99% (read 34016722944 bytes, duration 21 sec)
progress 100% (read 34360262656 bytes, duration 21 sec)
total bytes read 34360328192, sparse bytes 23411388416 (68.1%)
space reduction due to 4K zero blocks 4.03%
rescan volumes...
root@pve:~#

Any suggestions on how to solve the timeout issue? I've found several threads in this forum with similar errors, but haven't found a solution so far (other than the manual transfer and restore shown above).
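
For reference, the manual workaround boils down to roughly this (VM ID 127 is just an example; paths and archive name as in the log above):

Code:
# copy the archive to local storage first, then restore from the local copy
cp /mnt/pve/hetzner-storagebox/dump/vzdump-qemu-123-2023_09_29-16_41_20.vma.zst /root/
qmrestore /root/vzdump-qemu-123-2023_09_29-16_41_20.vma.zst 127
# remove the local copy afterwards
rm /root/vzdump-qemu-123-2023_09_29-16_41_20.vma.zst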
