Restore fails just after a backup

rabol

Hi

I want to replace a PVE host, so I made a backup of the VM, and everything went fine:


Code:
INFO: starting new backup job: vzdump 100 --remove 0 --storage pbs-hdd --node pve2 --notification-mode notification-system --mode stop --notes-template '{{guestname}}'
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2025-09-19 18:48:17
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: pg-host
INFO: include disk 'scsi0' 'local-lvm:vm-100-disk-0' 32G
INFO: include disk 'scsi1' 'local-lvm:vm-100-disk-1' 96G
INFO: stopping virtual guest
INFO: creating Proxmox Backup Server archive 'vm/100/2025-09-19T16:48:17Z'
INFO: starting kvm to execute backup task
INFO: started backup task '579787a4-2a70-4f05-801f-3ccab0e4b9d4'
INFO: resuming VM again after 2 seconds
INFO: scsi0: dirty-bitmap status: created new
INFO: scsi1: dirty-bitmap status: created new
INFO:  19% (24.6 GiB of 128.0 GiB) in 3s, read: 8.2 GiB/s, write: 4.0 MiB/s
INFO:  37% (47.8 GiB of 128.0 GiB) in 6s, read: 7.7 GiB/s, write: 0 B/s
INFO:  54% (70.1 GiB of 128.0 GiB) in 9s, read: 7.4 GiB/s, write: 16.0 MiB/s
INFO:  73% (93.6 GiB of 128.0 GiB) in 12s, read: 7.8 GiB/s, write: 10.7 MiB/s
INFO:  81% (104.7 GiB of 128.0 GiB) in 15s, read: 3.7 GiB/s, write: 38.7 MiB/s
INFO:  86% (110.9 GiB of 128.0 GiB) in 18s, read: 2.1 GiB/s, write: 12.0 MiB/s
INFO: 100% (128.0 GiB of 128.0 GiB) in 21s, read: 5.7 GiB/s, write: 1.3 MiB/s
INFO: backup is sparse: 115.18 GiB (89%) total zero data
INFO: backup was done incrementally, reused 127.76 GiB (99%)
INFO: transferred 128.00 GiB in 21 seconds (6.1 GiB/s)
INFO: adding notes to backup
INFO: Finished Backup of VM 100 (00:00:24)
INFO: Backup finished at 2025-09-19 18:48:41
INFO: Backup job finished successfully
INFO: notified via target `mail-to-root`
TASK OK

Then, on the new PVE host, I try to restore:

Code:
new volume ID is 'tank_1:vm-100-disk-0'
new volume ID is 'tank_1:vm-100-disk-1'
restore proxmox backup image: /usr/bin/pbs-restore --repository root@pam@192.168.1.9:tank_hdd_2tb vm/100/2025-09-19T16:48:17Z drive-scsi0.img.fidx /dev/zvol/tank_1/vm-100-disk-0 --verbose --format raw --skip-zero
connecting to repository 'root@pam@192.168.1.9:tank_hdd_2tb'
using up to 4 threads
open block backend for target '/dev/zvol/tank_1/vm-100-disk-0'
starting to restore snapshot 'vm/100/2025-09-19T16:48:17Z'
download and verify backup index
fetching up to 16 chunks in parallel
progress 1% (read 343932928 bytes, zeroes = 36% (125829120 bytes), duration 0 sec)
progress 2% (read 687865856 bytes, zeroes = 52% (360710144 bytes), duration 0 sec)
progress 3% (read 1031798784 bytes, zeroes = 68% (704643072 bytes), duration 0 sec)
progress 4% (read 1375731712 bytes, zeroes = 75% (1044381696 bytes), duration 0 sec)
progress 5% (read 1719664640 bytes, zeroes = 80% (1388314624 bytes), duration 0 sec)
progress 6% (read 2063597568 bytes, zeroes = 83% (1732247552 bytes), duration 0 sec)
progress 7% (read 2407530496 bytes, zeroes = 74% (1782579200 bytes), duration 1 sec)
progress 8% (read 2751463424 bytes, zeroes = 64% (1782579200 bytes), duration 1 sec)
progress 9% (read 3095396352 bytes, zeroes = 57% (1782579200 bytes), duration 1 sec)
progress 10% (read 3439329280 bytes, zeroes = 51% (1782579200 bytes), duration 2 sec)
progress 11% (read 3783262208 bytes, zeroes = 47% (1782579200 bytes), duration 2 sec)
restore failed: reading file "/mnt/datastore/tank_hdd_2tb/.chunks/1fe0/1fe00ffb1ed81a08537dd38bc61505319294b6d0b442ce18f96b807dada9532e" failed: Input/output error (os error 5)
temporary volume 'tank_1:vm-100-disk-1' successfully removed
temporary volume 'tank_1:vm-100-disk-0' successfully removed
error before or during data restore, some or all disks were not completely restored. VM 100 state is NOT cleaned up.
TASK ERROR: command '/usr/bin/pbs-restore --repository root@pam@192.168.1.9:tank_hdd_2tb vm/100/2025-09-19T16:48:17Z drive-scsi0.img.fidx /dev/zvol/tank_1/vm-100-disk-0 --verbose --format raw --skip-zero' failed: exit code 255

What am I doing wrong?

Kind regards,
Steen
 
Hi,

Did you already try to verify the corresponding snapshot? There may be reused chunks that are corrupt/broken.
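
As a starting point, something like this on the PBS host should do it (datastore name taken from your restore log; a verify can also be started per snapshot from the PBS web UI):

Code:
# re-read and check every chunk in the datastore (can take a while on an HDD)
proxmox-backup-manager verify tank_hdd_2tb

A failing verify will mark the affected snapshots, so you can see exactly which chunks are broken.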

The error "Input/output error (os error 5)" indicates a problem with the underlying storage, so I'd check the journal/syslog/dmesg on the PBS host.
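
For example, something like this on the PBS host (the chunk path is the one from your restore log; /dev/sdX is a placeholder for whatever disk backs the datastore):

Code:
# kernel/storage errors from the current boot
journalctl -k -p err -b

# try to re-read the exact chunk the restore failed on
dd if=/mnt/datastore/tank_hdd_2tb/.chunks/1fe0/1fe00ffb1ed81a08537dd38bc61505319294b6d0b442ce18f96b807dada9532e of=/dev/null bs=1M

# SMART health of the underlying disk (adjust the device)
smartctl -a /dev/sdX

If the dd read fails with the same I/O error, the disk itself is the problem, not PBS.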