Restore Failures

DaveRR5

Hi All

I'm trying to restore a VM, but it failed with the following output on the PVE server.

Formatting '/DataStore/images/103/vm-103-disk-0.raw', fmt=raw size=80530636800
new volume ID is 'DataStore:103/vm-103-disk-0.raw'
Formatting '/DataStore/images/103/vm-103-disk-1.raw', fmt=raw size=107374182400
new volume ID is 'DataStore:103/vm-103-disk-1.raw'
restore proxmox backup image: /usr/bin/pbs-restore --repository root@pam@pbs1:USB-Backup vm/103/2020-07-14T23:08:12Z drive-sata0.img.fidx /DataStore/images/103/vm-103-disk-0.raw --verbose --format raw --skip-zero
connecting to repository 'root@pam@pbs1:USB-Backup'
open block backend for target '/DataStore/images/103/vm-103-disk-0.raw'
starting to restore snapshot 'vm/103/2020-07-14T23:08:12Z'
download and verify backup index
progress 1% (read 805306368 bytes, zeroes = 59% (482344960 bytes), duration 9 sec)
progress 2% (read 1610612736 bytes, zeroes = 30% (499122176 bytes), duration 48 sec)
progress 3% (read 2415919104 bytes, zeroes = 20% (499122176 bytes), duration 87 sec)
progress 4% (read 3221225472 bytes, zeroes = 15% (499122176 bytes), duration 125 sec)
progress 5% (read 4026531840 bytes, zeroes = 12% (499122176 bytes), duration 150 sec)
progress 6% (read 4831838208 bytes, zeroes = 10% (499122176 bytes), duration 181 sec)
progress 7% (read 5637144576 bytes, zeroes = 8% (499122176 bytes), duration 231 sec)
progress 8% (read 6442450944 bytes, zeroes = 7% (499122176 bytes), duration 288 sec)
progress 9% (read 7247757312 bytes, zeroes = 6% (499122176 bytes), duration 323 sec)
progress 10% (read 8053063680 bytes, zeroes = 6% (499122176 bytes), duration 357 sec)
progress 11% (read 8858370048 bytes, zeroes = 5% (499122176 bytes), duration 386 sec)
restore failed: blob too small (0 bytes).
temporary volume 'DataStore:103/vm-103-disk-1.raw' sucessfuly removed
temporary volume 'DataStore:103/vm-103-disk-0.raw' sucessfuly removed
TASK ERROR: command '/usr/bin/pbs-restore --repository root@pam@pbs1:USB-Backup vm/103/2020-07-14T23:08:12Z drive-sata0.img.fidx /DataStore/images/103/vm-103-disk-0.raw --verbose --format raw --skip-zero' failed: exit code 255


I ran a verification from the PBS server and got the same error:

2020-07-19T11:01:21+01:00: verify USB-Backup:vm/103/2020-07-18T23:14:52Z
2020-07-19T11:01:21+01:00: check qemu-server.conf.blob
2020-07-19T11:01:21+01:00: check drive-sata0.img.fidx
2020-07-19T11:02:04+01:00: verify USB-Backup:vm/103/2020-07-18T23:14:52Z/drive-sata0.img.fidx failed: blob too small (0 bytes).

Is it possible to download the virtual disks and import them manually?
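In principle something like the following should let me pull a single disk image down and import it by hand (just a sketch, reusing the repository, snapshot and storage names from the log above; the archive name has to match what the snapshot content listing shows, and a corrupt chunk would make the download fail at the same point):

Code:
# download the raw disk image from the PBS snapshot to a local file
proxmox-backup-client restore "vm/103/2020-07-14T23:08:12Z" drive-sata0.img /tmp/vm-103-disk-0.raw --repository root@pam@pbs1:USB-Backup
# import the downloaded image as an unused disk of VM 103 on the PVE node
qm importdisk 103 /tmp/vm-103-disk-0.raw DataStore --format raw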

Any help is greatly appreciated

Thanks
Dave
 
This would imply that a chunk file is empty. Can you try the following:
Code:
find PATH/OF/DATASTORE/.chunks -type f -empty
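If that turns up anything, a variant such as this (assuming GNU find) additionally prints each empty chunk's modification time, which can help correlate it with a particular backup or sync run:

Code:
find PATH/OF/DATASTORE/.chunks -type f -empty -printf '%T+ %p\n'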
 
Same error here in the sync log:
2020-07-31T09:37:04+02:00: re-sync snapshot "ct/110/2020-07-30T20:23:07Z" done
2020-07-31T09:37:05+02:00: re-sync snapshot "ct/111/2020-07-30T20:25:32Z"
2020-07-31T09:37:05+02:00: no data changes
2020-07-31T09:37:05+02:00: re-sync snapshot "ct/111/2020-07-30T20:25:32Z" done
2020-07-31T09:37:05+02:00: sync snapshot "host/frgrshpes013/2020-07-28T19:00:01Z"
2020-07-31T09:37:05+02:00: sync archive root.pxar.didx
2020-07-31T09:37:05+02:00: sync group host/frgrshpes013 failed - blob too small (0 bytes).
2020-07-31T09:37:05+02:00: re-sync snapshot "host/frgrshpes014/2020-07-30T19:00:01Z"
2020-07-31T09:37:05+02:00: no data changes
2020-07-31T09:37:05+02:00: re-sync snapshot "host/frgrshpes014/2020-07-30T19:00:01Z" done
2020-07-31T09:37:05+02:00: sync snapshot "host/frgrshpes015/2020-07-28T19:00:01Z"
2020-07-31T09:37:05+02:00: sync archive root.pxar.didx
2020-07-31T09:37:05+02:00: sync group host/frgrshpes015 failed - blob too small (0 bytes).
2020-07-31T09:37:05+02:00: re-sync snapshot "vm/101/2020-07-30T19:00:03Z"
2020-07-31T09:37:05+02:00: no data changes
2020-07-31T09:37:05+02:00: re-sync snapshot "vm/101/2020-07-30T19:00:03Z" done

The find command does not return anything on the target DS, but it returns 180 empty chunks on the source DS (even after garbage collection).

Cheers,
luphi
 
On the target they are not created because the read already fails. Written chunks should never be empty - they contain a header. Did you set any filesystem-related options that might interfere? On what kind of storage is the datastore located?
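A quick way to eyeball this is to list the smallest files in the chunk store (a shell sketch, assuming GNU find) - since valid chunks always carry a header, anything empty or only a few bytes long is suspect:

Code:
find PATH/OF/DATASTORE/.chunks -type f -printf '%s %p\n' | sort -n | head -n 20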
 
Can you do a verify on the source datastore?
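On the PBS host that would be along the lines of (with the datastore name as configured there):

Code:
proxmox-backup-manager verify <your-datastore>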
 
PBS is installed from the ISO in a VM, with 2 disks (OS + DS, both ext4).
The host PVE is running on ZFS.
Verify ends with the same error, "blob too small".

But it looks like only host backups are affected.
A verification is currently running across the whole DS...
I will report once it's done.
 
Could you post the full verify output? Thanks!
 
could you post the full verify output?
root@pbs-saumur:/datastore/saumur# proxmox-backup-manager verify saumur
verify datastore saumur
verify group saumur:host/frgrshpes015
verify saumur:host/frgrshpes015/2020-07-31T07:54:50Z
check root.pxar.didx
verify saumur:host/frgrshpes015/2020-07-31T07:54:50Z/root.pxar.didx failed: blob too small (0 bytes).
check catalog.pcat1.didx
verify group saumur:host/frgrshpes013
verify saumur:host/frgrshpes013/2020-07-31T07:52:58Z
check root.pxar.didx
verify saumur:host/frgrshpes013/2020-07-31T07:52:58Z/root.pxar.didx failed: blob too small (0 bytes).
check catalog.pcat1.didx
verify group saumur:host/frgrshpes014
verify saumur:host/frgrshpes014/2020-07-31T07:54:48Z
check root.pxar.didx
check catalog.pcat1.didx
verify group saumur:vm/104
verify saumur:vm/104/2020-07-30T19:31:33Z
check qemu-server.conf.blob
check drive-scsi0.img.fidx
verify saumur:vm/104/2020-07-29T19:27:48Z
check qemu-server.conf.blob
check drive-scsi0.img.fidx
verify saumur:vm/104/2020-07-29T09:16:45Z
check qemu-server.conf.blob
check drive-scsi0.img.fidx
verify group saumur:vm/105
verify saumur:vm/105/2020-07-30T19:33:26Z
check qemu-server.conf.blob
check drive-scsi0.img.fidx
verify saumur:vm/105/2020-07-29T19:29:23Z
check qemu-server.conf.blob
check drive-scsi0.img.fidx
verify saumur:vm/105/2020-07-28T19:28:27Z
check qemu-server.conf.blob
check drive-scsi0.img.fidx
verify group saumur:vm/106
verify saumur:vm/106/2020-07-30T20:12:46Z
check qemu-server.conf.blob
check drive-scsi0.img.fidx
verify saumur:vm/106/2020-07-29T20:11:08Z
check qemu-server.conf.blob
check drive-scsi0.img.fidx
still running, the rest will follow....
 
I am having this blob issue as well.
I am having trouble restoring a Windows Server 2019 Standard VM backup which shows as 1.17 TiB.
On the other hand, I have a 32 GiB Debian VM which restores fine.
I am running a verify task on the PBS server as well.
 
Any developments? If the verification fails, is there any way to repair it? Or should I remove those backups and start fresh?
Does it matter whether backups are done in Snapshot, Suspend or Stop mode for the incremental VMs?
Thanks.
 
@Jarvar could you provide more information about your setup (HW, storage, filesystem). Logs from around the time of a 'broken' backup would be interesting as well. Are only older backups affected, or also some that were made since the last PBS package upgrade?
 
@fabian

My setup is almost like @luphi
I made a vzdump backup and brought it local to back it up to PBS as well; however, I got the same 'blob too small' error. My other non-VM backups work, but not the Windows Server 2016 one.
I'm wondering if it could be my disks, which are VM drives:
1 x 32 GB OS drive
2 x 200 GB datastore drives (DS1 and DS2), both ext4
I also passed a third, ~3.7 TB USB drive through to the PBS VM and formatted it as DS3, ext4 as well.
I am backing up to that now to see if there is any change, since it's not a VM disk but an external drive.

I have a Dell R340 (E-2174G) with 32 GB of ECC unbuffered RAM,
dual 240 GB SSDs in ZFS RAID 1, and dual 960 GB SSDs in ZFS RAID 1 for storage and VMs.
They are also connected to an NFS server from Synology to store VMs and backups.
 
An update: I did a backup from my local network using the downloaded remote backup, and the verification passed. Now I am going to try to complete an incremental backup from the remote location. The image I have is a couple of days old, so this may take a while to sync up.
So, to the external USB drive passed to the PBS VM, the local backup worked and the verification passed.
The remote backup is currently in progress...
 
I have gotten a few weeks without errors. However, for a particular VM I am getting the 'blob too small' error again. I went back to the last backup of that VM without errors and deleted the ones with errors.
I ran a backup using Stop mode and still got an error.
Odd behaviour, since another node I am backing up had something similar, and I got that fixed by doing the same thing...
Any ideas?
Thank you in advance.
 
Did you wait a day and run GC between deleting the snapshots and re-trying the backup?
 
@fabian
I did not do that. Should I?
I am trying to figure out how things work.
My assumption is that each subsequent backup reuses chunks from the previous backup when it sees no significant changes.
The log shows
107: 2020-09-07 02:30:24 INFO: backup is sparse: 911.76 GiB (81%) total zero data
107: 2020-09-07 02:30:24 INFO: backup was done incrementally, reused 1.06 TiB (96%)
107: 2020-09-07 02:30:24 INFO: transferred 1.09 TiB in 11532 seconds (99.2 MiB/s)
107: 2020-09-07 02:30:24 INFO: Finished Backup of VM 107 (03:12:36)

Thus if the previous backup contains an erroneous chunk, the following one could reuse that problematic chunk, and so on. Hence when I had a failed verification on one backup, the backups after it seemed to fail as well.

So what I have done is delete the snapshots, run GC on that datastore, and then retry the backup.
Should I be waiting a full 24 hours in between before making another backup?

Thank you so much.
 
Almost - you need to prune/forget, then wait 24h, then run GC, and then do the backup. See the docs: https://pbs.proxmox.com/docs/administration-guide.html#id2
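Roughly, that sequence on the CLI could look like this (snapshot path, repository and datastore name below are placeholders, adjust to your setup):

Code:
# 1. forget/remove the corrupt snapshot(s)
proxmox-backup-client forget "vm/107/<TIMESTAMP>" --repository root@pam@<pbs-host>:<datastore>
# 2. wait 24h, then start garbage collection on the PBS host
proxmox-backup-manager garbage-collection start <datastore>
# 3. only afterwards run the next backup of the guest from PVE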

We are currently implementing a mechanism to mark corrupt chunks so that a client can upload the correct one again instead of skipping it, which would make this 'dance' no longer needed.
 
I had this error. I verified the backup, which renamed the corrupt chunks as bad. Ran another backup; it verified OK and restored OK.
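For anyone hitting this later, the steps were roughly (datastore name is a placeholder):

Code:
# verification renames corrupt chunks (*.bad), so the next backup re-uploads them
proxmox-backup-manager verify <datastore>
# then run a new backup of the affected guest from PVE and verify again
proxmox-backup-manager verify <datastore>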
Thanks.
 
I have this problem. I am verifying the backup on PBS, but is there a solution to get the VM guest back, as the config is now missing?
 
What exactly is "this problem"? If your backup is corrupt, you can't restore it using the regular means.
 
