Unable to load chunk ... Data blob has wrong CRC checksum.

kurdam

Hi, for a few days I've been experiencing some pretty troublesome problems with my PBS.
I increased the size of my datastore from 4TB to 10TB, and since then a lot of my verify tasks have been failing.
Here is one of the many failed logs I'm receiving:

Code:
2021-11-03T02:13:50+01:00: Automatically verifying newly added snapshot
2021-11-03T02:13:50+01:00: verify Filer3:ct/102/2021-11-03T01:00:02Z
2021-11-03T02:13:50+01:00:   check pct.conf.blob
2021-11-03T02:13:50+01:00:   check root.pxar.didx
2021-11-03T02:14:25+01:00: can't verify chunk, load failed - store 'Filer3', unable to load chunk '02afb848a1352d195663d5f05ddf22dfe7acb8e0a80eab18fbdb1d89c8547c2e' - unable to parse raw blob - wrong magic
2021-11-03T02:14:25+01:00: corrupted chunk renamed to "/mnt/Filer3/.chunks/02af/02afb848a1352d195663d5f05ddf22dfe7acb8e0a80eab18fbdb1d89c8547c2e.0.bad"
2021-11-03T02:25:47+01:00: can't verify chunk, load failed - store 'Filer3', unable to load chunk '6cc08edc6c11418eefdd0b7ddd78c80dba12137889b9ec9d23c4b6a96e5ddf22' - Data blob has wrong CRC checksum.
2021-11-03T02:25:47+01:00: corrupted chunk renamed to "/mnt/Filer3/.chunks/6cc0/6cc08edc6c11418eefdd0b7ddd78c80dba12137889b9ec9d23c4b6a96e5ddf22.0.bad"
2021-11-03T02:34:21+01:00: can't verify chunk, load failed - store 'Filer3', unable to load chunk 'bcca078640b1a93c2b2ec1b2d86f225d259ffa06d0ad4b584f011a6580d447be' - Data blob has wrong CRC checksum.
2021-11-03T02:34:21+01:00: corrupted chunk renamed to "/mnt/Filer3/.chunks/bcca/bcca078640b1a93c2b2ec1b2d86f225d259ffa06d0ad4b584f011a6580d447be.0.bad"
2021-11-03T02:41:32+01:00:   verified 48621.34/85962.92 MiB in 1662.36 seconds, speed 29.25/51.71 MiB/s (3 errors)
2021-11-03T02:41:32+01:00: verify Filer3:ct/102/2021-11-03T01:00:02Z/root.pxar.didx failed: chunks could not be verified
2021-11-03T02:41:32+01:00:   check catalog.pcat1.didx
2021-11-03T02:41:33+01:00:   verified 0.40/1.17 MiB in 0.15 seconds, speed 2.62/7.61 MiB/s (0 errors)
2021-11-03T02:41:33+01:00: TASK ERROR: verification failed - please check the log for details
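
For reference, the verify task renames the chunks it cannot read with a .bad suffix, as seen in the log above. To get an idea of how many chunks have been flagged so far, something like this should work (a quick sketch, assuming the datastore path /mnt/Filer3 from the log):

Code:
# list every chunk the verify tasks have renamed as corrupt
find /mnt/Filer3/.chunks -name '*.bad'
# count them to see how widespread the damage is
find /mnt/Filer3/.chunks -name '*.bad' | wc -l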

This is a pretty serious problem for me, as it's blocking me from restoring certain VMs.

Code:
  Wiping dos signature on /dev/serv6-nvme/vm-1014-disk-0.
  Logical volume "vm-1014-disk-0" created.
new volume ID is 'serv6-nvme:vm-1014-disk-0'
restore proxmox backup image: /usr/bin/pbs-restore --repository root@pam@10.10.0.25:Filer3 vm/1014/2021-11-03T08:15:02Z drive-ide0.img.fidx /dev/serv6-nvme/vm-1014-disk-0 --verbose --format raw
connecting to repository 'root@pam@10.10.0.25:Filer3'
open block backend for target '/dev/serv6-nvme/vm-1014-disk-0'
starting to restore snapshot 'vm/1014/2021-11-03T08:15:02Z'
download and verify backup index
progress 1% (read 603979776 bytes, zeroes = 15% (92274688 bytes), duration 17 sec)
progress 2% (read 1203765248 bytes, zeroes = 7% (92274688 bytes), duration 32 sec)
progress 3% (read 1807745024 bytes, zeroes = 5% (92274688 bytes), duration 48 sec)
progress 4% (read 2407530496 bytes, zeroes = 3% (92274688 bytes), duration 64 sec)
progress 5% (read 3007315968 bytes, zeroes = 3% (92274688 bytes), duration 76 sec)
progress 6% (read 3611295744 bytes, zeroes = 2% (92274688 bytes), duration 92 sec)
progress 7% (read 4211081216 bytes, zeroes = 2% (92274688 bytes), duration 105 sec)
progress 8% (read 4810866688 bytes, zeroes = 1% (92274688 bytes), duration 119 sec)
progress 9% (read 5414846464 bytes, zeroes = 1% (92274688 bytes), duration 138 sec)
progress 10% (read 6014631936 bytes, zeroes = 1% (92274688 bytes), duration 154 sec)
progress 11% (read 6614417408 bytes, zeroes = 1% (92274688 bytes), duration 170 sec)
progress 12% (read 7218397184 bytes, zeroes = 1% (92274688 bytes), duration 185 sec)
progress 13% (read 7818182656 bytes, zeroes = 1% (92274688 bytes), duration 200 sec)
progress 14% (read 8422162432 bytes, zeroes = 1% (92274688 bytes), duration 215 sec)
progress 15% (read 9021947904 bytes, zeroes = 1% (92274688 bytes), duration 226 sec)
progress 16% (read 9621733376 bytes, zeroes = 0% (92274688 bytes), duration 241 sec)
progress 17% (read 10225713152 bytes, zeroes = 0% (92274688 bytes), duration 257 sec)
progress 18% (read 10825498624 bytes, zeroes = 0% (92274688 bytes), duration 269 sec)
progress 19% (read 11425284096 bytes, zeroes = 3% (419430400 bytes), duration 275 sec)
progress 20% (read 12029263872 bytes, zeroes = 4% (549453824 bytes), duration 288 sec)
progress 21% (read 12629049344 bytes, zeroes = 4% (557842432 bytes), duration 303 sec)
progress 22% (read 13228834816 bytes, zeroes = 4% (566231040 bytes), duration 319 sec)
progress 23% (read 13832814592 bytes, zeroes = 4% (566231040 bytes), duration 336 sec)
progress 24% (read 14432600064 bytes, zeroes = 3% (570425344 bytes), duration 350 sec)
progress 25% (read 15032385536 bytes, zeroes = 3% (583008256 bytes), duration 366 sec)
progress 26% (read 15636365312 bytes, zeroes = 3% (583008256 bytes), duration 384 sec)
progress 27% (read 16236150784 bytes, zeroes = 3% (583008256 bytes), duration 400 sec)
progress 28% (read 16840130560 bytes, zeroes = 3% (583008256 bytes), duration 421 sec)
progress 29% (read 17439916032 bytes, zeroes = 3% (583008256 bytes), duration 437 sec)
progress 30% (read 18039701504 bytes, zeroes = 3% (583008256 bytes), duration 450 sec)
restore failed: reading file "/mnt/Filer3/.chunks/5cff/5cfffe3f1cac7c79282ed244bbb1791a687a0e70b3f9be8523d80a9fb37cc850" failed: No such file or directory (os error 2)
  Logical volume "vm-1014-disk-0" successfully removed
temporary volume 'serv6-nvme:vm-1014-disk-0' sucessfuly removed
error before or during data restore, some or all disks were not completely restored. VM 1014 state is NOT cleaned up.
TASK ERROR: command '/usr/bin/pbs-restore --repository root@pam@10.10.0.25:Filer3 vm/1014/2021-11-03T08:15:02Z drive-ide0.img.fidx /dev/serv6-nvme/vm-1014-disk-0 --verbose --format raw' failed: exit code 25

I'm kind of lost, and I'm not seeing a lot of threads related to this problem.

Thank you in advance.
 
It seems that those chunks are broken/corrupted. I'd check the underlying storage/disks for errors.
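
For example, on the Linux side something like this is a reasonable first pass (a rough sketch; the exact devices and tools depend on how your storage is set up):

Code:
# SMART health of each physical disk backing the datastore (replace /dev/sdX)
smartctl -a /dev/sdX
# recent kernel messages about I/O problems
dmesg -T | grep -iE 'error|fail'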
 
The datastore is a virtual disk mounted on an SMB share on a file server. It doesn't look corrupted to me, as the other backups are working fine.

The only thing I did was increase the size of the corresponding vdisk from 4TB to 10TB.

Is there a way to test said vdisk to check for errors? The SMB share is on a Windows Server 2019 machine.
 
It doesn't look corrupted to me, as the other backups are working fine.
The chunks in the message do look like they are corrupted, because the content does not match the checksum.

Is there a way to test said vdisk to check for errors? The SMB share is on a Windows Server 2019 machine.
I guess you have to check the disks/storage in the Windows server?
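
Independent of that, on the PBS side you can re-run a full verification of the datastore to see how many snapshots are actually affected, e.g. (a sketch, using the datastore name from your log):

Code:
# start a verification task for the whole datastore
proxmox-backup-manager verify Filer3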
 
[Attachment: Capture.PNG]
I can confirm that disk0 was increased to 10TB, but it only shows 4TB and is growing live.
I don't know if this behaviour is normal. I was thinking that the QCOW2 file would be the size of the vdisk, as it would reserve that space for itself.
 
How exactly did you resize the disk? (PVE? If yes, what does the task log say?)

How is your SMB server configured? (filesystem/storage/etc.)
 
To increase the size, I went to PVE, selected the PBS VM -> Hardware -> Hard Disk (disk0) -> Resize disk, and added 6000G.
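
(For reference, the CLI equivalent on the PVE node would be roughly the following; the VMID 100 and disk name scsi0 are placeholders, adjust them to the actual PBS VM:)

Code:
# grow the vdisk by 6000G (VMID and disk name are examples)
qm resize 100 scsi0 +6000G
# confirm the new size in the VM config
qm config 100 | grep scsi0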

[Attachment: Capture.PNG]

And for the share:

[Attachment: Capture.PNG]

For the storage.cfg file:

Code:
cifs: FILER3-BACKUP-PVE
        path /mnt/pve/FILER3-BACKUP-PVE
        server XXXXXXXXXXX
        share XXXXXXXXXXX
        content iso,backup,rootdir,vztmpl,snippets,images
        prune-backups keep-all=1
        username XXXXXXXXXX
 
Edit: I did add another gig to the 10TB drive to see if it created logs, but it doesn't seem to.
 
Well, my guess is still that it's an error of the underlying (physical) disks of the storage server that propagates to the chunks...
Or maybe it's a memory issue, but with the data at hand, it's hard to say.
 
OK, I created another thread to ask how to move existing backups from one datastore to another.

What I think I'll do in order to solve my problem is:

- Create a new 10TB datastore
- Move my verified backups onto it, in order not to lose too much backup history (see the sketch below)
- Change my backup rules in PVE to store on the new 10TB datastore from now on
- And finally, try to resolve the problem with the corrupted datastore that prevents me from doing verifies.
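
For the "move my verified backups" step, from what I understand the usual way is to add the PBS itself as a remote and create a sync job that pulls from the old datastore into the new one, roughly like this (the remote name, password, fingerprint and the new datastore name "Filer3-new" are placeholders, and option names may differ slightly between PBS versions, so check proxmox-backup-manager sync-job create --help):

Code:
# add the local PBS as a "remote" so a sync job can pull from it
proxmox-backup-manager remote create local-pbs --host 127.0.0.1 \
    --auth-id root@pam --password 'xxxxx' --fingerprint '<server fingerprint>'
# pull everything from the old datastore (Filer3) into the new one (Filer3-new)
proxmox-backup-manager sync-job create copy-old-to-new \
    --store Filer3-new --remote local-pbs --remote-store Filer3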

The weird thing is that it's not showing this behaviour on all my VMs, only on certain ones.
I have to find a way to quickly resolve this issue, because for now I can't restore a VM that has not been verified.
 
As an update, I can confirm that there was a bug during my disk increase: the new vdisk that I'm going to use as my new main datastore took its full 10TB of space, whereas my old 4TB datastore (disk0) that I increased to 10TB is actually taking 4.5TB and increasing as the backups grow.

[Attachment: Capture.PNG]
 
I am experiencing the same checksum error. My underlying storage is fine, no errors. It's quite concerning that my backups don't actually work; good thing I am just trying to move my VM between hosts.
 
I'm also experiencing this issue; none of my restorations are working at all. I've tried about 10 so far and none of them have worked.

One of my VMs recently died. My storage zpool shows no errors. I had to disable verify backups because I have several Windows VMs whose verification could take more than 2 days, and I need to do at least 1 daily backup, so backing up those VMs every 3 days wasn't an option.

Backup restorations were working fine when I tested PBS on version 1; since then I've restored several times with no problem. I also upgraded to version 2 several weeks or months ago, but this is the first time since the upgrade that I've had to recover a VM from a backup, and that backup had been made with version 2 (if that matters at all; I mean, I'm not trying to restore a v1 backup).

I can browse the backups, but when I try to download anything, it just downloads a few bytes. I can't restore any of them.

Before clicking on "Post reply" here, I tried to verify several VM backups; all failed, but the CTs seem to pass verification.
 
I'm also experiencing this issue; none of my restorations are working at all. I've tried about 10 so far and none of them have worked.
What errors do you get during restore/verify?
 
Please be more specific, e.g. post the relevant part of the task log.
Sorry, Dominik ;)

Code:
2022-04-19T08:29:56+02:00: starting garbage collection on store STRAUSS
2022-04-19T08:29:56+02:00: Start GC phase1 (mark used chunks)
2022-04-19T08:32:38+02:00: WARN: warning: unable to access non-existent chunk 23082d6bb5b70fb3705144344460abf6e677e54a78a1b765e0eb8cd323320e91, required by "/STRAUSS/vm/2000/2022-04-14T15:00:02Z/drive-scsi0.img.fidx"
2022-04-19T08:32:38+02:00: WARN: warning: unable to access non-existent chunk 29f367ce38c5f6517224e4709e98dc77b07908630815e619a9a04d57368600ed, required by "/STRAUSS/vm/2000/2022-04-14T15:00:02Z/drive-scsi0.img.fidx"
2022-04-19T08:32:48+02:00: marked 1% (1 of 96 index files)
2022-04-19T08:32:48+02:00: marked 2% (2 of 96 index files)
2022-04-19T08:32:58+02:00: WARN: warning: unable to access non-existent chunk 23082d6bb5b70fb3705144344460abf6e677e54a78a1b765e0eb8cd323320e91, required by "/STRAUSS/vm/2000/2022-04-15T15:00:02Z/drive-scsi0.img.fidx"
2022-04-19T08:32:58+02:00: WARN: warning: unable to access non-existent chunk 29f367ce38c5f6517224e4709e98dc77b07908630815e619a9a04d57368600ed, required by "/STRAUSS/vm/2000/2022-04-15T15:00:02Z/drive-scsi0.img.fidx"
2022-04-19T08:32:59+02:00: marked 3% (3 of 96 index files)
2022-04-19T08:32:59+02:00: marked 4% (4 of 96 index files)
2022-04-19T08:33:07+02:00: marked 5% (5 of 96 index files)
2022-04-19T08:33:08+02:00: marked 6% (6 of 96 index files)
2022-04-19T08:33:16+02:00: marked 7% (7 of 96 index files)
2022-04-19T08:33:16+02:00: marked 8% (8 of 96 index files)
2022-04-19T08:33:23+02:00: marked 9% (9 of 96 index files)
2022-04-19T08:33:23+02:00: marked 10% (10 of 96 index files)
2022-04-19T08:33:29+02:00: marked 11% (11 of 96 index files)
2022-04-19T08:33:29+02:00: marked 12% (12 of 96 index files)
2022-04-19T08:34:55+02:00: marked 13% (13 of 96 index files)
2022-04-19T08:34:56+02:00: WARN: warning: unable to access non-existent chunk c646b67e218b8d1bbc9163937c46cffc520c96c0a30d087840f3db16aef2f1fe, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
2022-04-19T08:34:59+02:00: WARN: warning: unable to access non-existent chunk 4b117d54fb1603682cc8a886103da889e30a3c4824055844bc1f858afbf15885, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
2022-04-19T08:35:01+02:00: WARN: warning: unable to access non-existent chunk 52ca90d4283aa0ca15b53065e658ed2eb669101dc4c1bf1ce9aaaee630ff59f5, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
2022-04-19T08:35:02+02:00: WARN: warning: unable to access non-existent chunk 1990132a478de830307a01db26594bfd48c41833daacf4890296eda1c1109dbb, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
2022-04-19T08:35:03+02:00: WARN: warning: unable to access non-existent chunk c66c015e43b33da26863a5592d6982e4fb6d672860a23887ae88ac8c95d355ec, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
2022-04-19T08:35:03+02:00: WARN: warning: unable to access non-existent chunk af630f5241f95626e0082876b46ac501e9aa3eb8ea8ec36ca7b34668c595bd6f, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
2022-04-19T08:35:04+02:00: WARN: warning: unable to access non-existent chunk ed1eb58cb502a802fa34cee69908d6b0f2608a093f1b7a19654365a9d08a1bfe, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
2022-04-19T08:35:04+02:00: WARN: warning: unable to access non-existent chunk 5afd6b21ad8b466a9b08d79d88e8b9db070d19473cf55dde94eec35f1258c971, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
2022-04-19T08:35:04+02:00: WARN: warning: unable to access non-existent chunk 6ab0b32b8430e2dbde2f5f12f374b714df3ea5bb9a757d3b7ccb668779b1b158, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
2022-04-19T08:35:05+02:00: WARN: warning: unable to access non-existent chunk 90ac9e3f6d6c22e13735e3ca4ad02e2e2e0f487bbb8fd26dca2109125d222e46, required by "/STRAUSS/vm/2001/2022-04-13T16:15:47Z/drive-scsi0.img.fidx"
 
OK, there are some chunks missing. This normally can only happen when either someone (or something, a script for example) manually deletes chunks, or the underlying fs is wrong/broken.
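
You can double-check directly on the datastore whether such a chunk file is really gone, e.g. for the first digest from your GC log (assuming the datastore root is /STRAUSS, as the index paths suggest):

Code:
# chunks live under .chunks/<first 4 hex digits of the digest>/<full digest>
ls -l /STRAUSS/.chunks/2308/23082d6bb5b70fb3705144344460abf6e677e54a78a1b765e0eb8cd323320e91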
 
Dominik,
The storage is 100% OK, because there are many other backup storages on this storage without issues.
I think I should delete this specific backup storage on all our PBSs and recreate it. A lot of work...
 
The storage is 100% OK, because there are many other backup storages on this storage without issues.
I did not mean only the underlying storage, but also the filesystem; e.g. a wrong atime configuration can lead to problems (or if the atime implementation is broken). What fs do you use there?
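
To see how the datastore filesystem is mounted (and whether e.g. noatime is in play), something like this helps (assuming the datastore mountpoint, here /STRAUSS):

Code:
# show filesystem type and mount options for the datastore mountpoint
findmnt -o TARGET,SOURCE,FSTYPE,OPTIONS /STRAUSS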
 
