Issue with missing chunks on large backups.

Dariusz Bilewicz

Hi,

I have a strange issue with backing up large VMs, and even with file backups made via the PBS client, from machines with slow NL-SAS disks.

Two machines with about 2 TB of data, where backups take about 17 to 24 hours, get "missing chunks" or "bad chunks" errors in the verify task. I have another, smaller VM (60 GB of data) where I don't have this kind of issue. Is this a performance problem? Too much iowait on those client VMs? Or maybe it's an issue with my PBS VM, which has only 3 GB of RAM allocated?

2022-05-07T22:46:38+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '9f58ac47be29a17bcb84019a5cf984edc64a411cea451e323711e53024fa31ae' - No such file or directory (os error 2)
2022-05-07T22:46:38+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '73fee8b81cd2453ae5341af22a41c302aa07a18caf27d34d8683518ca6b362ae' - No such file or directory (os error 2)
2022-05-07T22:46:38+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '40d16df476cf5909110fa2e7ad0021adfd03d6198c7d254f3df25e7b95425f09' - No such file or directory (os error 2)
2022-05-07T22:46:38+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '35672e9aafbd88056ff949d3e9663099e293b7448d78279e7200dd2c90eb06e1' - No such file or directory (os error 2)
2022-05-07T22:46:38+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '1dddbf5bce98d02630583d341fe005da0cd9bf7ab993ea04fdc4f335423bd6cb' - No such file or directory (os error 2)
2022-05-07T22:46:39+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'b2ccd08bd57a28e5061ffbb802661567108a5f0ee1af34644949f35d0f442303' - No such file or directory (os error 2)
2022-05-07T22:46:39+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '6c6fbf1dda73d061e3d2034d0200b70d5202973b7d7c3ac43b82a3a5df1bd3f4' - No such file or directory (os error 2)
2022-05-07T22:46:39+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '79d4a41041508a5301510c0b1162c23e67efecff0a8e177bdff8d56090f12331' - No such file or directory (os error 2)
2022-05-07T22:46:39+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '2fa6947f094b6ecbc29bc924eee3f001afa0a05282584aba62dd0682cf0f3ad6' - No such file or directory (os error 2)
2022-05-07T22:46:39+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'c0bdb1bf85bf4f3b788d2075bc90d8620cc43ae6395c4fe42c1f17c75c9b5d06' - No such file or directory (os error 2)
2022-05-07T22:46:39+02:00: chunk 25204fdb68c62509b98f48c7821b78e126e1b475611bb8c4d2949819ab1a8d3e was marked as corrupt
2022-05-07T22:46:39+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '00c6f0f23a0dc1edaf37f4d1df1a9f7a642d6b1296c84903652798efe5fd5cd7' - No such file or directory (os error 2)
2022-05-07T22:46:39+02:00: chunk 25204fdb68c62509b98f48c7821b78e126e1b475611bb8c4d2949819ab1a8d3e was marked as corrupt
2022-05-07T22:46:39+02:00: chunk 25204fdb68c62509b98f48c7821b78e126e1b475611bb8c4d2949819ab1a8d3e was marked as corrupt

Trying to access files from the PBS web UI gives:
Bad Request (400)
store 'backup', unable to load chunk '5fac896918b483c047179cc98c2b6ff5f6d2de73fa380a731122db07304bbc13' - No such file or directory (os error 2)
 
Hmm, normally chunks don't vanish without any user intervention (read: manually deleting chunks), but we did have a few cases here in the forum where there was never any indication of why it happened...
What underlying storage do you use for the datastore? Is atime/relatime enabled? Did you recently (before those errors) run a garbage collection? If yes, can you post the task log?
 
Hi,

it's a fresh install of Backup Server 2.1-1 in a VM under Proxmox VE 7.1-1. The datastore is ext4 inside the VM, on a ZFS mirror on the PVE host. And yes, I'm running a scheduled GC while a backup is running. Is it possible that GC is removing those chunks? The backup is very long (almost 48 h for the file backup) due to factors like slow disks and the amount of data.

Proxmox Backup Server 2.1-1
2022-05-08T08:00:00+02:00: starting garbage collection on store backup
2022-05-08T08:00:00+02:00: task triggered by schedule '08:00'
2022-05-08T08:00:00+02:00: Start GC phase1 (mark used chunks)
2022-05-08T08:00:01+02:00: marked 5% (1 of 18 index files)
2022-05-08T08:03:10+02:00: marked 11% (2 of 18 index files)
2022-05-08T08:03:10+02:00: marked 16% (3 of 18 index files)
2022-05-08T08:03:11+02:00: marked 22% (4 of 18 index files)
2022-05-08T08:03:12+02:00: marked 27% (5 of 18 index files)
2022-05-08T08:03:27+02:00: marked 33% (6 of 18 index files)
2022-05-08T08:03:27+02:00: marked 38% (7 of 18 index files)
2022-05-08T08:03:30+02:00: marked 44% (8 of 18 index files)
2022-05-08T08:03:30+02:00: marked 50% (9 of 18 index files)
2022-05-08T08:03:42+02:00: marked 55% (10 of 18 index files)
2022-05-08T08:03:42+02:00: marked 61% (11 of 18 index files)
2022-05-08T08:03:48+02:00: marked 66% (12 of 18 index files)
2022-05-08T08:03:48+02:00: marked 72% (13 of 18 index files)
2022-05-08T08:03:48+02:00: marked 77% (14 of 18 index files)
2022-05-08T08:03:49+02:00: marked 83% (15 of 18 index files)
2022-05-08T08:03:49+02:00: marked 88% (16 of 18 index files)
2022-05-08T08:03:50+02:00: marked 94% (17 of 18 index files)
2022-05-08T08:03:50+02:00: marked 100% (18 of 18 index files)
2022-05-08T08:03:50+02:00: Start GC phase2 (sweep unused chunks)
2022-05-08T08:03:54+02:00: processed 1% (7602 chunks)
2022-05-08T08:03:58+02:00: processed 2% (15181 chunks)
2022-05-08T08:04:01+02:00: processed 3% (22757 chunks)
2022-05-08T08:04:11+02:00: processed 4% (30221 chunks)
2022-05-08T08:04:17+02:00: processed 5% (37602 chunks)
2022-05-08T08:04:23+02:00: processed 6% (45074 chunks)
2022-05-08T08:04:27+02:00: processed 7% (52585 chunks)
2022-05-08T08:04:32+02:00: processed 8% (60084 chunks)
2022-05-08T08:04:39+02:00: processed 9% (67546 chunks)
2022-05-08T08:04:45+02:00: processed 10% (75096 chunks)
2022-05-08T08:04:54+02:00: processed 11% (82665 chunks)
2022-05-08T08:04:57+02:00: processed 12% (90180 chunks)
2022-05-08T08:05:01+02:00: processed 13% (97618 chunks)
2022-05-08T08:05:05+02:00: processed 14% (105074 chunks)
2022-05-08T08:05:08+02:00: processed 15% (112670 chunks)
2022-05-08T08:05:12+02:00: processed 16% (120116 chunks)
2022-05-08T08:05:16+02:00: processed 17% (127603 chunks)
2022-05-08T08:05:21+02:00: processed 18% (135169 chunks)
2022-05-08T08:05:27+02:00: processed 19% (142589 chunks)
2022-05-08T08:05:33+02:00: processed 20% (149996 chunks)
2022-05-08T08:05:36+02:00: processed 21% (157562 chunks)
2022-05-08T08:05:40+02:00: processed 22% (164993 chunks)
2022-05-08T08:05:47+02:00: processed 23% (172544 chunks)
2022-05-08T08:05:52+02:00: processed 24% (180036 chunks)
2022-05-08T08:05:57+02:00: processed 25% (187603 chunks)
2022-05-08T08:06:02+02:00: processed 26% (195180 chunks)
2022-05-08T08:06:05+02:00: processed 27% (202662 chunks)
2022-05-08T08:06:11+02:00: processed 28% (210014 chunks)
2022-05-08T08:06:15+02:00: processed 29% (217484 chunks)
2022-05-08T08:06:19+02:00: processed 30% (224846 chunks)
2022-05-08T08:06:22+02:00: processed 31% (232291 chunks)
2022-05-08T08:06:26+02:00: processed 32% (239825 chunks)
2022-05-08T08:06:31+02:00: processed 33% (247301 chunks)
2022-05-08T08:06:36+02:00: processed 34% (254891 chunks)
2022-05-08T08:06:41+02:00: processed 35% (262393 chunks)
2022-05-08T08:06:47+02:00: processed 36% (269919 chunks)
2022-05-08T08:06:58+02:00: processed 37% (277470 chunks)
2022-05-08T08:07:07+02:00: processed 38% (284945 chunks)
2022-05-08T08:07:16+02:00: processed 39% (292602 chunks)
2022-05-08T08:07:23+02:00: processed 40% (300041 chunks)
2022-05-08T08:07:28+02:00: processed 41% (307467 chunks)
2022-05-08T08:07:35+02:00: processed 42% (314967 chunks)
2022-05-08T08:07:49+02:00: processed 43% (322475 chunks)
2022-05-08T08:08:00+02:00: processed 44% (329945 chunks)
2022-05-08T08:08:09+02:00: processed 45% (337652 chunks)
2022-05-08T08:08:13+02:00: processed 46% (345137 chunks)
2022-05-08T08:08:19+02:00: processed 47% (352502 chunks)
2022-05-08T08:08:24+02:00: processed 48% (359960 chunks)
2022-05-08T08:08:30+02:00: processed 49% (367349 chunks)
2022-05-08T08:08:49+02:00: processed 50% (374818 chunks)
2022-05-08T08:09:05+02:00: processed 51% (382269 chunks)
2022-05-08T08:09:15+02:00: processed 52% (389716 chunks)
2022-05-08T08:09:22+02:00: processed 53% (397130 chunks)
2022-05-08T08:09:28+02:00: processed 54% (404401 chunks)
2022-05-08T08:09:33+02:00: processed 55% (411957 chunks)
2022-05-08T08:09:38+02:00: processed 56% (419509 chunks)
2022-05-08T08:09:42+02:00: processed 57% (426998 chunks)
2022-05-08T08:09:50+02:00: processed 58% (434463 chunks)
2022-05-08T08:10:02+02:00: processed 59% (441891 chunks)
2022-05-08T08:10:08+02:00: processed 60% (449291 chunks)
2022-05-08T08:10:12+02:00: processed 61% (456743 chunks)
2022-05-08T08:10:16+02:00: processed 62% (464134 chunks)
2022-05-08T08:10:22+02:00: processed 63% (471666 chunks)
2022-05-08T08:10:28+02:00: processed 64% (479252 chunks)
2022-05-08T08:10:33+02:00: processed 65% (486680 chunks)
2022-05-08T08:10:39+02:00: processed 66% (494176 chunks)
2022-05-08T08:10:47+02:00: processed 67% (501628 chunks)
2022-05-08T08:10:52+02:00: processed 68% (509167 chunks)
2022-05-08T08:10:59+02:00: processed 69% (516627 chunks)
2022-05-08T08:11:03+02:00: processed 70% (524049 chunks)
2022-05-08T08:11:09+02:00: processed 71% (531463 chunks)
2022-05-08T08:11:14+02:00: processed 72% (539063 chunks)
2022-05-08T08:11:21+02:00: processed 73% (546565 chunks)
2022-05-08T08:11:27+02:00: processed 74% (554092 chunks)
2022-05-08T08:11:32+02:00: processed 75% (561686 chunks)
2022-05-08T08:11:37+02:00: processed 76% (569181 chunks)
2022-05-08T08:11:41+02:00: processed 77% (576557 chunks)
2022-05-08T08:11:47+02:00: processed 78% (584095 chunks)
2022-05-08T08:11:51+02:00: processed 79% (591496 chunks)
2022-05-08T08:11:58+02:00: processed 80% (598957 chunks)
2022-05-08T08:12:04+02:00: processed 81% (606544 chunks)
2022-05-08T08:12:11+02:00: processed 82% (614101 chunks)
2022-05-08T08:12:14+02:00: processed 83% (621652 chunks)
2022-05-08T08:12:19+02:00: processed 84% (629199 chunks)
2022-05-08T08:12:22+02:00: processed 85% (636732 chunks)
2022-05-08T08:12:28+02:00: processed 86% (644177 chunks)
2022-05-08T08:12:34+02:00: processed 87% (651741 chunks)
2022-05-08T08:12:40+02:00: processed 88% (659306 chunks)
2022-05-08T08:12:46+02:00: processed 89% (666844 chunks)
2022-05-08T08:12:50+02:00: processed 90% (674291 chunks)
2022-05-08T08:12:54+02:00: processed 91% (681778 chunks)
2022-05-08T08:12:58+02:00: processed 92% (689221 chunks)
2022-05-08T08:13:02+02:00: processed 93% (696793 chunks)
2022-05-08T08:13:07+02:00: processed 94% (704213 chunks)
2022-05-08T08:13:11+02:00: processed 95% (711747 chunks)
2022-05-08T08:13:15+02:00: processed 96% (719387 chunks)
2022-05-08T08:13:23+02:00: processed 97% (726869 chunks)
2022-05-08T08:13:28+02:00: processed 98% (734353 chunks)
2022-05-08T08:13:32+02:00: processed 99% (741864 chunks)
2022-05-08T08:13:36+02:00: Removed garbage: 474.969 GiB
2022-05-08T08:13:36+02:00: Removed chunks: 172171
2022-05-08T08:13:36+02:00: Pending removals: 1.462 TiB (in 487685 chunks)
2022-05-08T08:13:36+02:00: Original data usage: 471.663 GiB
2022-05-08T08:13:36+02:00: On-Disk usage: 247.322 GiB (52.44%)
2022-05-08T08:13:36+02:00: On-Disk chunks: 89337
2022-05-08T08:13:36+02:00: Deduplication factor: 1.91
2022-05-08T08:13:36+02:00: Average chunk size: 2.835 MiB
2022-05-08T08:13:37+02:00: TASK OK
 
Backup is very long due to factors like slow disks and amount of data.
Is it longer than 24 hours?

Is it possible that GC is removing those chunks?
That would definitely be a bug, but if you didn't manually delete chunks and the underlying storage works properly (let's assume for now it does), then that's the only thing that comes to mind.
 
Oh, another question: is atime/relatime properly enabled on your datastore? You can check with
Code:
zfs get all
(best post the output here)
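Since the datastore itself is ext4 inside the PBS VM (see the earlier post), the mount options of that filesystem are worth a look as well. A possible check from inside the PBS VM, assuming the datastore path /mnt/datastore/backup seen in the verify logs:
Code:
# show filesystem type and mount options of the datastore (look for noatime vs. relatime/atime)
findmnt -T /mnt/datastore/backup -o TARGET,SOURCE,FSTYPE,OPTIONS
This matters because, as hinted above, garbage collection decides what to sweep based on chunk access times.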
 
It's longer than 24 h for the file backup, but before this I was making VM backups that took 17 h, and I had a problem then as well, though that time it was an invalid dirty bitmap.

2022-05-03T21:01:32+02:00: starting new backup on datastore 'backup': "vm/100/2022-05-03T19:01:32Z"
2022-05-03T21:01:32+02:00: GET /previous: 400 Bad Request: no valid previous backup
2022-05-03T21:01:32+02:00: created new fixed index 1 ("vm/100/2022-05-03T19:01:32Z/drive-scsi0.img.fidx")
2022-05-03T21:01:32+02:00: add blob "/mnt/datastore/backup/vm/100/2022-05-03T19:01:32Z/qemu-server.conf.blob" (308 bytes, comp: 308)
2022-05-03T21:01:32+02:00: add blob "/mnt/datastore/backup/vm/100/2022-05-03T19:01:32Z/fw.conf.blob" (71 bytes, comp: 71)
2022-05-04T14:05:33+02:00: Upload statistics for 'drive-scsi0.img.fidx'
2022-05-04T14:05:34+02:00: UUID: ebc3c3800e524618a870556051cd447f
2022-05-04T14:05:34+02:00: Checksum: 13da9a478b9dd17ffbb3b79fd7184cdaa42f6c483a1393931d52dce93e4a383b
2022-05-04T14:05:34+02:00: Size: 3221225472000
2022-05-04T14:05:34+02:00: Chunk count: 768000
2022-05-04T14:05:34+02:00: Upload size: 2857793224704 (88%)
2022-05-04T14:05:34+02:00: Duplicates: 86649+102579 (24%)
2022-05-04T14:05:34+02:00: Compression: 74%
2022-05-04T14:05:34+02:00: successfully closed fixed index 1
2022-05-04T14:05:34+02:00: add blob "/mnt/datastore/backup/vm/100/2022-05-03T19:01:32Z/index.json.blob" (378 bytes, comp: 378)
2022-05-04T14:05:35+02:00: successfully finished backup
2022-05-04T14:05:35+02:00: backup finished successfully
2022-05-04T14:05:35+02:00: TASK OK

2022-05-04T14:05:35+02:00: Automatically verifying newly added snapshot
2022-05-04T14:05:35+02:00: verify backup:vm/100/2022-05-03T19:01:32Z
2022-05-04T14:05:35+02:00: check qemu-server.conf.blob
2022-05-04T14:05:35+02:00: check fw.conf.blob
2022-05-04T14:05:35+02:00: check drive-scsi0.img.fidx
2022-05-04T14:55:10+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '120daae091350f143b93eb2d423d36677451f072a1beefdb915460f9f93d6ffe' - blob too small (0 bytes).
2022-05-04T14:55:10+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/120d/120daae091350f143b93eb2d423d36677451f072a1beefdb915460f9f93d6ffe.0.bad"
2022-05-04T14:55:10+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '1213b0bbe6181b401ce2fcb4d832f8db55ad51f18e958b9e23a1a67e43a9a56f' - blob too small (0 bytes).
2022-05-04T14:55:10+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/1213/1213b0bbe6181b401ce2fcb4d832f8db55ad51f18e958b9e23a1a67e43a9a56f.0.bad"
2022-05-04T14:55:10+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '12e33064a06fa842d2996f2765bbc3a9aac0986ae0b9179cac37e4afd8041f51' - blob too small (0 bytes).
2022-05-04T14:55:10+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/12e3/12e33064a06fa842d2996f2765bbc3a9aac0986ae0b9179cac37e4afd8041f51.0.bad"
2022-05-04T14:57:34+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '1385274c00e1a13e821577b8aa494123c27878816a0fbc2184310403ba9500be' - blob too small (0 bytes).
2022-05-04T14:57:34+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/1385/1385274c00e1a13e821577b8aa494123c27878816a0fbc2184310403ba9500be.0.bad"
2022-05-04T14:57:34+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '13b7345e8ea9e5902f146c37c7cdcdbef332a3218803c5358f25b6996424898f' - blob too small (0 bytes).

2022-05-04T22:30:36+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/d134/d13407129fd17d36c8010cf3d68235d19893666816207fa9906801840d80f624.0.bad"
2022-05-04T22:33:18+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'd2aa4be4110535347070408ff47dbfd2954f8f323e2b0f6ea1fe78fb61a2ef7a' - blob too small (0 bytes).
2022-05-04T22:33:18+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/d2aa/d2aa4be4110535347070408ff47dbfd2954f8f323e2b0f6ea1fe78fb61a2ef7a.0.bad"
2022-05-04T23:01:56+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'dd67590f3e0ba3cc4adba30e4c4ceff6b2e465a8777074ecf0003b68353c2a03' - blob too small (0 bytes).
2022-05-04T23:01:56+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/dd67/dd67590f3e0ba3cc4adba30e4c4ceff6b2e465a8777074ecf0003b68353c2a03.0.bad"
2022-05-04T23:07:15+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'e025ba454f1ee71ffabcb655ea23ebac57010d99e5ea06374861982a96600a73' - blob too small (0 bytes).
2022-05-04T23:07:15+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/e025/e025ba454f1ee71ffabcb655ea23ebac57010d99e5ea06374861982a96600a73.0.bad"
2022-05-04T23:09:49+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'e0bf13f67a8b5fbf738e0b7822946313cb7ac80fec806f7982f355f8b90138a3' - blob too small (0 bytes).
2022-05-04T23:09:49+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/e0bf/e0bf13f67a8b5fbf738e0b7822946313cb7ac80fec806f7982f355f8b90138a3.0.bad"
2022-05-04T23:15:07+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'e2a7ce51b7ec3b198028f019b841e067a4563b2f40debb452807efb056ee0803' - blob too small (0 bytes).
2022-05-04T23:15:07+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/e2a7/e2a7ce51b7ec3b198028f019b841e067a4563b2f40debb452807efb056ee0803.0.bad"
2022-05-04T23:22:58+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'e6948e0aa27b475c1aea9f89cc5fb5778c31f0d098f7b35a320b62453aa352b0' - blob too small (0 bytes).
2022-05-04T23:22:58+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/e694/e6948e0aa27b475c1aea9f89cc5fb5778c31f0d098f7b35a320b62453aa352b0.0.bad"
2022-05-04T23:22:58+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'e5e89e30b784053aa44e09e5594da3863128dc801ffeeb61c04d5b154401d35b' - blob too small (0 bytes).
2022-05-04T23:22:58+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/e5e8/e5e89e30b784053aa44e09e5594da3863128dc801ffeeb61c04d5b154401d35b.0.bad"
2022-05-04T23:38:28+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'ec34612505f67905e256e96239982ea67d7369d9f73ffd1ba442b1a9b842a12f' - blob too small (0 bytes).
2022-05-04T23:38:28+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/ec34/ec34612505f67905e256e96239982ea67d7369d9f73ffd1ba442b1a9b842a12f.0.bad"
2022-05-04T23:43:43+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'edf477588aaf642ae5f4af46fcdd3daac5be3f944edd1bb237d9a5933eeebc57' - blob too small (0 bytes).
2022-05-04T23:43:43+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/edf4/edf477588aaf642ae5f4af46fcdd3daac5be3f944edd1bb237d9a5933eeebc57.0.bad"
2022-05-04T23:46:20+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'ef564903cf91a98954f77c6bc4e40da81055828ade989af36f4b6b5d1e08e920' - blob too small (0 bytes).
2022-05-04T23:46:20+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/ef56/ef564903cf91a98954f77c6bc4e40da81055828ade989af36f4b6b5d1e08e920.0.bad"
2022-05-04T23:46:20+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'ef201a0292feb97ea692e426324589ed5d3c36ca168e14b82c87f70d99343508' - blob too small (0 bytes).
2022-05-04T23:46:20+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/ef20/ef201a0292feb97ea692e426324589ed5d3c36ca168e14b82c87f70d99343508.0.bad"
2022-05-04T23:49:00+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'f0aa3f343f25ff421c608fa4ebe9d68d246ce75b08bd0ecca6e155b013b30e65' - blob too small (0 bytes).
2022-05-04T23:49:00+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/f0aa/f0aa3f343f25ff421c608fa4ebe9d68d246ce75b08bd0ecca6e155b013b30e65.0.bad"
2022-05-04T23:49:00+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'f0c957da1566572983a0a163fab2f179bf4d95d9e175110e9b08bc10c644bb92' - blob too small (0 bytes).
2022-05-04T23:49:00+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/f0c9/f0c957da1566572983a0a163fab2f179bf4d95d9e175110e9b08bc10c644bb92.0.bad"
2022-05-04T23:54:19+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'f2232bcc20965b67c3f61203d2ce3999f1701b552dd34ff90b9725401a2fe5ad' - blob too small (0 bytes).
2022-05-04T23:54:19+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/f223/f2232bcc20965b67c3f61203d2ce3999f1701b552dd34ff90b9725401a2fe5ad.0.bad"
2022-05-04T23:56:48+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'f3b300bdc75f780d00249162f9a0bad889dde0b9715c63383eaaaa89d5d5bcd6' - unable to parse raw blob - wrong magic
2022-05-04T23:56:48+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/f3b3/f3b300bdc75f780d00249162f9a0bad889dde0b9715c63383eaaaa89d5d5bcd6.0.bad"
2022-05-04T23:56:48+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'f37ee02ee8937330f4e016825d4e7a4d7a91ea433af523901e1c32416f1e1d6a' - blob too small (0 bytes).
2022-05-04T23:56:48+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/f37e/f37ee02ee8937330f4e016825d4e7a4d7a91ea433af523901e1c32416f1e1d6a.0.bad"
2022-05-05T00:04:51+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk 'f68c3bf79e42cda3d746d420a941d5ac38f908fb4f9906642ce06ca2a7777634' - unable to parse raw blob - wrong magic
2022-05-05T00:04:51+02:00: corrupted chunk renamed to "/mnt/datastore/backup/.chunks/f68c/f68c3bf79e42cda3d746d420a941d5ac38f908fb4f9906642ce06ca2a7777634.0.bad"
2022-05-05T00:28:12+02:00: verified 2037975.30/2725504.00 MiB in 37356.48 seconds, speed 54.55/72.96 MiB/s (79 errors)
2022-05-05T00:28:12+02:00: verify backup:vm/100/2022-05-03T19:01:32Z/drive-scsi0.img.fidx failed: chunks could not be verified
2022-05-05T00:28:12+02:00: TASK ERROR: verification failed - please check the log for details

I'm also making backups of a smaller VM (~60 GB), and there are no such problems there.
 
Oh, another question: is atime/relatime properly enabled on your datastore? You can check with
Code:
zfs get all
(best post the output here)
root@node-04:~# zfs get all
NAME PROPERTY VALUE SOURCE
zpool type filesystem -
zpool creation Tue May 3 14:43 2022 -
zpool used 3.63T -
zpool available 7.15T -
zpool referenced 96K -
zpool compressratio 1.00x -
zpool mounted yes -
zpool quota none default
zpool reservation none default
zpool recordsize 128K default
zpool mountpoint /zpool default
zpool sharenfs off default
zpool checksum on default
zpool compression on local
zpool atime on default
zpool devices on default
zpool exec on default
zpool setuid on default
zpool readonly off default
zpool zoned off default
zpool snapdir hidden default
zpool aclmode discard default
zpool aclinherit restricted default
zpool createtxg 1 -
zpool canmount on default
zpool xattr on default
zpool copies 1 default
zpool version 5 -
zpool utf8only off -
zpool normalization none -
zpool casesensitivity sensitive -
zpool vscan off default
zpool nbmand off default
zpool sharesmb off default
zpool refquota none default
zpool refreservation none default
zpool guid 6890056459681212495 -
zpool primarycache all default
zpool secondarycache all default
zpool usedbysnapshots 0B -
zpool usedbydataset 96K -
zpool usedbychildren 3.63T -
zpool usedbyrefreservation 0B -
zpool logbias latency default
zpool objsetid 54 -
zpool dedup off default
zpool mlslabel none default
zpool sync standard default
zpool dnodesize legacy default
zpool refcompressratio 1.00x -
zpool written 96K -
zpool logicalused 3.61T -
zpool logicalreferenced 42K -
zpool volmode default default
zpool filesystem_limit none default
zpool snapshot_limit none default
zpool filesystem_count none default
zpool snapshot_count none default
zpool snapdev hidden default
zpool acltype off default
zpool context none default
zpool fscontext none default
zpool defcontext none default
zpool rootcontext none default
zpool relatime off default
zpool redundant_metadata all default
zpool overlay on default
zpool encryption off default
zpool keylocation none default
zpool keyformat none default
zpool pbkdf2iters 0 default
zpool special_small_blocks 0 default
zpool/vm-100-disk-0 type volume -
zpool/vm-100-disk-0 creation Tue May 3 15:04 2022 -
zpool/vm-100-disk-0 used 5.32G -
zpool/vm-100-disk-0 available 7.15T -
zpool/vm-100-disk-0 referenced 5.32G -
zpool/vm-100-disk-0 compressratio 1.12x -
zpool/vm-100-disk-0 reservation none default
zpool/vm-100-disk-0 volsize 32G local
zpool/vm-100-disk-0 volblocksize 8K default
zpool/vm-100-disk-0 checksum on default
zpool/vm-100-disk-0 compression on inherited from zpool
zpool/vm-100-disk-0 readonly off default
zpool/vm-100-disk-0 createtxg 246 -
zpool/vm-100-disk-0 copies 1 default
zpool/vm-100-disk-0 refreservation none default
zpool/vm-100-disk-0 guid 12892103259511143658 -
zpool/vm-100-disk-0 primarycache all default
zpool/vm-100-disk-0 secondarycache all default
zpool/vm-100-disk-0 usedbysnapshots 0B -
zpool/vm-100-disk-0 usedbydataset 5.32G -
zpool/vm-100-disk-0 usedbychildren 0B -
zpool/vm-100-disk-0 usedbyrefreservation 0B -
zpool/vm-100-disk-0 logbias latency default
zpool/vm-100-disk-0 objsetid 259 -
zpool/vm-100-disk-0 dedup off default
zpool/vm-100-disk-0 mlslabel none default
zpool/vm-100-disk-0 sync standard default
zpool/vm-100-disk-0 refcompressratio 1.12x -
zpool/vm-100-disk-0 written 5.32G -
zpool/vm-100-disk-0 logicalused 5.93G -
zpool/vm-100-disk-0 logicalreferenced 5.93G -
zpool/vm-100-disk-0 volmode default default
zpool/vm-100-disk-0 snapshot_limit none default
zpool/vm-100-disk-0 snapshot_count none default
zpool/vm-100-disk-0 snapdev hidden default
zpool/vm-100-disk-0 context none default
zpool/vm-100-disk-0 fscontext none default
zpool/vm-100-disk-0 defcontext none default
zpool/vm-100-disk-0 rootcontext none default
zpool/vm-100-disk-0 redundant_metadata all default
zpool/vm-100-disk-0 encryption off default
zpool/vm-100-disk-0 keylocation none default
zpool/vm-100-disk-0 keyformat none default
zpool/vm-100-disk-0 pbkdf2iters 0 default
zpool/vm-100-disk-1 type volume -
zpool/vm-100-disk-1 creation Tue May 3 15:17 2022 -
zpool/vm-100-disk-1 used 3.62T -
zpool/vm-100-disk-1 available 7.15T -
zpool/vm-100-disk-1 referenced 3.62T -
zpool/vm-100-disk-1 compressratio 1.00x -
zpool/vm-100-disk-1 reservation none default
zpool/vm-100-disk-1 volsize 10T local
zpool/vm-100-disk-1 volblocksize 8K default
zpool/vm-100-disk-1 checksum on default
zpool/vm-100-disk-1 compression on inherited from zpool
zpool/vm-100-disk-1 readonly off default
zpool/vm-100-disk-1 createtxg 422 -
zpool/vm-100-disk-1 copies 1 default
zpool/vm-100-disk-1 refreservation none default
zpool/vm-100-disk-1 guid 5685380710093296415 -
zpool/vm-100-disk-1 primarycache all default
zpool/vm-100-disk-1 secondarycache all default
zpool/vm-100-disk-1 usedbysnapshots 0B -
zpool/vm-100-disk-1 usedbydataset 3.62T -
zpool/vm-100-disk-1 usedbychildren 0B -
zpool/vm-100-disk-1 usedbyrefreservation 0B -
zpool/vm-100-disk-1 logbias latency default
zpool/vm-100-disk-1 objsetid 299 -
zpool/vm-100-disk-1 dedup off default
zpool/vm-100-disk-1 mlslabel none default
zpool/vm-100-disk-1 sync standard default
zpool/vm-100-disk-1 refcompressratio 1.00x -
zpool/vm-100-disk-1 written 3.62T -
zpool/vm-100-disk-1 logicalused 3.60T -
zpool/vm-100-disk-1 logicalreferenced 3.60T -
zpool/vm-100-disk-1 volmode default default
zpool/vm-100-disk-1 snapshot_limit none default
zpool/vm-100-disk-1 snapshot_count none default
zpool/vm-100-disk-1 snapdev hidden default
zpool/vm-100-disk-1 context none default
zpool/vm-100-disk-1 fscontext none default
zpool/vm-100-disk-1 defcontext none default
zpool/vm-100-disk-1 rootcontext none default
zpool/vm-100-disk-1 redundant_metadata all default
zpool/vm-100-disk-1 encryption off default
zpool/vm-100-disk-1 keylocation none default
zpool/vm-100-disk-1 keyformat none default
zpool/vm-100-disk-1 pbkdf2iters 0 default
 
2022-05-04T14:55:10+02:00: can't verify chunk, load failed - store 'backup', unable to load chunk '120daae091350f143b93eb2d423d36677451f072a1beefdb915460f9f93d6ffe' - blob too small (0 bytes).

OK, these errors indicate a different problem: it seems the chunks do not get written to the storage properly? They are 0 bytes, which means they cannot be valid chunks, so they get marked as 'bad' and can no longer be found for 'normal' operations.

In that case I'd investigate whether the zpool / disks are OK (what exactly does your configuration look like? hardware, etc.)
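For anyone following along, a quick health-check sketch for the host side could look like this (the pool name zpool is taken from the output above; the disk device is a placeholder):
Code:
# on the host: check pool health and look for read/write/checksum errors
zpool status -v zpool
# check SMART data of each backing disk (replace /dev/sdX with the real device)
smartctl -a /dev/sdX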
 
The backup server is running on:

4 x Intel(R) Core(TM) i3-6100 CPU @ 3.70GHz (1 Socket) with 16 GB RAM
Datastore is on Western Digital Ultrastar DC HC520 12TB disks.
Not really a speed monster, but smaller backups are working fine.
 
Can you post the VM config of the PBS VM too?
 
agent: 1
boot: order=virtio0;ide2;net0
cores: 4
ide2: local:iso/proxmox-backup-server_2.1-1.iso,media=cdrom
memory: 8192
meta: creation-qemu=6.1.0,ctime=1651583046
name: PBS
net0: virtio=FE:8C:98:9D:D2:67,bridge=vmbr1,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=bef956e5-c379-4cf4-90a9-25c6db94967f
sockets: 1
virtio0: local-data:vm-100-disk-0,discard=on,size=32G
virtio1: local-data:vm-100-disk-1,discard=on,size=10T
vmgenid: 5a6baa24-5507-488f-868e-05a2bb7bd166
 
Can you post the task list of your PBS?
Code:
proxmox-backup-manager task list --all

The only way I can see this happening is if your PBS VM crashed at some point. If yes, can you tell me when that was?
 
It's not crashing. All backups finish successfully, but accessing them later is impossible.

root@pbs:~# proxmox-backup-manager task list --all
┌──────────────────────────┬──────────────────────────┬─────────────────────────
│ starttime │ endtime │ upid
╞══════════════════════════╪══════════════════════════╪═════════════════════════
│ Thu May 5 14:01:36 2022 │ Sat May 7 17:38:29 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Thu May 5 14:56:32 2022 │ Sat May 7 09:48:08 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Fri May 6 10:31:04 2022 │ Fri May 6 13:08:44 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Fri May 6 12:00:00 2022 │ Fri May 6 12:00:03 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Fri May 6 19:47:03 2022 │ Fri May 6 20:19:35 2022 │ UPID:pbs:0000022D:00000B
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Fri May 6 20:39:36 2022 │ Fri May 6 20:52:29 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Fri May 6 20:51:43 2022 │ Fri May 6 20:53:28 2022 │ UPID:pbs:0000022D:00000B
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 00:00:01 2022 │ Sat May 7 00:00:15 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 00:00:01 2022 │ Sat May 7 01:46:50 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 00:00:02 2022 │ Sat May 7 00:12:49 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 04:41:51 2022 │ Sat May 7 04:41:58 2022 │ UPID:pbs:0000E58C:01C3EC
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 06:00:02 2022 │ Sat May 7 06:13:20 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 08:00:00 2022 │ Sat May 7 08:28:47 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 09:48:07 2022 │ Sat May 7 23:59:31 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 12:00:00 2022 │ Sat May 7 12:00:01 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 12:00:02 2022 │ Sat May 7 12:09:58 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 16:17:33 2022 │ Sat May 7 16:17:49 2022 │ UPID:pbs:0000022D:00000B
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 17:38:29 2022 │ Sun May 8 00:48:31 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 18:00:01 2022 │ Sat May 7 18:14:56 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 23:03:26 2022 │ Sat May 7 23:05:17 2022 │ UPID:pbs:0000022D:00000B
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 23:51:43 2022 │ Sun May 8 00:00:04 2022 │ UPID:pbs:0000022D:00000B
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sat May 7 23:58:07 2022 │ Sat May 7 23:58:26 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 00:00:00 2022 │ Sun May 8 00:00:02 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 00:00:00 2022 │ Sun May 8 00:11:29 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 00:00:00 2022 │ Sun May 8 00:48:35 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 00:00:02 2022 │ Sun May 8 00:48:19 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 00:06:44 2022 │ Sun May 8 00:07:40 2022 │ UPID:pbs:0000022D:00000B
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 00:15:39 2022 │ Sun May 8 00:16:41 2022 │ UPID:pbs:0000022D:00000B
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 00:17:05 2022 │ Sun May 8 00:22:52 2022 │ UPID:pbs:0000022D:00000B
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 00:21:16 2022 │ Sun May 8 00:57:13 2022 │ UPID:pbs:0000022D:00000B
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 00:27:20 2022 │ Sun May 8 00:27:29 2022 │ UPID:pbs:000002A6:00000C
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 01:43:08 2022 │ Sun May 8 01:44:00 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 01:44:35 2022 │ Mon May 9 08:40:06 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 04:56:43 2022 │ Sun May 8 04:56:50 2022 │ UPID:pbs:00000515:001223
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 06:00:00 2022 │ Sun May 8 06:13:21 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 06:00:01 2022 │ Mon May 9 08:40:02 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 08:00:00 2022 │ Sun May 8 08:13:37 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 08:30:00 2022 │ Sun May 8 09:34:12 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 12:00:00 2022 │ Sun May 8 12:00:00 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 12:00:01 2022 │ Sun May 8 12:18:57 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Sun May 8 18:00:01 2022 │ Sun May 8 18:13:53 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Mon May 9 00:00:00 2022 │ Mon May 9 00:00:18 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Mon May 9 00:00:01 2022 │ Mon May 9 00:13:40 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Mon May 9 01:33:09 2022 │ Mon May 9 01:33:11 2022 │ UPID:pbs:0000143F:008356
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Mon May 9 06:00:00 2022 │ Mon May 9 06:14:00 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Mon May 9 08:00:00 2022 │ Mon May 9 08:01:28 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Mon May 9 08:30:00 2022 │ Mon May 9 08:51:36 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Mon May 9 08:40:14 2022 │ │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Mon May 9 12:00:01 2022 │ Mon May 9 12:14:21 2022 │ UPID:pbs:000002DF:000026
├──────────────────────────┼──────────────────────────┼─────────────────────────
│ Mon May 9 12:24:56 2022 │ │ UPID:pbs:0000022B:000023
└──────────────────────────┴──────────────────────────┴─────────────────────────

I think I will try PBS v1 and see what happens.
 
OK, the task list output is cut off; can you redirect it into a file and upload it?
In that case I'd prefer the JSON output format:
Code:
proxmox-backup-manager task list --all --output-format json > /tmp/tasklogs.json
It's not crashing. All backups finish successfully, but accessing them later is impossible.
I meant the PBS server, not the VM that is backed up, and at any time in the past, not only during backups.
Also, a 'stop' from the PVE UI would have the same effect as a crash.

The theory we have is that the chunks only get written to the VM's cache and never to disk, and that the VM then hard-stops (without flushing the cache to disk).
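For readers checking the same theory on their own setup, one way to see which cache mode the PBS VM's disks use is to look at the disk lines of the VM config on the PVE host; no cache= option means the default ('No cache'). A minimal sketch, assuming VMID 100 as in this thread:
Code:
# on the PVE host: list the PBS VM's disk lines and any cache= setting
qm config 100 | grep -E '^(virtio|scsi|sata|ide)[0-9]'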
 
I am having the same issue "can't verify chunk, load failed - store 'backup', unable to load chunk '73fee8b81cd2453ae5341af22a41c302aa07a18caf27d34d8683518ca6b362ae' - No such file or directory (os error 2)"
And then my sync job to the offsite PBS fails with a similar error.
 
I am having the same issue "can't verify chunk, load failed - store 'backup', unable to load chunk '73fee8b81cd2453ae5341af22a41c302aa07a18caf27d34d8683518ca6b362ae' - No such file or directory (os error 2)"
And then my sync job to the offsite PBS fails with a similar error.
Did your PBS crash or hard stop (i.e., no proper shutdown) at any time in the past? Alternatively, does the datastore live on a network share?
These are the only possible explanations we have so far (aside from someone manually deleting chunks).
 
No, PBS did not crash or stop.
No, the datastore is not a network share; it is a RAID10 array of 20 x 1 TB HDDs behind an LSI 9260 in a Supermicro chassis with 64 GB RAM and dual 6-core Intel E5-2620 CPUs.
I did delete some backups about two weeks ago - but I did not delete chunks.
Thanks - Ron
Yes, we have purchased subscriptions for these servers.
 
Hi,

Has anyone resolved this problem?

I have the same issue here when trying to restore a VM backup. I restore the backup to another VMID and get this error:

Code:
Formatting '/mnt/pve/backup/images/147/vm-147-cloudinit.raw', fmt=raw size=4194304
new volume ID is 'backup:147/vm-147-cloudinit.raw'
Formatting '/mnt/pve/backup/images/147/vm-147-disk-0.raw', fmt=raw size=21474836480
new volume ID is 'backup:147/vm-147-disk-0.raw'
restore proxmox backup image: /usr/bin/pbs-restore --repository b101@pbs@172.16.11.240:b101 vm/1030/2022-05-28T15:42:36Z drive-virtio0.img.fidx /mnt/pve/backup/images/147/vm-147-disk-0.raw --verbose --format raw --skip-zero
connecting to repository 'b101@pbs@172.16.11.240:b101'
open block backend for target '/mnt/pve/backup/images/147/vm-147-disk-0.raw'
starting to restore snapshot 'vm/1030/2022-05-28T15:42:36Z'
download and verify backup index
restore failed: reading file "/backup/proxmox/b101/.chunks/b8cd/b8cd5efb4254adf2e0d71341834358d3fb6b2a5547d8131ea641cb503cefcb52" failed: No such file or directory (os error 2)
temporary volume 'backup:147/vm-147-disk-0.raw' sucessfuly removed
TASK ERROR: command '/usr/bin/pbs-restore --repository b101@pbs@172.16.11.240:b101 vm/1030/2022-05-28T15:42:36Z drive-virtio0.img.fidx /mnt/pve/backup/images/147/vm-147-disk-0.raw --verbose --format raw --skip-zero' failed: exit code 255

When I try to verify again, I get this error:

Code:
2022-06-02T18:48:42+07:00: can't verify chunk, load failed - store 'b101', unable to load chunk '2898c99a193e0c0d07bd04ee43901d766ff99952108e0fa37f55e27534286d7e' - No such file or directory (os error 2)
2022-06-02T18:48:42+07:00: can't verify chunk, load failed - store 'b101', unable to load chunk '23ec63da5c7bb842a19bd0ac286422d97e2c12ce2d3de168a814f23dc24e84ab' - No such file or directory (os error 2)
2022-06-02T18:48:42+07:00: can't verify chunk, load failed - store 'b101', unable to load chunk '262c83409ab8cc80ae9337c2734057deea78f543960963e53a242c734edd53f3' - No such file or directory (os error 2)
2022-06-02T18:48:42+07:00: can't verify chunk, load failed - store 'b101', unable to load chunk '2fe3f9b4a68907d2b0c11c9e4e1c711429f26655041b61c8db627755e7d3600b' - No such file or directory (os error 2)
2022-06-02T18:48:42+07:00: can't verify chunk, load failed - store 'b101', unable to load chunk '0fd7d49e273c2973dee8f8595f29bfcbf3a5a74cc9ae39be308569886986ed46' - No such file or directory (os error 2)
2022-06-02T18:48:42+07:00: chunk 661ba39c4cc7bc871e1919482882e09fdb0860bad384b9b533dcda107c15dffe was marked as corrupt
2022-06-02T18:48:42+07:00: can't verify chunk, load failed - store 'b101', unable to load chunk 'a183cd0285b14b4845bd5f18fad600c45fab05a512051fb70ce163592ad67da7' - No such file or directory (os error 2)
2022-06-02T18:48:42+07:00:   verified 414.63/1440.00 MiB in 172.54 seconds, speed 2.40/8.35 MiB/s (2103 errors)
2022-06-02T18:48:42+07:00: verify b101:vm/1030/2022-05-28T15:42:36Z/drive-virtio0.img.fidx failed: chunks could not be verified
2022-06-02T18:48:42+07:00: Failed to verify the following snapshots/groups:
2022-06-02T18:48:42+07:00:     vm/1030/2022-05-28T15:42:36Z
2022-06-02T18:48:42+07:00: TASK ERROR: verification failed - please check the log for details

Thanks and regards!
 
In my case it looks like, if a backup task takes longer than 24 h and GC runs before the backup is finished, GC cleans up chunks from the running backup that are older than 24 h, which ultimately results in a failed backup when I try to verify or use it.
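Since the GC sweep is driven by chunk access times (hence the atime questions earlier in the thread), a rough manual way to check whether atime updates actually stick on the datastore filesystem could look like this; it is only a sketch, run inside the PBS VM, with atime-probe as a throwaway test file and /mnt/datastore/backup as the datastore path from the logs:
Code:
cd /mnt/datastore/backup
echo test > atime-probe
touch -a -d "2 days ago" atime-probe          # artificially age the access time
stat -c 'atime before read: %x' atime-probe
cat atime-probe > /dev/null                   # a read should refresh the old atime, even with relatime
stat -c 'atime after read:  %x' atime-probe
rm atime-probe
If the access time does not move forward after the read, atime updates are not being persisted on that filesystem, which would be a problem for garbage collection.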
 
