error: inserting chunk on store 'data_zfs' failed

axl.thm

Hello,

I have a PVE (Proxmox Virtual Environment) server that backs up its virtual machines to a PBS (Proxmox Backup Server).

The datastore on this PBS server had reached 100% disk usage. We cleaned up and purged the datastore to make enough room for new backups. However, new backup attempts now fail with an error, making it impossible to back up the virtual machines.

Here is the error that occurs during the backup attempt:

INFO: starting new backup job: vzdump 102 --notes-template '{{guestname}}' --all 0 --storage test --mailnotification always --node bichat20 --mode snapshot
INFO: Starting Backup of VM 102 (qemu)
INFO: Backup started at 2024-07-29 11:01:23
INFO: status = running
INFO: VM Name: uptime-kuma
INFO: include disk 'scsi0' 'local:102/vm-102-disk-0.qcow2' 32G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/102/2024-07-29T09:01:23Z'
ERROR: VM 102 qmp command 'backup' failed - backup register image failed: command error: inserting chunk on store 'data_zfs' failed for bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8 - mkstemp "/mnt/datastore/data_zfs/.chunks/bb9f/bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8.tmp_XXXXXX" failed: EACCES: Permission denied
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 102 failed - VM 102 qmp command 'backup' failed - backup register image failed: command error: inserting chunk on store 'data_zfs' failed for bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8 - mkstemp "/mnt/datastore/data_zfs/.chunks/bb9f/bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8.tmp_XXXXXX" failed: EACCES: Permission denied
INFO: Failed at 2024-07-29 11:01:23
INFO: Backup job finished with errors
TASK ERROR: job errors

I can't find any solutions on forums and other support channels. Thank you in advance for any help you can provide.

Axel.
 
Hi,
We cleaned up and purged the datastore to make enough room for new backups
How exactly did you clean the datastore? Maybe you changed the ownership of some or all folders in the datastore, which would explain the error you get:
ERROR: VM 102 qmp command 'backup' failed - backup register image failed: command error: inserting chunk on store 'data_zfs' failed for bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8 - mkstemp "/mnt/datastore/data_zfs/.chunks/bb9f/bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8.tmp_XXXXXX" failed: EACCES: Permission denied
 
I performed a garbage collection of the datastore on the PBS web console.
Did the garbage collection actually remove chunks? Maybe the removal is still pending? Please post the full task log for that garbage collection run.
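
If copying from the task viewer is cumbersome, you can also grab the log on the PBS host itself, roughly like this (the UPID below is only a placeholder, take the real one from the task list output):

# list recent tasks and note the UPID of the garbage collection run
proxmox-backup-manager task list

# print the full log of that task (replace the placeholder with the real UPID)
proxmox-backup-manager task log 'UPID:pbs:...'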
 
Here is the log:

002c8534dfb778d8532d743d1fb3a59ed7b34ef1ecd6d626d47df3484a856771, required by "/mnt/datastore/data_zfs/vm/1013/2024-07-10T18:44:04Z/drive-scsi0.img.fidx"
2024-07-24T18:33:16+02:00: WARN: warning: unable to access non-existent chunk 00539e8671a2fd728f46dc5ff2c68225e7ebd83fb692e21ce7ee0c4c808442e3, required by "/mnt/datastore/data_zfs/vm/1013/2024-07-10T18:44:04Z/drive-scsi0.img.fidx"
2024-07-24T18:33:16+02:00: WARN: warning: unable to access non-existent chunk 00101a5c423066dd2cccd96e5a6a721d747aa3a7bbea326b4c0bfc5a3284bdc1, required by "/mnt/datastore/data_zfs/vm/1013/2024-07-10T18:44:04Z/drive-scsi0.img.fidx"
2024-07-24T18:33:16+02:00: marked 100% (236 of 236 index files)
2024-07-24T18:33:16+02:00: found (and marked) 1 index files outside of expected directory scheme
2024-07-24T18:33:16+02:00: Start GC phase2 (sweep unused chunks)
2024-07-24T18:50:52+02:00: processed 1% (63853 chunks)
2024-07-24T19:11:00+02:00: processed 2% (136845 chunks)
2024-07-24T19:31:41+02:00: processed 3% (210202 chunks)
2024-07-24T19:52:28+02:00: processed 4% (283631 chunks)
2024-07-24T20:20:46+02:00: processed 5% (356423 chunks)
2024-07-24T20:39:36+02:00: processed 6% (429440 chunks)
2024-07-24T20:55:38+02:00: processed 7% (502374 chunks)
2024-07-24T21:15:20+02:00: processed 8% (575788 chunks)
2024-07-24T21:38:03+02:00: processed 9% (649019 chunks)
2024-07-24T22:11:12+02:00: processed 10% (721805 chunks)
2024-07-24T22:37:05+02:00: processed 11% (795003 chunks)
2024-07-24T22:57:35+02:00: processed 12% (868125 chunks)
2024-07-24T23:17:06+02:00: processed 13% (941285 chunks)
2024-07-24T23:36:54+02:00: processed 14% (1014340 chunks)
2024-07-24T23:57:09+02:00: processed 15% (1087634 chunks)
2024-07-25T00:18:12+02:00: processed 16% (1160256 chunks)
2024-07-25T00:39:10+02:00: processed 17% (1233420 chunks)
2024-07-25T01:01:14+02:00: processed 18% (1306729 chunks)
2024-07-25T01:22:38+02:00: processed 19% (1379444 chunks)
2024-07-25T01:42:17+02:00: processed 20% (1452555 chunks)
2024-07-25T02:03:52+02:00: processed 21% (1525843 chunks)
2024-07-25T02:33:02+02:00: processed 22% (1598893 chunks)
2024-07-25T02:54:10+02:00: processed 23% (1672126 chunks)
2024-07-25T03:15:21+02:00: processed 24% (1745107 chunks)
2024-07-25T03:34:52+02:00: processed 25% (1818173 chunks)
2024-07-25T03:54:02+02:00: processed 26% (1891926 chunks)
2024-07-25T04:12:16+02:00: processed 27% (1964788 chunks)
2024-07-25T04:29:13+02:00: processed 28% (2038024 chunks)
2024-07-25T04:45:16+02:00: processed 29% (2111208 chunks)
2024-07-25T05:05:29+02:00: processed 30% (2184348 chunks)
2024-07-25T05:27:12+02:00: processed 31% (2257252 chunks)
2024-07-25T05:48:52+02:00: processed 32% (2330529 chunks)
2024-07-25T06:10:25+02:00: processed 33% (2403225 chunks)
2024-07-25T06:26:45+02:00: processed 34% (2476110 chunks)
2024-07-25T06:42:37+02:00: processed 35% (2549377 chunks)
2024-07-25T07:02:48+02:00: processed 36% (2622398 chunks)
2024-07-25T07:24:40+02:00: processed 37% (2695423 chunks)
2024-07-25T07:47:43+02:00: processed 38% (2768246 chunks)
2024-07-25T08:17:45+02:00: processed 39% (2841425 chunks)
2024-07-25T08:39:01+02:00: processed 40% (2915217 chunks)
2024-07-25T08:57:57+02:00: processed 41% (2988333 chunks)
2024-07-25T09:17:18+02:00: processed 42% (3061768 chunks)
2024-07-25T09:36:10+02:00: processed 43% (3134320 chunks)
2024-07-25T09:55:30+02:00: processed 44% (3207374 chunks)
2024-07-25T10:11:21+02:00: processed 45% (3280215 chunks)
2024-07-25T10:29:14+02:00: processed 46% (3353343 chunks)
2024-07-25T10:50:52+02:00: processed 47% (3426605 chunks)
2024-07-25T11:12:07+02:00: processed 48% (3500435 chunks)
2024-07-25T11:28:49+02:00: processed 49% (3573381 chunks)
2024-07-25T11:46:21+02:00: processed 50% (3646403 chunks)
2024-07-25T12:04:18+02:00: processed 51% (3719719 chunks)
2024-07-25T12:21:35+02:00: processed 52% (3792639 chunks)
2024-07-25T12:37:49+02:00: processed 53% (3866062 chunks)
2024-07-25T12:59:35+02:00: processed 54% (3939301 chunks)
2024-07-25T13:18:32+02:00: processed 55% (4012345 chunks)
2024-07-25T13:43:01+02:00: processed 56% (4085376 chunks)
2024-07-25T14:04:31+02:00: processed 57% (4158805 chunks)
2024-07-25T14:23:13+02:00: processed 58% (4232512 chunks)
2024-07-25T14:41:17+02:00: processed 59% (4305640 chunks)
2024-07-25T14:58:47+02:00: processed 60% (4378583 chunks)
2024-07-25T15:14:49+02:00: processed 61% (4452093 chunks)
2024-07-25T15:30:29+02:00: processed 62% (4525492 chunks)
2024-07-25T15:47:56+02:00: processed 63% (4598745 chunks)
2024-07-25T16:09:05+02:00: processed 64% (4672148 chunks)
2024-07-25T16:31:19+02:00: processed 65% (4745365 chunks)
2024-07-25T16:53:11+02:00: processed 66% (4818718 chunks)
2024-07-25T17:13:32+02:00: processed 67% (4892271 chunks)
2024-07-25T17:29:24+02:00: processed 68% (4965190 chunks)
2024-07-25T17:47:58+02:00: processed 69% (5037808 chunks)
2024-07-25T18:09:39+02:00: processed 70% (5111279 chunks)
2024-07-25T18:30:32+02:00: processed 71% (5184719 chunks)
2024-07-25T18:49:32+02:00: processed 72% (5257886 chunks)
2024-07-25T19:06:47+02:00: processed 73% (5331329 chunks)
2024-07-25T19:24:38+02:00: processed 74% (5404130 chunks)
2024-07-25T19:41:09+02:00: processed 75% (5477427 chunks)
2024-07-25T19:57:00+02:00: processed 76% (5550624 chunks)
2024-07-25T20:24:35+02:00: processed 77% (5624089 chunks)
2024-07-25T20:43:12+02:00: processed 78% (5698332 chunks)
2024-07-25T20:59:03+02:00: processed 79% (5771056 chunks)
2024-07-25T21:20:02+02:00: processed 80% (5844707 chunks)
2024-07-25T21:39:00+02:00: processed 81% (5917807 chunks)
2024-07-25T21:57:14+02:00: processed 82% (5991188 chunks)
2024-07-25T22:15:24+02:00: processed 83% (6064442 chunks)
2024-07-25T22:30:14+02:00: processed 84% (6137516 chunks)
2024-07-25T22:44:03+02:00: processed 85% (6210775 chunks)
2024-07-25T22:57:32+02:00: processed 86% (6284080 chunks)
2024-07-25T23:10:52+02:00: processed 87% (6357981 chunks)
2024-07-25T23:24:11+02:00: processed 88% (6431370 chunks)
2024-07-25T23:37:22+02:00: processed 89% (6504641 chunks)
2024-07-25T23:50:31+02:00: processed 90% (6577931 chunks)
2024-07-26T00:04:01+02:00: processed 91% (6650907 chunks)
2024-07-26T00:18:34+02:00: processed 92% (6724438 chunks)
2024-07-26T00:33:03+02:00: processed 93% (6797223 chunks)
2024-07-26T00:46:53+02:00: processed 94% (6870387 chunks)
2024-07-26T00:59:45+02:00: processed 95% (6943476 chunks)
2024-07-26T01:12:44+02:00: processed 96% (7016690 chunks)
2024-07-26T01:25:51+02:00: processed 97% (7090305 chunks)
2024-07-26T01:38:37+02:00: processed 98% (7163443 chunks)
2024-07-26T01:51:28+02:00: processed 99% (7237159 chunks)
2024-07-26T02:04:15+02:00: Removed garbage: 11.678 TiB
2024-07-26T02:04:15+02:00: Removed chunks: 5965591
2024-07-26T02:04:15+02:00: Original data usage: 56.338 TiB
2024-07-26T02:04:15+02:00: On-Disk usage: 3.159 TiB (5.61%)
2024-07-26T02:04:15+02:00: On-Disk chunks: 1344765
2024-07-26T02:04:15+02:00: Deduplication factor: 17.83
2024-07-26T02:04:15+02:00: Average chunk size: 2.463 MiB
2024-07-26T02:04:15+02:00: TASK WARNINGS: 112372
 
WARN: warning: unable to access non-existent chunk
It seems you corrupted your datastore somehow. Did you manually clean up some chunks or otherwise interact with the datastore files and folders directly?

Removed garbage: 11.678 TiB
Well, at least it seems that your chunks were cleaned up as expected.
Therefore, please check the ownership and permissions of the chunk folder involved in the error by running ls -la /mnt/datastore/data_zfs/.chunks/bb9f. The folder and the files contained within should be owned by the user and group backup.
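
For reference, the check and the fix could look something like this, assuming the datastore root really is /mnt/datastore/data_zfs and that your PBS services run as the default user and group backup:

# every folder and chunk file here should show backup:backup as owner and group
ls -la /mnt/datastore/data_zfs/.chunks/bb9f

# if something is owned by root (or anyone else), hand the whole datastore
# back to the backup user; this can take a while on a large chunk store
chown -R backup:backup /mnt/datastore/data_zfs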
 
I just performed a manual backup of one of my virtual machines on my PVE. Here is the error that occurred:

INFO: starting new backup job: vzdump 1036 --storage test --node bichat20 --mode snapshot --remove 0 --notes-template '{{guestname}}'
INFO: Starting Backup of VM 1036 (qemu)
INFO: Backup started at 2024-07-29 12:29:21
INFO: status = running
INFO: VM Name: srv
INFO: include disk 'scsi0' 'local:1036/vm-1036-disk-1.qcow2' 250G
INFO: include disk 'efidisk0' 'local:1036/vm-1036-disk-0.qcow2' 528K
INFO: include disk 'tpmstate0' 'local:1036/vm-1036-disk-2.raw' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/1036/2024-07-29T10:29:21Z'
INFO: attaching TPM drive to QEMU for backup
INFO: skipping guest-agent 'fs-freeze', agent configured but not running?
INFO: started backup task 'd74b841c-ea56-4ed0-a243-dbb6d126db33'
INFO: resuming VM again
INFO: efidisk0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: tpmstate0-backup: dirty-bitmap status: created new
INFO: 0% (1.0 GiB of 250.0 GiB) in 3s, read: 350.7 MiB/s, write: 324.0 MiB/s
INFO: 0% (1.9 GiB of 250.0 GiB) in 7s, read: 220.0 MiB/s, write: 220.0 MiB/s
ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: inserting chunk on store 'data_zfs' failed for 00192760055e305a97be6e12f35990eb1d399e973165dbdfa05afd68101ee27d - mkstemp "/mnt/datastore/data_zfs/.chunks/0019/00192760055e305a97be6e12f35990eb1d399e973165dbdfa05afd68101ee27d.tmp_XXXXXX" failed: ENOENT: No such file or directory
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 1036 failed - backup write data failed: command error: write_data upload error: pipelined request failed: inserting chunk on store 'data_zfs' failed for 00192760055e305a97be6e12f35990eb1d399e973165dbdfa05afd68101ee27d - mkstemp "/mnt/datastore/data_zfs/.chunks/0019/00192760055e305a97be6e12f35990eb1d399e973165dbdfa05afd68101ee27d.tmp_XXXXXX" failed: ENOENT: No such file or directory
INFO: Failed at 2024-07-29 12:29:31
INFO: Backup job finished with errors
TASK ERROR: job errors

Here is the result of the command ls -la /mnt/datastore/data_zfs/.chunks/bb9f:
(screenshot attached: 1722249300137.png)
 
What bothers me the most is that the backup seems to start but then gets interrupted by this error:

INFO: 1% (1.6 GiB of 128.0 GiB) in 3s, read: 541.3 MiB/s, write: 517.3 MiB/s
INFO: 2% (2.7 GiB of 128.0 GiB) in 9s, read: 196.0 MiB/s, write: 196.0 MiB/s
INFO: 3% (3.9 GiB of 128.0 GiB) in 16s, read: 174.3 MiB/s, write: 174.3 MiB/s
INFO: 4% (5.1 GiB of 128.0 GiB) in 25s, read: 138.7 MiB/s, write: 138.7 MiB/s
INFO: 4% (5.3 GiB of 128.0 GiB) in 27s, read: 72.0 MiB/s, write: 72.0 MiB/s
ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: inserting chunk on store 'data_zfs' failed for 0008cbb9a30030b50a5965aa505f8a9ef13367d3674cd098a035688b8b6cdbb1 - mkstemp "/mnt/datastore/data_zfs/.chunks/0008/0008cbb9a30030b50a5965aa505f8a9ef13367d3674cd098a035688b8b6cdbb1.tmp_XXXXXX" failed: ENOENT: No such file or directory
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 1028 failed - backup write data failed: command error: write_data upload error: pipelined request failed: inserting chunk on store 'data_zfs' failed for 0008cbb9a30030b50a5965aa505f8a9ef13367d3674cd098a035688b8b6cdbb1 - mkstemp "/mnt/datastore/data_zfs/.chunks/0008/0008cbb9a30030b50a5965aa505f8a9ef13367d3674cd098a035688b8b6cdbb1.tmp_XXXXXX" failed: ENOENT: No such file or directory
INFO: Failed at 2024-07-29 14:27:36
INFO: Backup job finished with errors
TASK ERROR: job errors
 
ERROR: Backup of VM 1028 failed - backup write data failed: command error: write_data upload error: pipelined request failed: inserting chunk on store 'data_zfs' failed for 0008cbb9a30030b50a5965aa505f8a9ef13367d3674cd098a035688b8b6cdbb1 - mkstemp "/mnt/datastore/data_zfs/.chunks/0008/0008cbb9a30030b50a5965aa505f8a9ef13367d3674cd098a035688b8b6cdbb1.tmp_XXXXXX" failed: ENOENT: No such file or directory
But now you have a different error on a different folder. So you probably do not have that folder in the chunk store anymore.
Let me ask again, did you interact with the datastore directly? It seems that folders might be missing and/or have incorrect permissions.
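
You can quickly confirm that directly on the PBS host; an ENOENT from mkstemp usually means the parent folder itself is gone, for example:

# check whether the chunk subfolders from the two error messages still exist
ls -ld /mnt/datastore/data_zfs/.chunks/0008 /mnt/datastore/data_zfs/.chunks/0019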
 
The datastore was at 100% capacity, leaving no room for new backups.

We deleted backups via the PBS web console, but despite this, the datastore storage space did not decrease, which concerned us.

We attempted to run garbage collection, but it was impossible due to the lack of available space on the datastore.

We performed some operations to free up storage space to allow the garbage collector to run, which took 3 days.

Once space was freed and we resumed the backups, these are the errors we encountered.
 
We performed some operations to free up storage space to allow the garbage collector to run, which took 3 days.
What operations exactly? It seems to me that you deleted some folders containing chunks of the chunk store. If that is the case, I would strongly recommend recreating the missing folders with the correct ownership and permissions and running a verify job, as sketched below. That might be cumbersome, but if you are lucky there might be backup snapshots that were not corrupted.
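
Should you want to try that repair, a rough sketch follows. It assumes the datastore root is /mnt/datastore/data_zfs; PBS keeps the chunks in 65536 subfolders of .chunks named 0000 through ffff, so missing ones can be recreated and handed back to the backup user like this:

# recreate any missing chunk subfolders (0000 .. ffff) without touching existing ones
cd /mnt/datastore/data_zfs/.chunks
for i in $(seq 0 65535); do
    mkdir -p "$(printf '%04x' "$i")"
done
chown -R backup:backup /mnt/datastore/data_zfs/.chunks

# then verify all snapshots on the datastore; corrupted ones will be flagged as failed
proxmox-backup-manager verify data_zfs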

Another option would be to delete the datastore and start from scratch with a new one. But in that case, of course, all existing backup snapshots will be lost.
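
If you go that route, it can be done from the CLI as well as from the web UI, roughly like this (the datastore name and path below are only examples):

# remove the broken datastore from the PBS configuration
# (this removes the config entry, the files on disk are not wiped by this)
proxmox-backup-manager datastore remove data_zfs

# create a fresh datastore on a new, empty directory and point the PVE
# storage at it afterwards
proxmox-backup-manager datastore create data_zfs_new /mnt/datastore/data_zfs_new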
 
Thank you very much. I recreated a datastore and it worked. Here is the result of my backup.

Can you confirm that the backup completed successfully?

INFO: starting new backup job: vzdump 1036 --notes-template '{{guestname}}' --remove 0 --mode snapshot --node bichat20 --storage TEST1
INFO: Starting Backup of VM 1036 (qemu)
INFO: Backup started at 2024-07-29 15:08:32
INFO: status = running
INFO: VM Name: srv
INFO: include disk 'scsi0' 'local:1036/vm-1036-disk-1.qcow2' 250G
INFO: include disk 'efidisk0' 'local:1036/vm-1036-disk-0.qcow2' 528K
INFO: include disk 'tpmstate0' 'local:1036/vm-1036-disk-2.raw' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/1036/2024-07-29T13:08:32Z'
INFO: attaching TPM drive to QEMU for backup
INFO: skipping guest-agent 'fs-freeze', agent configured but not running?
INFO: started backup task '60c99818-907e-4a48-932b-de245162e4eb'
INFO: resuming VM again
INFO: efidisk0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: tpmstate0-backup: dirty-bitmap status: created new
INFO: 0% (1.0 GiB of 250.0 GiB) in 3s, read: 350.7 MiB/s, write: 324.0 MiB/s
INFO: 1% (2.7 GiB of 250.0 GiB) in 10s, read: 244.0 MiB/s, write: 244.0 MiB/s
INFO: 2% (5.1 GiB of 250.0 GiB) in 19s, read: 268.4 MiB/s, write: 268.0 MiB/s
INFO: 3% (7.6 GiB of 250.0 GiB) in 29s, read: 258.8 MiB/s, write: 258.8 MiB/s
INFO: 4% (10.2 GiB of 250.0 GiB) in 38s, read: 301.8 MiB/s, write: 301.8 MiB/s
INFO: 5% (12.6 GiB of 250.0 GiB) in 45s, read: 353.1 MiB/s, write: 348.6 MiB/s
INFO: 6% (15.2 GiB of 250.0 GiB) in 53s, read: 321.0 MiB/s, write: 314.5 MiB/s
INFO: 7% (17.5 GiB of 250.0 GiB) in 1m 1s, read: 303.5 MiB/s, write: 299.0 MiB/s
INFO: 8% (20.1 GiB of 250.0 GiB) in 1m 9s, read: 325.0 MiB/s, write: 320.0 MiB/s
INFO: 9% (22.6 GiB of 250.0 GiB) in 1m 19s, read: 264.0 MiB/s, write: 256.8 MiB/s
INFO: 10% (25.3 GiB of 250.0 GiB) in 1m 28s, read: 298.2 MiB/s, write: 296.4 MiB/s
INFO: 11% (27.6 GiB of 250.0 GiB) in 1m 36s, read: 294.5 MiB/s, write: 292.0 MiB/s
INFO: 12% (30.2 GiB of 250.0 GiB) in 1m 46s, read: 274.0 MiB/s, write: 272.0 MiB/s
INFO: 13% (32.7 GiB of 250.0 GiB) in 2m 2s, read: 156.2 MiB/s, write: 155.2 MiB/s
INFO: 14% (35.2 GiB of 250.0 GiB) in 2m 12s, read: 257.2 MiB/s, write: 235.2 MiB/s
INFO: 15% (37.8 GiB of 250.0 GiB) in 2m 20s, read: 336.0 MiB/s, write: 287.5 MiB/s
INFO: 16% (40.0 GiB of 250.0 GiB) in 2m 26s, read: 378.0 MiB/s, write: 352.0 MiB/s
INFO: 17% (42.5 GiB of 250.0 GiB) in 2m 41s, read: 169.9 MiB/s, write: 169.9 MiB/s
INFO: 18% (45.1 GiB of 250.0 GiB) in 3m 4s, read: 113.9 MiB/s, write: 113.9 MiB/s
INFO: 19% (47.6 GiB of 250.0 GiB) in 3m 22s, read: 143.3 MiB/s, write: 135.6 MiB/s
INFO: 20% (50.1 GiB of 250.0 GiB) in 3m 32s, read: 254.0 MiB/s, write: 248.8 MiB/s
INFO: 21% (52.5 GiB of 250.0 GiB) in 3m 52s, read: 124.2 MiB/s, write: 124.2 MiB/s
INFO: 22% (55.2 GiB of 250.0 GiB) in 4m 4s, read: 233.0 MiB/s, write: 232.3 MiB/s
INFO: 23% (57.6 GiB of 250.0 GiB) in 4m 10s, read: 396.7 MiB/s, write: 396.7 MiB/s
INFO: 24% (60.2 GiB of 250.0 GiB) in 4m 21s, read: 241.1 MiB/s, write: 241.1 MiB/s
INFO: 25% (63.0 GiB of 250.0 GiB) in 4m 32s, read: 267.3 MiB/s, write: 267.3 MiB/s
INFO: 26% (65.1 GiB of 250.0 GiB) in 4m 38s, read: 348.7 MiB/s, write: 321.3 MiB/s
INFO: 27% (67.7 GiB of 250.0 GiB) in 4m 46s, read: 339.0 MiB/s, write: 299.0 MiB/s
INFO: 28% (70.1 GiB of 250.0 GiB) in 4m 51s, read: 495.2 MiB/s, write: 460.8 MiB/s
INFO: 29% (72.8 GiB of 250.0 GiB) in 5m 2s, read: 244.0 MiB/s, write: 208.7 MiB/s
INFO: 30% (75.1 GiB of 250.0 GiB) in 5m 10s, read: 295.5 MiB/s, write: 196.0 MiB/s
INFO: 31% (77.6 GiB of 250.0 GiB) in 5m 14s, read: 649.0 MiB/s, write: 415.0 MiB/s
INFO: 32% (80.1 GiB of 250.0 GiB) in 5m 20s, read: 424.0 MiB/s, write: 384.7 MiB/s
INFO: 33% (82.7 GiB of 250.0 GiB) in 5m 26s, read: 439.3 MiB/s, write: 381.3 MiB/s
INFO: 34% (85.1 GiB of 250.0 GiB) in 5m 32s, read: 416.0 MiB/s, write: 372.0 MiB/s
INFO: 35% (87.9 GiB of 250.0 GiB) in 5m 41s, read: 314.2 MiB/s, write: 267.1 MiB/s
INFO: 36% (90.0 GiB of 250.0 GiB) in 5m 47s, read: 372.7 MiB/s, write: 354.7 MiB/s
INFO: 37% (92.6 GiB of 250.0 GiB) in 5m 57s, read: 257.2 MiB/s, write: 222.8 MiB/s
INFO: 38% (95.3 GiB of 250.0 GiB) in 6m 7s, read: 278.0 MiB/s, write: 230.8 MiB/s
INFO: 39% (97.5 GiB of 250.0 GiB) in 6m 12s, read: 459.2 MiB/s, write: 417.6 MiB/s
INFO: 41% (102.9 GiB of 250.0 GiB) in 6m 18s, read: 924.7 MiB/s, write: 320.0 MiB/s
INFO: 42% (105.0 GiB of 250.0 GiB) in 6m 29s, read: 194.9 MiB/s, write: 194.9 MiB/s
INFO: 43% (107.6 GiB of 250.0 GiB) in 6m 39s, read: 265.6 MiB/s, write: 248.0 MiB/s
INFO: 47% (119.6 GiB of 250.0 GiB) in 6m 42s, read: 4.0 GiB/s, write: 82.7 MiB/s
INFO: 48% (120.8 GiB of 250.0 GiB) in 6m 45s, read: 421.3 MiB/s, write: 418.7 MiB/s
INFO: 49% (122.7 GiB of 250.0 GiB) in 6m 49s, read: 467.0 MiB/s, write: 308.0 MiB/s
INFO: 61% (153.5 GiB of 250.0 GiB) in 6m 52s, read: 10.3 GiB/s, write: 0 B/s
INFO: 75% (187.8 GiB of 250.0 GiB) in 6m 55s, read: 11.4 GiB/s, write: 46.7 MiB/s
INFO: 82% (206.7 GiB of 250.0 GiB) in 6m 58s, read: 6.3 GiB/s, write: 146.7 MiB/s
INFO: 83% (207.9 GiB of 250.0 GiB) in 7m 1s, read: 430.7 MiB/s, write: 428.0 MiB/s
INFO: 84% (210.3 GiB of 250.0 GiB) in 7m 7s, read: 406.7 MiB/s, write: 358.7 MiB/s
INFO: 88% (222.0 GiB of 250.0 GiB) in 7m 12s, read: 2.3 GiB/s, write: 216.8 MiB/s
INFO: 92% (231.3 GiB of 250.0 GiB) in 7m 15s, read: 3.1 GiB/s, write: 208.0 MiB/s
INFO: 99% (249.8 GiB of 250.0 GiB) in 7m 18s, read: 6.1 GiB/s, write: 72.0 MiB/s
INFO: 100% (250.0 GiB of 250.0 GiB) in 7m 21s, read: 81.5 MiB/s, write: 81.5 MiB/s
INFO: Waiting for server to finish backup validation...
INFO: backup is sparse: 143.77 GiB (57%) total zero data
INFO: backup was done incrementally, reused 143.77 GiB (57%)
INFO: transferred 250.00 GiB in 445 seconds (575.3 MiB/s)
INFO: adding notes to backup
INFO: Finished Backup of VM 1036 (00:07:28)
INFO: Backup finished at 2024-07-29 15:16:00
INFO: Backup job finished successfully
TASK OK

Previously, my backups appeared in the "Verify State" column with a green check mark and the label "Ok", but now the backups show a yellow question mark with the label "none". Why?
 

This means the backup has not been verified yet; check your verify job settings.
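
If you do not want to wait for the scheduled verify job, you can also trigger a verification of the new datastore manually on the PBS host, for example (replace TEST1 with the name you gave the datastore on the PBS side; the PVE storage name may differ):

# verify all snapshots on the datastore; the UI then shows 'Ok' (or a failed
# state) instead of 'none'
proxmox-backup-manager verify TEST1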
 
