[SOLVED] Sync Fails with os error 2

Can anyone help me with this? Syncs used to work, but I neglected to set up a Garbage Collection task on the receiving server, so the drive filled up. I have since deleted all synced backups and removed and re-added the remotes, but I still get these errors.

Code:
2022-06-09T09:57:16-05:00: Starting datastore sync job 'Perseus:Backups:Backups:s-12c4c6ca-f307'
2022-06-09T09:57:16-05:00: Sync datastore 'Backups' from 'Perseus/Backups'
2022-06-09T09:57:16-05:00: found 19 groups to sync
2022-06-09T09:57:16-05:00: sync snapshot "ct/200/2022-04-24T08:15:02Z"
2022-06-09T09:57:16-05:00: sync archive pct.conf.blob
2022-06-09T09:57:16-05:00: sync archive fw.conf.blob
2022-06-09T09:57:16-05:00: sync archive root.pxar.didx
2022-06-09T09:57:16-05:00: percentage done: 0.75% (0/19 groups, 1/7 snapshots in group #1)
2022-06-09T09:57:16-05:00: sync group ct/200 failed - No such file or directory (os error 2)
2022-06-09T09:57:16-05:00: sync snapshot "vm/100/2022-03-26T07:30:02Z"
2022-06-09T09:57:16-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:16-05:00: sync archive drive-virtio1.img.fidx
2022-06-09T09:57:19-05:00: percentage done: 6.02% (1/19 groups, 1/7 snapshots in group #2)
2022-06-09T09:57:19-05:00: sync group vm/100 failed - No such file or directory (os error 2)
2022-06-09T09:57:19-05:00: sync snapshot "vm/101/2022-03-19T08:23:00Z"
2022-06-09T09:57:19-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:19-05:00: sync archive drive-virtio1.img.fidx
2022-06-09T09:57:19-05:00: percentage done: 11.28% (2/19 groups, 1/7 snapshots in group #3)
2022-06-09T09:57:19-05:00: sync group vm/101 failed - No such file or directory (os error 2)
2022-06-09T09:57:19-05:00: sync snapshot "vm/102/2022-05-01T00:45:02Z"
2022-06-09T09:57:19-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:19-05:00: sync archive drive-virtio0.img.fidx
2022-06-09T09:57:20-05:00: percentage done: 16.54% (3/19 groups, 1/7 snapshots in group #4)
2022-06-09T09:57:20-05:00: sync group vm/102 failed - No such file or directory (os error 2)
2022-06-09T09:57:20-05:00: sync snapshot "vm/103/2022-03-26T09:03:29Z"
2022-06-09T09:57:20-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:20-05:00: sync archive drive-virtio0.img.fidx
2022-06-09T09:57:20-05:00: percentage done: 21.80% (4/19 groups, 1/7 snapshots in group #5)
2022-06-09T09:57:20-05:00: sync group vm/103 failed - No such file or directory (os error 2)
2022-06-09T09:57:20-05:00: sync snapshot "vm/104/2022-04-30T00:02:02Z"
2022-06-09T09:57:20-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:20-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T09:57:20-05:00: percentage done: 27.07% (5/19 groups, 1/7 snapshots in group #6)
2022-06-09T09:57:20-05:00: sync group vm/104 failed - No such file or directory (os error 2)
2022-06-09T09:57:20-05:00: sync snapshot "vm/105/2022-04-30T00:03:03Z"
2022-06-09T09:57:20-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:20-05:00: sync archive drive-virtio1.img.fidx
2022-06-09T09:57:26-05:00: percentage done: 32.33% (6/19 groups, 1/7 snapshots in group #7)
2022-06-09T09:57:26-05:00: sync group vm/105 failed - No such file or directory (os error 2)
2022-06-09T09:57:26-05:00: sync snapshot "vm/107/2022-04-30T09:02:12Z"
2022-06-09T09:57:26-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:26-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T09:57:27-05:00: percentage done: 37.59% (7/19 groups, 1/7 snapshots in group #8)
2022-06-09T09:57:27-05:00: sync group vm/107 failed - No such file or directory (os error 2)
2022-06-09T09:57:27-05:00: sync snapshot "vm/108/2022-03-26T07:30:02Z"
2022-06-09T09:57:27-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:27-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T09:57:30-05:00: percentage done: 42.86% (8/19 groups, 1/7 snapshots in group #9)
2022-06-09T09:57:30-05:00: sync group vm/108 failed - No such file or directory (os error 2)
2022-06-09T09:57:31-05:00: sync snapshot "vm/109/2022-04-30T00:06:28Z"
2022-06-09T09:57:31-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:31-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T09:57:32-05:00: percentage done: 48.12% (9/19 groups, 1/7 snapshots in group #10)
2022-06-09T09:57:32-05:00: sync group vm/109 failed - No such file or directory (os error 2)
2022-06-09T09:57:33-05:00: sync snapshot "vm/112/2022-03-27T00:47:12Z"
2022-06-09T09:57:33-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:33-05:00: sync archive drive-scsi1.img.fidx
2022-06-09T09:57:34-05:00: percentage done: 53.38% (10/19 groups, 1/7 snapshots in group #11)
2022-06-09T09:57:34-05:00: sync group vm/112 failed - No such file or directory (os error 2)
2022-06-09T09:57:34-05:00: sync snapshot "vm/113/2022-03-26T09:10:06Z"
2022-06-09T09:57:35-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:35-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T09:57:35-05:00: percentage done: 58.65% (11/19 groups, 1/7 snapshots in group #12)
2022-06-09T09:57:35-05:00: sync group vm/113 failed - No such file or directory (os error 2)
2022-06-09T09:57:36-05:00: sync snapshot "vm/114/2022-03-27T00:56:06Z"
2022-06-09T09:57:36-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:36-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T09:57:37-05:00: percentage done: 63.91% (12/19 groups, 1/7 snapshots in group #13)
2022-06-09T09:57:37-05:00: sync group vm/114 failed - No such file or directory (os error 2)
2022-06-09T09:57:37-05:00: sync snapshot "vm/115/2022-06-07T19:56:55Z"
2022-06-09T09:57:37-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:37-05:00: sync archive drive-scsi1.img.fidx
2022-06-09T09:57:38-05:00: percentage done: 73.68% (14/19 groups)
2022-06-09T09:57:38-05:00: sync group vm/115 failed - No such file or directory (os error 2)
2022-06-09T09:57:38-05:00: sync snapshot "vm/117/2021-07-29T20:46:10Z"
2022-06-09T09:57:38-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:38-05:00: sync archive drive-scsi1.img.fidx
2022-06-09T09:57:40-05:00: percentage done: 78.95% (15/19 groups)
2022-06-09T09:57:40-05:00: sync group vm/117 failed - No such file or directory (os error 2)
2022-06-09T09:57:41-05:00: sync snapshot "vm/400/2022-04-24T05:00:02Z"
2022-06-09T09:57:41-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:41-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T09:57:42-05:00: percentage done: 79.61% (15/19 groups, 1/8 snapshots in group #16)
2022-06-09T09:57:42-05:00: sync group vm/400 failed - No such file or directory (os error 2)
2022-06-09T09:57:42-05:00: sync snapshot "vm/401/2022-04-24T05:00:09Z"
2022-06-09T09:57:42-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:42-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T09:57:42-05:00: percentage done: 84.87% (16/19 groups, 1/8 snapshots in group #17)
2022-06-09T09:57:42-05:00: sync group vm/401 failed - No such file or directory (os error 2)
2022-06-09T09:57:42-05:00: sync snapshot "vm/402/2022-04-24T05:00:17Z"
2022-06-09T09:57:42-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:42-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T09:57:43-05:00: percentage done: 90.13% (17/19 groups, 1/8 snapshots in group #18)
2022-06-09T09:57:43-05:00: sync group vm/402 failed - No such file or directory (os error 2)
2022-06-09T09:57:43-05:00: sync snapshot "vm/403/2022-04-24T05:00:20Z"
2022-06-09T09:57:43-05:00: sync archive qemu-server.conf.blob
2022-06-09T09:57:43-05:00: sync archive drive-scsi1.img.fidx
2022-06-09T09:57:46-05:00: percentage done: 95.39% (18/19 groups, 1/8 snapshots in group #19)
2022-06-09T09:57:46-05:00: sync group vm/403 failed - No such file or directory (os error 2)
2022-06-09T09:57:47-05:00: TASK ERROR: sync failed with some errors.
 
Both servers are on PBS 2.0-4, and no updates show as available. I have also rebooted both PBS servers to make sure the newest kernel is loaded.
 
I see that 2.2 is out. I'll try upgrading to it and see if my problem is resolved.
 
Upgrading to 2.2 did not resolve my problem. However, the error messages are different.

Code:
2022-06-09T11:14:11-05:00: Starting datastore sync job 'Perseus:Backups:Backups::s-12c4c6ca-f307'
2022-06-09T11:14:11-05:00: sync datastore 'Backups' from 'Perseus/Backups'
2022-06-09T11:14:11-05:00: ----
2022-06-09T11:14:11-05:00: Syncing datastore 'Backups', root namespace into datastore 'Backups', root namespace
2022-06-09T11:14:11-05:00: found 19 groups to sync
2022-06-09T11:14:11-05:00: sync snapshot ct/200/2022-04-24T08:15:02Z
2022-06-09T11:14:11-05:00: sync archive pct.conf.blob
2022-06-09T11:14:11-05:00: sync archive fw.conf.blob
2022-06-09T11:14:11-05:00: sync archive root.pxar.didx
2022-06-09T11:14:11-05:00: percentage done: 0.75% (0/19 groups, 1/7 snapshots in group #1)
2022-06-09T11:14:11-05:00: sync group ct/200 failed - creating chunk on store 'Backups' failed for 04ba174076e8df4c4862a22d657ed0a86649f595c13bcc30e39a1ced2b6384b1 - No such file or directory (os error 2)
2022-06-09T11:14:11-05:00: sync snapshot vm/100/2022-03-26T07:30:02Z
2022-06-09T11:14:11-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:11-05:00: sync archive drive-virtio1.img.fidx
2022-06-09T11:14:14-05:00: percentage done: 6.02% (1/19 groups, 1/7 snapshots in group #2)
2022-06-09T11:14:14-05:00: sync group vm/100 failed - creating chunk on store 'Backups' failed for bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8 - No such file or directory (os error 2)
2022-06-09T11:14:14-05:00: sync snapshot vm/101/2022-03-19T08:23:00Z
2022-06-09T11:14:14-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:14-05:00: sync archive drive-virtio1.img.fidx
2022-06-09T11:14:15-05:00: percentage done: 11.28% (2/19 groups, 1/7 snapshots in group #3)
2022-06-09T11:14:15-05:00: sync group vm/101 failed - creating chunk on store 'Backups' failed for 7eef169f4426403ea7f4a295028ec987f6700dd0ce914301d761c470a836c514 - No such file or directory (os error 2)
2022-06-09T11:14:15-05:00: sync snapshot vm/102/2022-05-01T00:45:02Z
2022-06-09T11:14:15-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:15-05:00: sync archive drive-virtio0.img.fidx
2022-06-09T11:14:17-05:00: percentage done: 16.54% (3/19 groups, 1/7 snapshots in group #4)
2022-06-09T11:14:17-05:00: sync group vm/102 failed - creating chunk on store 'Backups' failed for e31246a34c2160be8055c01a8e3b322cd6d34383dbf5c587088b3f2c3fcbd1e3 - No such file or directory (os error 2)
2022-06-09T11:14:17-05:00: sync snapshot vm/103/2022-03-26T09:03:29Z
2022-06-09T11:14:17-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:17-05:00: sync archive drive-virtio0.img.fidx
2022-06-09T11:14:19-05:00: percentage done: 21.80% (4/19 groups, 1/7 snapshots in group #5)
2022-06-09T11:14:19-05:00: sync group vm/103 failed - creating chunk on store 'Backups' failed for 98967c4e91282ceab654afdf0ed719199c79d64c382786006ef7aa3749623812 - No such file or directory (os error 2)
2022-06-09T11:14:19-05:00: sync snapshot vm/104/2022-04-30T00:02:02Z
2022-06-09T11:14:19-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:19-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T11:14:20-05:00: percentage done: 27.07% (5/19 groups, 1/7 snapshots in group #6)
2022-06-09T11:14:20-05:00: sync group vm/104 failed - creating chunk on store 'Backups' failed for b306d0b62c6e2ed2eae9b81a0370006188da935d274c7c95f080720889850420 - No such file or directory (os error 2)
2022-06-09T11:14:21-05:00: sync snapshot vm/105/2022-04-30T00:03:03Z
2022-06-09T11:14:21-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:21-05:00: sync archive drive-virtio1.img.fidx
2022-06-09T11:14:28-05:00: percentage done: 32.33% (6/19 groups, 1/7 snapshots in group #7)
2022-06-09T11:14:28-05:00: sync group vm/105 failed - creating chunk on store 'Backups' failed for fa3abc7cb59b462d484ca2162d642c4864e409fa1b78333eb61632a42edae4ec - No such file or directory (os error 2)
2022-06-09T11:14:28-05:00: sync snapshot vm/107/2022-04-30T09:02:12Z
2022-06-09T11:14:28-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:28-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T11:14:30-05:00: percentage done: 37.59% (7/19 groups, 1/7 snapshots in group #8)
2022-06-09T11:14:30-05:00: sync group vm/107 failed - creating chunk on store 'Backups' failed for 242b34de0a439c1ada84ba090865b67791fc34f3b5baf22f0562dddabc15fd03 - No such file or directory (os error 2)
2022-06-09T11:14:30-05:00: sync snapshot vm/108/2022-03-26T07:30:02Z
2022-06-09T11:14:30-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:30-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T11:14:32-05:00: percentage done: 42.86% (8/19 groups, 1/7 snapshots in group #9)
2022-06-09T11:14:32-05:00: sync group vm/108 failed - creating chunk on store 'Backups' failed for 2f34e9a70995448f4bf021b275d97c6f1381005c2d5211cda35b3e8479dcd390 - No such file or directory (os error 2)
2022-06-09T11:14:32-05:00: sync snapshot vm/109/2022-04-30T00:06:28Z
2022-06-09T11:14:32-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:32-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T11:14:34-05:00: percentage done: 48.12% (9/19 groups, 1/7 snapshots in group #10)
2022-06-09T11:14:34-05:00: sync group vm/109 failed - creating chunk on store 'Backups' failed for 6f1a285c343ebe5ba9439eca8400092342b32134c0c59bbf46236593bf2d0e10 - No such file or directory (os error 2)
2022-06-09T11:14:34-05:00: sync snapshot vm/112/2022-03-27T00:47:12Z
2022-06-09T11:14:34-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:34-05:00: sync archive drive-scsi1.img.fidx
2022-06-09T11:14:38-05:00: percentage done: 53.38% (10/19 groups, 1/7 snapshots in group #11)
2022-06-09T11:14:38-05:00: sync group vm/112 failed - creating chunk on store 'Backups' failed for 547daf747b55795670ee86a45bc402597df20b4cb1262000f098f234fe1b362a - No such file or directory (os error 2)
2022-06-09T11:14:38-05:00: sync snapshot vm/113/2022-03-26T09:10:06Z
2022-06-09T11:14:38-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:38-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T11:14:40-05:00: percentage done: 58.65% (11/19 groups, 1/7 snapshots in group #12)
2022-06-09T11:14:40-05:00: sync group vm/113 failed - creating chunk on store 'Backups' failed for 8bae7992883d8074a22bd9d931fb0840f937c362c5a4a407a4184ec1c7776e17 - No such file or directory (os error 2)
2022-06-09T11:14:40-05:00: sync snapshot vm/114/2022-03-27T00:56:06Z
2022-06-09T11:14:40-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:40-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T11:14:42-05:00: percentage done: 63.91% (12/19 groups, 1/7 snapshots in group #13)
2022-06-09T11:14:43-05:00: sync group vm/114 failed - creating chunk on store 'Backups' failed for 2a43346d8b2458604c6a2f9ed1f72cf15d5f14dd864460acef077904a968db91 - No such file or directory (os error 2)
2022-06-09T11:14:43-05:00: sync snapshot vm/115/2022-06-07T19:56:55Z
2022-06-09T11:14:43-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:43-05:00: sync archive drive-scsi1.img.fidx
2022-06-09T11:14:44-05:00: percentage done: 73.68% (14/19 groups)
2022-06-09T11:14:44-05:00: sync group vm/115 failed - creating chunk on store 'Backups' failed for feb29652ade3a6ad5ee1ba833691688634aadf9fc3650411d4535128e2da886d - No such file or directory (os error 2)
2022-06-09T11:14:44-05:00: sync snapshot vm/117/2021-07-29T20:46:10Z
2022-06-09T11:14:44-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:44-05:00: sync archive drive-scsi1.img.fidx
2022-06-09T11:14:46-05:00: percentage done: 78.95% (15/19 groups)
2022-06-09T11:14:46-05:00: sync group vm/117 failed - creating chunk on store 'Backups' failed for bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8 - No such file or directory (os error 2)
2022-06-09T11:14:46-05:00: sync snapshot vm/400/2022-04-24T05:00:02Z
2022-06-09T11:14:46-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:46-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T11:14:47-05:00: percentage done: 79.61% (15/19 groups, 1/8 snapshots in group #16)
2022-06-09T11:14:47-05:00: sync group vm/400 failed - creating chunk on store 'Backups' failed for 7d98d2a1dea68aea09864e160caf3db0cb252fa032c80ae9a761da5dec88642f - No such file or directory (os error 2)
2022-06-09T11:14:47-05:00: sync snapshot vm/401/2022-04-24T05:00:09Z
2022-06-09T11:14:47-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:47-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T11:14:48-05:00: percentage done: 84.87% (16/19 groups, 1/8 snapshots in group #17)
2022-06-09T11:14:48-05:00: sync group vm/401 failed - creating chunk on store 'Backups' failed for 5b34f51de592caf114ef9ccaba8aff8ba2cdb5539b9df6b03d738fc4bf59f3a6 - No such file or directory (os error 2)
2022-06-09T11:14:48-05:00: sync snapshot vm/402/2022-04-24T05:00:17Z
2022-06-09T11:14:48-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:48-05:00: sync archive drive-scsi0.img.fidx
2022-06-09T11:14:49-05:00: percentage done: 90.13% (17/19 groups, 1/8 snapshots in group #18)
2022-06-09T11:14:49-05:00: sync group vm/402 failed - creating chunk on store 'Backups' failed for a7e18fc65b4be70e3d50f05b70352b6956b08d65188e141aa9b47ace96ad823f - No such file or directory (os error 2)
2022-06-09T11:14:50-05:00: sync snapshot vm/403/2022-04-24T05:00:20Z
2022-06-09T11:14:50-05:00: sync archive qemu-server.conf.blob
2022-06-09T11:14:50-05:00: sync archive drive-scsi1.img.fidx
2022-06-09T11:14:52-05:00: percentage done: 95.39% (18/19 groups, 1/8 snapshots in group #19)
2022-06-09T11:14:52-05:00: sync group vm/403 failed - creating chunk on store 'Backups' failed for bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8 - No such file or directory (os error 2)
2022-06-09T11:14:52-05:00: Finished syncing namespace , current progress: 18 groups, 1 snapshots
2022-06-09T11:14:52-05:00: TASK ERROR: sync failed with some errors.
 
Is it possible you deleted the sub-directories in the .chunks dir of your datastore? It should look like this:

Code:
$ ls /path/to/datastore/.chunks
0000
0001
0002
0003
0004
...
fffa
fffb
fffc
fffd
fffe
ffff
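
For reference, a quick way to check whether that structure is still intact is to count the sub-directories; an intact chunk store has 65536 of them (0000 through ffff). A minimal sketch, substitute your actual datastore path:

Code:
# should print 65536 for a healthy .chunks directory
ls /path/to/datastore/.chunks | wc -l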
 
Yes, that is exactly what I did. I had let the drive run out of space and could not do garbage collection. I decided I wanted to start over so I deleted every file inside the .chunks directory.
 
Thank you for the clue. I was able to resolve my issue by deleting the datastore and recreating it.

Code:
root@alcaeus:/mnt/datastore/Backups/ct# proxmox-backup-manager datastore list
┌─────────┬────────────────────────┬─────────┐
│ name    │ path                   │ comment │
╞═════════╪════════════════════════╪═════════╡
│ Backups │ /mnt/datastore/Backups │         │
└─────────┴────────────────────────┴─────────┘
root@alcaeus:/mnt/datastore/Backups/ct# proxmox-backup-manager datastore remove Backups


Code:
root@alcaeus:/mnt# proxmox-backup-manager datastore create Backups /mnt/datastore/Backups
Chunkstore create: 1%
Chunkstore create: 2%
...
Chunkstore create: 99%
TASK OK
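
Since the root cause was garbage collection never running, it may also be worth giving the datastore a GC schedule so the disk does not fill up again. A minimal sketch, assuming the datastore is still named 'Backups' and a daily run is acceptable:

Code:
# run garbage collection on the Backups datastore once a day
proxmox-backup-manager datastore update Backups --gc-schedule daily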
 
I've got the same error.
I didn't delete any chunks; what can I do?
I use environment variables:

Code:
export PBS_REPOSITORY=
export PBS_PASSWORD=
export PBS_FINGERPRINT=
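
For anyone following along, the repository string generally has the form user@realm@host:datastore. The values below are placeholders (based on the host and datastore names that appear later in this thread), not the poster's real settings:

Code:
export PBS_REPOSITORY='root@pam@10.131.240.149:Backup-store-1'
export PBS_PASSWORD='...'
export PBS_FINGERPRINT='<server certificate fingerprint>'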

Then I try to run
proxmox-backup-client backup tms_psql.pxar:/opt/data/tms

and I get:

Code:
proxmox-backup-client backup tms_psql.pxar:/opt/data/tms
Starting backup: host/pbs-s1-2/2023-03-14T10:48:18Z
Client name: pbs-s1-2
Starting backup protocol: Tue Mar 14 13:48:18 2023
Downloading previous manifest (Tue Mar 14 13:48:14 2023)
Upload directory '/opt/data/tms' to '10.131.240.149:Backup-store-1' as tms_psql.pxar.didx
Error downloading .didx from previous manifest: Unable to open dynamic index "/opt/backup-store/host/pbs-s1-2/2023-03-14T10:48:14Z/tms_psql.pxar.didx" - No such file or directory (os error 2)
tms_psql.pxar: had to backup 9.697 MiB of 9.697 MiB (compressed 9.697 MiB) in 0.16s
tms_psql.pxar: average backup speed: 61.381 MiB/s
Uploaded backup catalog (84 B)
Duration: 0.25s
End Time: Tue Mar 14 13:48:18 2023

What is going on?

proxmox-backup-client/stable,now 2.3.3-1 amd64 [installed]
proxmox-backup-server/stable,now 2.3.3-1 amd64 [installed]

cat /etc/os-release
Code:
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

I've deleted the backup group; the error persists.
If I run the same command twice, I don't get the error on the second run.
But if I run a new command for a new backup, I still get the error on the first run...
 
The previous snapshot references an index that is not there..

What does "/opt/backup-store/host/pbs-s1-2/2023-03-14T10:48:14Z/" contain on the PBS side? E.g., "ls -lha /opt/backup-store/host/pbs-s1-2/2023-03-14T10:48:14Z/"?
 
ls -lha /opt/backup-store/host/pbs-s1-2/2023-03-14T10:48:14Z/
Code:
total 28K
drwxr-xr-x  2 backup backup 4.0K Mar 14 13:48 .
drwxr-xr-x 24 backup backup 4.0K Mar 14 13:59 ..
-rw-r--r--  1 backup backup 4.1K Mar 14 13:48 catalog.pcat1.didx
-rw-r--r--  1 backup backup 5.2K Mar 14 13:48 home_tms_iris-client.pxar.didx
-rw-r--r--  1 backup backup  392 Mar 14 13:48 index.json.blob
 
I need to back up three objects:

Code:
proxmox-backup-client backup tms_psql.pxar:/opt/data/tms
proxmox-backup-client backup etc_iris_.pxar:/opt/data/etc/iris
proxmox-backup-client backup home_tms_iris-client.pxar:/opt/data/home/tms/iris-client

But I don't understand what is going on. The first object creates a folder, and then the second one doesn't find the needed info in it?
 
You can upload all three in one go:

Code:
proxmox-backup-client backup first.pxar:/first/path second.pxar:/second/path ...

Could you post the manifest contents as well (index.json.blob)?

Code:
proxmox-backup-debug inspect file --decode - /opt/backup-store/host/pbs-s1-2/2023-03-14T10:48:14Z/index.json.blob
 
I don't have that folder. Can I choose any other one?
cd /opt/backup-store/host/pbs-s1-2/2023-03-14T10:55:22Z/
root@pbs-s1-s2:/opt/backup-store/host/pbs-s1-2/2023-03-14T10:55:22Z# proxmox-backup-debug inspect file --decode - index.json.blob
Code:
{
  "backup-id": "pbs-s1-2",
  "backup-time": 1678791322,
  "backup-type": "host",
  "files": [
    {
      "crypt-mode": "none",
      "csum": "3bde114c57f32bbdb575265f5b2500673b46ba02b572fa5437c67a83fb2148b3",
      "filename": "tms_psql.pxar.didx",
      "size": 10168373
    },
    {
      "crypt-mode": "none",
      "csum": "3bfc7a8213fd32d9045d7449db74dbb6787ed4bd48700e0bbd857df67a9e794a",
      "filename": "catalog.pcat1.didx",
      "size": 84
    }
  ],
  "signature": null,
  "unprotected": {
    "chunk_upload_stats": {
      "compressed_size": 96,
      "count": 1,
      "duplicates": 1,
      "size": 402
    },
    "verify_state": {
      "state": "ok",
      "upid": "UPID:pbs-s1-2:00038BE6:1B5CE0D8:00000017:641053F3:verify:Backup\\x2dstore\\x2d1:root@pam:"
    }
  }
}size: 478
encryption: none
 
Did you prune in the meantime? Does a backup now work? I would suggest backing up all three directories in one go, or using a different backup group for each directory (you can specify --backup-id; it defaults to the hostname when doing a host backup), as sketched below.
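
A minimal sketch of the separate-group variant; the --backup-id values are only examples:

Code:
proxmox-backup-client backup tms_psql.pxar:/opt/data/tms --backup-id pbs-s1-2-psql
proxmox-backup-client backup etc_iris_.pxar:/opt/data/etc/iris --backup-id pbs-s1-2-etc
proxmox-backup-client backup home_tms_iris-client.pxar:/opt/data/home/tms/iris-client --backup-id pbs-s1-2-home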
 
I did not prune, unless it is scheduled by default somewhere.
Backups seem to work, but the error still exists.
Using the single combined command seems to help; the error is gone. But I can't understand why this works on one PBS server:

Code:
ERRORS=/root/errors.log
BACKUPPATH=/opt/data
rm $ERRORS
DBNAME1=tms

proxmox-backup-client backup "$DBNAME1"_psql.pxar:$BACKUPPATH/$DBNAME1 2>>$ERRORS
sleep 2
proxmox-backup-client backup etc_iris_.pxar:$BACKUPPATH/etc/iris 2>>$ERRORS
sleep 2
proxmox-backup-client backup home_tms_iris-client.pxar:$BACKUPPATH/home/tms/iris-client 2>>$ERRORS

and errors.log is empty after this script runs.

But on the other server I need to use:
Code:
proxmox-backup-client backup "$DBNAME1"_psql.pxar:$BACKUPPATH/$DBNAME1 etc_iris_.pxar:$BACKUPPATH/etc/iris home_tms_iris-client.pxar:$BACKUPPATH/home/tms/iris-client 2>>$ERRORS

Otherwise I get an error, and even with the single combined command I get this in errors.log:

Code:
Starting backup: host/pbs-s1-2/2023-03-15T08:17:43Z
Client name: pbs-s1-2
Starting backup protocol: Wed Mar 15 11:17:43 2023
Downloading previous manifest (Wed Mar 15 11:15:19 2023)
Upload directory '/opt/data/tms' to '10.131.240.149:Backup-store-1' as tms_psql.pxar.didx
tms_psql.pxar: had to backup 3.182 MiB of 9.784 MiB (compressed 3.182 MiB) in 0.09s
tms_psql.pxar: average backup speed: 35.08 MiB/s
tms_psql.pxar: backup was done incrementally, reused 6.602 MiB (67.5%)
Upload directory '/opt/data/etc/iris' to '10.131.240.149:Backup-store-1' as etc_iris_.pxar.didx
etc_iris_.pxar: had to backup 9.979 KiB of 9.979 KiB (compressed 9.99 KiB) in 0.00s
etc_iris_.pxar: average backup speed: 3.474 MiB/s
Upload directory '/opt/data/home/tms/iris-client' to '10.131.240.149:Backup-store-1' as home_tms_iris-client.pxar.didx
home_tms_iris-client.pxar: had to backup 2.508 MiB of 72.044 MiB (compressed 2.508 MiB) in 0.31s
home_tms_iris-client.pxar: average backup speed: 8.148 MiB/s
home_tms_iris-client.pxar: backup was done incrementally, reused 69.536 MiB (96.5%)
Uploaded backup catalog (187 B)
Duration: 0.71s
End Time: Wed Mar 15 11:17:44 2023

Is this an error? Why is it there? What changed, and in which version of which app? What should I check or downgrade?
 
OK! It seems this client version works fine:
proxmox-backup-client/now 2.1.1-1 amd64 [installed,local]
And this one does not:
proxmox-backup-client/stable,now 2.3.3-1 amd64 [installed]

Code:
apt remove proxmox-backup-client
apt install proxmox-backup-client=2.1.1-1

That solved the problems.
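
If you stay on the older client, it may also help to hold the package so a routine apt upgrade does not pull 2.3.x back in (standard apt usage, not something from this thread):

Code:
apt-mark hold proxmox-backup-client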
 
Yes, there definitely is also an issue with handling incremental host backups where the names of the pxar archives change between snapshots; just filed:

https://bugzilla.proxmox.com/show_bug.cgi?id=4591

Note that the backup itself works fine; it just prints an error that isn't really one.
 
