PBS client not detecting previous manifest

edocod
Member · Feb 15, 2021
Hello,
I have a remote location for my backups, with a fairly slow downlink.

I'm backing up individual data folders of Nextcloud with this command:

Code:
proxmox-backup-client backup --crypt-mode=none userdata-xxxxxx.pxar:/mnt/nextcloud_data/xxxxxx userdata-yyyyyy.pxar:/mnt/nextcloud_data/yyyyyy --backup-id nextcloud_data --change-detection-mode=metadata --repository pbsuser@pbs@backup-location:ds

Since the link is too slow, I copied all the data to a hard drive and then went to the location to connect it directly to the PBS machine.
With the drive connected, I then ran the exact same command, but with "::1" as the address.

The backup was successful, and I now have a snapshot with all the archives, as shown in the control panel. Running the backup again locally makes PBS skip most of the files, and only a couple of KB get uploaded.


However, when running on the remote host, I get the following error:

Code:
Previous manifest does not contain an archive called 'userdata-xxxxxx.mpxar.didx', skipping download..

And it proceeds to re-upload all the data from scratch. I thought it would at least reuse the chunks already in the datastore, but it is actually re-uploading every single file.
The credentials are the same, so if the client can access the .didx files locally, I can't see why it couldn't do so remotely.

On the server, I see no errors:

Code:
2025-08-21T12:53:31+02:00: starting new backup on datastore 'cipressa-datastore' from xxxxxxx: "host/nextcloud_data/2025-08-21T10:52:14Z"
2025-08-21T12:53:31+02:00: protocol upgrade done
2025-08-21T12:53:31+02:00: GET /previous_backup_time
2025-08-21T12:53:31+02:00: GET /previous
2025-08-21T12:53:31+02:00: download 'index.json.blob' from previous backup 'host/nextcloud_data/2025-08-20T22:11:45Z'.
2025-08-21T12:53:31+02:00: GET /previous_backup_time
2025-08-21T12:53:32+02:00: POST /dynamic_index
2025-08-21T12:53:32+02:00: POST /dynamic_index
2025-08-21T12:53:32+02:00: created new dynamic index 1 ("host/nextcloud_data/2025-08-21T10:52:14Z/userdata-xxxxxxx.ppxar.didx")
2025-08-21T12:53:32+02:00: created new dynamic index 2 ("host/nextcloud_data/2025-08-21T10:52:14Z/userdata-xxxxxxx.mpxar.didx")

Of course, if I retried it now, it would fail again. But even if no .didx were available, wouldn't the server try to deduplicate the chunks anyway?

The upload I see is pretty much all the data, minus the compression. It would be nice to be able to reuse what I already brought to the server with my car :)
 
2025-08-21T12:53:31+02:00: download 'index.json.blob' from previous backup 'host/nextcloud_data/2025-08-20T22:11:45Z'.
Download and inspect the index.json for that previous snapshot via the WebUI. It contains the list of index files expected in that snapshot. Maybe the index.json file does not match the snapshot's actual contents?
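If the WebUI is not convenient, the same manifest can also be dumped with the client by restoring the special index.json target to stdout (repository and snapshot names below are the ones from your post; adjust as needed):

```shell
proxmox-backup-client restore \
    "host/nextcloud_data/2025-08-20T22:11:45Z" index.json - \
    --repository pbsuser@pbs@backup-location:ds
```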
 
Download and inspect the index.json for that previous snapshot via the WebUI. It contains the list of index files expected in that snapshot. Maybe the index.json file does not match the snapshot's actual contents?

Hello.
Looks like the index.json.blob is missing a couple of .didx files.

Perhaps an interrupted backup? Can I recreate the index file so they are recognized properly?

I assume I would need to know the checksum format and a way to convert the .json back into a binary blob, so I can check whether the backups are accepted that way.
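For reference, the decoded manifest is plain JSON with a `files` array (one entry per archive, with filename, csum, and size), so checking which archives it lists is straightforward. A quick sketch with trimmed, made-up sample data (checksums elided):

```python
import json

# Decoded manifest as dumped from index.json.blob; sample data only.
manifest = json.loads("""
{
  "backup-type": "host",
  "backup-id": "nextcloud_data",
  "files": [
    {"filename": "userdata-xxxxxx.ppxar.didx", "csum": "…", "size": 123},
    {"filename": "index.json.blob", "csum": "…", "size": 456}
  ]
}
""")

present = {f["filename"] for f in manifest["files"]}
expected = {"userdata-xxxxxx.ppxar.didx", "userdata-xxxxxx.mpxar.didx"}
missing = sorted(expected - present)
print(missing)  # ['userdata-xxxxxx.mpxar.didx']
```

Whether a hand-edited manifest would be accepted by the server is another matter, since the blob carries its own checksum; this only shows how to see what is missing.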
 
Thank you. Something went wrong during the data transfer, I guess... Please use a proper transfer method, e.g. syncing to a removable datastore.
You mean,
1. Install a pbs locally
2. Backup on that local pbs
3. Move the entire datastore contents to a removable storage
4. Connect removable storage to the destination pbs
5. Copy the datastore contents to the destination datastore?
 
No, what I mean is: set up the removable datastore on both the onsite and the offsite PBS instance, transfer the snapshot or the whole group from the local to the removable datastore via a sync job, bring the removable datastore to the remote site, attach it there, and sync it into that local datastore.
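As a rough CLI sketch of that workflow (all datastore and remote names below are made up, a reasonably recent PBS version with removable datastore support is assumed, and a "remote" entry pointing at each instance itself must already exist):

```shell
# On the local PBS: pull the snapshots from the source datastore
# onto the attached removable datastore.
proxmox-backup-manager pull local-self source-store removable-store

# ...physically move the removable drive to the remote site...

# On the remote PBS: attach the removable datastore, then pull its
# contents into the main datastore (name taken from the server log).
proxmox-backup-manager pull remote-self removable-store cipressa-datastore
```

The same transfer can of course be configured as a recurring sync job in the WebUI instead of a one-off pull.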
 
This raises a further question, however: suppose my backup is interrupted midway, so some of the .didx archives are missing from the latest backup. Does this mean no deduplication/chunk reuse will happen?
 
The backup snapshot will not be complete in that case; the server will remove all its index files again (leaving the chunks behind, which might only get cleaned up by the next garbage collection if they are not in use by other backup snapshots). An interrupted backup snapshot will therefore never be used as the previous backup snapshot, as it is never persisted.
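For background on the deduplication question: the chunk store is content-addressed by SHA-256 digest, so a chunk uploaded twice is only ever stored once. What is lost without a previous manifest is the client-side knowledge of which chunks the server already has, so the data is still re-read and re-sent over the wire even though the server discards the duplicates. A toy illustration of content addressing (not PBS's actual implementation):

```python
import hashlib

class ChunkStore:
    """Toy content-addressed store: one copy per unique chunk digest."""
    def __init__(self):
        self.chunks = {}  # digest -> chunk data

    def insert(self, data: bytes):
        """Store a chunk; return its digest and whether it was new."""
        digest = hashlib.sha256(data).hexdigest()
        is_new = digest not in self.chunks
        if is_new:
            self.chunks[digest] = data
        return digest, is_new

store = ChunkStore()
d1, new1 = store.insert(b"chunk A")
d2, new2 = store.insert(b"chunk A")  # same content, "uploaded" again
print(new1, new2, len(store.chunks))  # True False 1
```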