Hello,
I have a remote location for my backups, with a fairly slow downlink.
I'm backing up individual Nextcloud data folders with this command:
Code:
proxmox-backup-client backup --crypt-mode=none userdata-xxxxxx.pxar:/mnt/nextcloud_data/xxxxxx userdata-yyyyyy.pxar:/mnt/nextcloud_data/yyyyyy --backup-id nextcloud_data --change-detection-mode=metadata --repository pbsuser@pbs@backup-location:ds
Since the downlink is too slow, I copied all the data to a hard drive and drove to the location to connect it directly to the PBS machine.
With the drive connected, I then ran the exact same command, but with "::1" as the IP address.
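For reference, the local run looked like this (same command, only the repository host swapped; I'm writing the IPv6 loopback in brackets here, which is how I understand the repository syntax expects it):
Code:
proxmox-backup-client backup --crypt-mode=none userdata-xxxxxx.pxar:/mnt/nextcloud_data/xxxxxx userdata-yyyyyy.pxar:/mnt/nextcloud_data/yyyyyy --backup-id nextcloud_data --change-detection-mode=metadata --repository pbsuser@pbs@[::1]:ds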
The backup was successful, and I now have a snapshot with all the archives, as shown in the control panel. Running the backup again locally makes PBS skip most of the files, and only a couple of kilobytes get uploaded.

However, when running it from the remote host, I get the following error:
Code:
Previous manifest does not contain an archive called 'userdata-xxxxxx.mpxar.didx', skipping download..
It then proceeds to reupload all the data from scratch. I thought it would at least reuse the chunks already in the datastore, but it is actually reuploading every single file.
The credentials are the same, so if the client can access the .didx files locally, I can't see why it couldn't do so remotely.
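In case it helps, this is roughly how I'd verify which archives the previous snapshot actually contains (assuming the snapshot subcommands behave as the client docs describe; the snapshot path is the one from the server log below):
Code:
# list snapshots in the backup group
proxmox-backup-client snapshot list --repository pbsuser@pbs@backup-location:ds
# list the archives/index files inside the previous snapshot
proxmox-backup-client snapshot files host/nextcloud_data/2025-08-20T22:11:45Z --repository pbsuser@pbs@backup-location:ds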
On the server, I see no errors:
Code:
2025-08-21T12:53:31+02:00: starting new backup on datastore 'cipressa-datastore' from xxxxxxx: "host/nextcloud_data/2025-08-21T10:52:14Z"
2025-08-21T12:53:31+02:00: protocol upgrade done
2025-08-21T12:53:31+02:00: GET /previous_backup_time
2025-08-21T12:53:31+02:00: GET /previous
2025-08-21T12:53:31+02:00: download 'index.json.blob' from previous backup 'host/nextcloud_data/2025-08-20T22:11:45Z'.
2025-08-21T12:53:31+02:00: GET /previous_backup_time
2025-08-21T12:53:32+02:00: POST /dynamic_index
2025-08-21T12:53:32+02:00: POST /dynamic_index
2025-08-21T12:53:32+02:00: created new dynamic index 1 ("host/nextcloud_data/2025-08-21T10:52:14Z/userdata-xxxxxxx.ppxar.didx")
2025-08-21T12:53:32+02:00: created new dynamic index 2 ("host/nextcloud_data/2025-08-21T10:52:14Z/userdata-xxxxxxx.mpxar.didx")
Of course, if I retried it now, it would fail again. But even if no .didx were available, wouldn't the server try to deduplicate the chunks anyway?
The upload I see is pretty much all the data, minus the compression. It would be nice to be able to reuse what I already brought to the server by car.
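One thing I could still check (assuming proxmox-backup-client status reports datastore usage the way the docs describe) is whether the chunks are at least deduplicated on disk, even if they travel over the wire again:
Code:
# datastore usage before the remote run
proxmox-backup-client status --repository pbsuser@pbs@backup-location:ds
# ... run the backup from the remote host ...
# usage afterwards: if server-side dedup works, "used" should barely grow
proxmox-backup-client status --repository pbsuser@pbs@backup-location:ds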