Sync Job from PBS 4.0.14 Server to PBS 3.4 Server Succeeds, but No Data Transferred. Help with Debugging?

I originally posted this troubleshooting message as part of this thread: https://forum.proxmox.com/threads/h...n-api-permissions-not-sure-what-to-do.169584/

However, that thread was more about how to set up remote sync at all. After thinking about it, I decided to break this out into a separate troubleshooting thread to make it easier to find for anyone having a similar issue.

I'm getting this notice when the task succeeds (TASK OK):
Code:
2025-08-17T15:55:06-05:00: Starting datastore sync job 'Tuxis-AndromedaStore2:DB2685_AndromedaStore2:andromedaStore1:Encrypted:s-d348470b-04e8'
2025-08-17T15:55:06-05:00: sync datastore 'andromedaStore1' to 'Tuxis-AndromedaStore2/DB2685_AndromedaStore2'
2025-08-17T15:55:07-05:00: Summary: sync job found no new data to push
2025-08-17T15:55:07-05:00: sync job 'Tuxis-AndromedaStore2:DB2685_AndromedaStore2:andromedaStore1:Encrypted:s-d348470b-04e8' end
2025-08-17T15:55:07-05:00: queued notification (id=7bf63506-dc33-4041-b7a1-c45a074bc374)
2025-08-17T15:55:07-05:00: TASK OK

I've got about 300 GB of data on my local PBS that needs to sync to the remote. Clearly, I've misconfigured something. I'm not sure if it's the permissions or not.

I have a Local PBS User on my Local PBS with RemoteSyncPushOperator on path /remote.

  • This user only exists to do the push operation, and I have another local admin user that I use to manage the whole server.
  • I know I could theoretically narrow the /remote path down to the actual datastore over there, but I've only got one namespace and one datastore over there, and I'm trying to keep the setup simple while I'm getting it working.
I have a Remote PBS User on my Remote PBS with DatastoreBackup on $PathToDatastoreOnRemotePBS.
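For reference, I believe the ACL entries behind that setup look roughly like this (usernames and realm are placeholders I've made up for redaction; the acl.cfg format follows what I've seen in the docs):
Code:
# local PBS (acl.cfg): the push user may use the configured remotes
acl:1:/remote:pushuser@pbs:RemoteSyncPushOperator

# remote PBS (acl.cfg): the account the remote entry authenticates as
acl:1:$PathToDatastoreOnRemotePBS:backupuser@pbs:DatastoreBackup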

That all seems correct. It's possible I've misunderstood something, but I think the issue is probably in my Sync Job configuration.

Code:
root@andromeda0:/etc/proxmox-backup# batcat sync.cfg
sync: s-d348470b-04e8
    comment Push Local Datastore to Remote Tuxis - Daily 7a - Encrypted Only, Verified Only
    encrypted-only true
    ns Encrypted
    owner $API_TOKEN
    remote $RemoteName
    remote-ns
    remote-store $RemoteStore
    schedule 07:00
    store $LocalStore
    sync-direction push
    verified-only true

Anybody see anything obvious that's missing?
I'm kind of wondering if I misunderstood how to set the permissions/path for the local PBS user.
It connects just fine to the local server, but doesn't find anything to push.

Thanks!
 
I have a Local PBS User on my Local PBS with RemoteSyncPushOperator on path /remote.
  • This user only exists to do the push operation, and I have another local admin user that I use to manage the whole server.
  • I know I could theoretically narrow the /remote path down to the actual datastore over there, but I've only got one namespace and one datastore over there, and I'm trying to keep the setup simple while I'm getting it working.
Does this user also have the permissions to read the source (local) datastore contents you want to sync to the remote? As stated in the docs:
At least Datastore.Read and Datastore.Audit on the local source datastore namespace

So I suggest adding the DatastoreReader role on the source datastore path /datastore/<your-source-datastore> to your sync user. Otherwise it cannot read the source contents.
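If you prefer the CLI, setting that ACL should look something like this (the user name is a placeholder, adjust to your sync user or token):
Code:
proxmox-backup-manager acl update /datastore/<your-source-datastore> DatastoreReader --auth-id syncuser@pbs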
 
Does this user also have the permissions to read the source (local) datastore contents you want to sync to the remote? As stated in the docs:

At least Datastore.Read and Datastore.Audit on the local source datastore namespace

So I suggest adding the DatastoreReader role on the source datastore path /datastore/<your-source-datastore> to your sync user. Otherwise it cannot read the source contents.
Thank you! This was the bit I was missing. My local push user only had the RemoteSyncPushOperator role on /remote.

I'm still seeing the same issue even after adding the new permission.
Is it possible to use the CLI tool proxmox-backup-manager to run the job with a verbose output so I can see what it's trying to read (and finding nothing)?

Documentation Request.
It would be great if your instructions quoted above could be added to the documentation. Like you said, the information is there, but it's a bit confusing if you don't interact a lot with the permissions system (or at least it was for me). I boggled at the docs for quite a while and still didn't quite get it right.
Would a feature request on Bugzilla be helpful to log the request?
 
owner $API_TOKEN
But you are using a token, not a user, for the sync? So you will have to set the permissions on the token. Although I would recommend using a user here instead of a token.

I'm still seeing the same issue even after adding the new permission.
Is it possible to use the CLI tool proxmox-backup-manager to run the job with a verbose output so I can see what it's trying to read (and finding nothing)?
This is not implemented (yet), see https://bugzilla.proxmox.com/show_bug.cgi?id=5934

Would a feature request on Bugzilla be helpful to log the request?
Yes, please do so, thanks!
 
But you are using a token, not a user for the sync? So you will have to set the permissions on the token. Although I would recommend to use a user here instead of a token.
I am using a token. Sorry. I should have posted more info on my exact setup. I needed to figure out how to redact things.

Here's the sync job:
[screenshot: sync job configuration]

Here's what's set up right now for users/API tokens/permissions.
[screenshot: users, API tokens, and permissions]
  • The 1974 user exists only to use the tuxisPushToken for this sync job. The user is Enabled.
  • The 1974 user has a single API token, the tuxisPushToken. It is enabled.
  • The tuxisPushToken has DatastoreReader permissions on the local datastore, and RemoteSyncPushOperator permissions on the "/remote" path (sketched below).
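For reference, I believe the resulting acl.cfg entries for the token look roughly like this (realm and datastore name are placeholders; the token auth-id uses the user!tokenname form):
Code:
acl:1:/datastore/<LocalStore>:1974@pbs!tuxisPushToken:DatastoreReader
acl:1:/remote:1974@pbs!tuxisPushToken:RemoteSyncPushOperator
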
The docs for PBS recommend using API tokens where possible. Why is a Username/Password a better option here?

Thanks for the link. I'll start tracking this one; glad it's being worked on. :)

Yes, please do so, thanks!
I'll file one after I've got this working. I might have better feedback once I understand what's going wrong for me.
 
Maybe best if I show you a concrete example, so you can configure based on that. Given the following configuration on the PBS host which is the source of the push sync job:
Code:
remote.cfg:
-----------
remote: push-test
    auth-id remoteuser@pbs!token
    host <remote>
    password <secret>

sync.cfg:
---------
sync: pushtest
    encrypted-only true
    ns encrypted
    owner sync-user@pbs
    remote push-test
    remote-ns
    remote-store targetstore
    remove-vanished false
    run-on-mount false
    store pushsource
    sync-direction push
    verified-only true

acl.cfg:
--------
acl:1:/datastore/pushsource/encrypted:sync-user@pbs:DatastoreReader
acl:1:/remote:sync-user@pbs:RemoteSyncPushOperator

The remote is configured with an API token, which is set up on the remote PBS instance accordingly and has the permissions to list datastore contents, perform backups and prune contents there. It is set up as an API token so it can be invalidated easily on the remote host without removing the user.

The source side has a namespace encrypted containing snapshots which are both encrypted and verified (as that is what you set as filtering options as well). The sync job is assigned to the local user/owner sync-user@pbs. This user's purpose is solely to control what contents the sync job can see (by setting the permissions on the datastore) and what remote it can use (allowing pushes to all remotes in this example). Since this is NOT the user/token used to talk to the remote (that is the user/token configured in the remote), it makes sense to configure a user here. The ACLs are therefore set accordingly for this user.

With this, I could successfully sync an encrypted and verified snapshot to the remote, which previously did not contain that backup group/snapshot.
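For completeness, the two ACL entries from the acl.cfg above could also be set via the CLI, roughly like this:
Code:
proxmox-backup-manager acl update /datastore/pushsource/encrypted DatastoreReader --auth-id sync-user@pbs
proxmox-backup-manager acl update /remote RemoteSyncPushOperator --auth-id sync-user@pbs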

Hope this helps!
 
Maybe best if I show you a concrete example, so you can configure based on that. Given the following configuration on the PBS host which is the source of the push sync job:
* * *
The remote is configured with an API token, which is set up on the remote PBS instance accordingly and has the permissions to list datastore contents, perform backups and prune contents there. It is set up as an API token so it can be invalidated easily on the remote host without removing the user.

The source side has a namespace encrypted containing snapshots which are both encrypted and verified (as that is what you set as filtering options as well). The sync job is assigned to the local user/owner sync-user@pbs. This user's purpose is solely to control what contents the sync job can see (by setting the permissions on the datastore) and what remote it can use (allowing pushes to all remotes in this example). Since this is NOT the user/token used to talk to the remote (that is the user/token configured in the remote), it makes sense to configure a user here. The ACLs are therefore set accordingly for this user.

Thanks! I'm going to take a closer look at this in the morning. I really appreciate it.

A question about that emphasized part: is there a role that corresponds to all those permissions? Right now, on the remote PBS, my token only has DatastoreBackup as a Role, which only includes Datastore.Backup permission.

Also, I set up prune and garbage collection jobs on the remote PBS server, because I thought I had to.
Are you suggesting that I have my local PBS control the pruning on the remote PBS? Then the remote PBS would just be responsible for garbage collection, right?
 
Thanks! I'm going to take a closer look at this in the morning. I really appreciate it.

A question about that emphasized part: is there a role that corresponds to all those permissions? Right now, on the remote PBS, my token only has DatastoreBackup as a Role, which only includes Datastore.Backup permission.

Also, I set up prune and garbage collection jobs on the remote PBS server, because I thought I had to. Are you suggesting that I have my local PBS control the pruning on the remote PBS? Then the remote PBS would just be responsible for garbage collection, right?
DatastoreAdmin or DatastorePowerUser on the datastore path, see https://pbs.proxmox.com/docs/config/acl/man5.html#roles
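On the remote side that could be set e.g. like this (names taken from the earlier example; adjust to your token and datastore, and quote the auth-id because of the '!'):
Code:
# run on the remote PBS host
proxmox-backup-manager acl update /datastore/targetstore DatastorePowerUser --auth-id 'remoteuser@pbs!token'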

Also, I set up prune and garbage collection jobs on the remote PBS server, because I thought I had to. Are you suggesting that I have my local PBS control the pruning on the remote PBS? Then the remote PBS would just be responsible for garbage collection, right?
Yes, GC has to be done by the remote; the rest depends on what you want to achieve. You can either use remove-vanished to just keep the datastores in sync, or use a dedicated prune setting on the remote.
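If you go with a dedicated prune setting on the remote, a minimal prune job entry there could look roughly like this (the job name and retention are just an example, and the prune.cfg field names are my best guess, so double-check against your version):
Code:
# prune.cfg on the remote PBS host; values are illustrative only
prune: keep-pushed
    keep-daily 14
    schedule daily
    store targetstore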
 
Maybe best if I show you a concrete example, so you can configure based on that. Given the following configuration on the PBS host which is the source of the push sync job:
Code:
remote.cfg:
-----------
remote: push-test
    auth-id remoteuser@pbs!token
    host <remote>
    password <secret>

sync.cfg:
---------
sync: pushtest
    encrypted-only true
    ns encrypted
    owner sync-user@pbs
    remote push-test
    remote-ns
    remote-store targetstore
    remove-vanished false
    run-on-mount false
    store pushsource
    sync-direction push
    verified-only true

acl.cfg:
--------
acl:1:/datastore/pushsource/encrypted:sync-user@pbs:DatastoreReader
acl:1:/remote:sync-user@pbs:RemoteSyncPushOperator

The remote is configured with an API token, which is set up on the remote PBS instance accordingly and has the permissions to list datastore contents, perform backups and prune contents there. It is set up as an API token so it can be invalidated easily on the remote host without removing the user.

The source side has a namespace encrypted containing snapshots which are both encrypted and verified (as that is what you set as filtering options as well). The sync job is assigned to the local user/owner sync-user@pbs. This user's purpose is solely to control what contents the sync job can see (by setting the permissions on the datastore) and what remote it can use (allowing pushes to all remotes in this example). Since this is NOT the user/token used to talk to the remote (that is the user/token configured in the remote), it makes sense to configure a user here. The ACLs are therefore set accordingly for this user.

With this, I could successfully sync an encrypted and verified snapshot to the remote, which previously did not contain that backup group/snapshot.

Hope this helps!

I finally got a chance to come back to this and just re-adjusted my settings. I just ran the sync job, and I'm sitting here watching it work. Thanks so much.
(I assume it's working; it's just sitting here saying "Found X groups to sync...". But it's only been running for two minutes, and that's probably normal. I've never actually done one of these.)
...And there we go. Progress readouts! (Though I'm used to the task viewer showing me network throughput. This is definitely different.)

Thank you again. So much.

Having gotten this working, I can see where I got tripped up before. These might be areas where it would be useful to consider adjusting the documentation. (You told me all this above, but it wasn't clear just from looking at the docs, at least to my lizard brain. ;) )
  1. Local Server
    1. Tokens should not be used on the local server to access the local server. (Giving this warning explicitly would have cleared up a lot of confusion and error for me.)
    2. Permissions should be set for a dedicated sync user. I will probably refine this later, but I have two permissions (two rows) set: DatastoreReader on the local datastore, and RemoteSyncPushOperator on the /remote path. I really need to learn how to do custom roles/individual permissions.
  2. Remote Server
    1. An API Token should be created to receive data from the local PBS.
    2. For the moment, I have that token configured only with the DatastoreBackup role. So, if I understand correctly, the local PBS can't trigger prune jobs on the remote PBS?
    3. GC and prune jobs are configured on the remote server. One thing I'm still a bit confused about: I would not have understood that the sync user on the local PBS can trigger prunes on the remote server unless you mentioned it. From the way the UIs look, I assumed each server managed prunes independently. Is there a way to tell whether the local sync source is configured to control remote pruning or not? Right now, I'm assuming that if there are no prune rules set on the remote, the local PBS's prune rules control by default (because the local host gets pruned and those changes get synced). It's not really clear to me what happens automatically when the local and remote nodes have two separate prune schedules.
I can still file a bug report/enhancement request for the docs if you think that would still be helpful. :)