Proxmox Backup Server 4.0 BETA released!

Is there any plan to support S3 as a target for push sync jobs?
That works already, if we mean the same thing.

Might be just a tiny bit confusing because the S3 datastore counts as a "local" one, as it is locally managed. So you do not use a remote sync but a local sync job type; the data can still be "pushed" to S3 that way, though.
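For example, on the CLI such a job could look roughly like the sketch below; the job ID and datastore names are placeholders and the exact options may differ on your version, so check proxmox-backup-manager sync-job create --help:

Bash:
# sketch: sync from the regular datastore 'local-ssd-zfs' into the S3-backed
# datastore 'backblaze-b2'; omitting --remote makes this a local sync job
root@pbs:~# proxmox-backup-manager sync-job create push-to-s3 \
    --store backblaze-b2 \
    --remote-store local-ssd-zfs \
    --schedule daily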
 
As I said, one more problem is that if you are hosting your own MinIO or Garage or whatever, there isn't a region.
Most devs that reviewed and tested this feature used MinIO and Ceph RGW; there you can normally skip setting the region, it's optional in the UI.
If something doesn't work for you with either one of those then please open a new thread and we can look into it.
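For a quick sanity check against such a self-hosted endpoint, the CLI check works without any region configured as well (endpoint ID and bucket name here are just placeholders):

Bash:
# verifies that the configured S3 endpoint entry can write to the bucket
root@pbs:~# proxmox-backup-manager s3 check my-minio my-bucket --store-prefix /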
 
Oh! I see... Just like any normal local storage... I was a bit confused. Thanks for the clarification.
 
Having an issue getting Backblaze B2 set up as an S3 Datastore.

After creating the S3 Endpoint, the following test appears to be successful (I see the .s3-client-test file in the root of the bucket):

Bash:
root@pbs3:~# proxmox-backup-manager s3 check backblaze-b2 [bucket-name] --store-prefix /

But when adding the S3 Datastore, I get the following error:

Code:
2025-07-26T12:11:51-05:00: Chunkstore create: 1%
2025-07-26T12:11:51-05:00: Chunkstore create: 2%
2025-07-26T12:11:51-05:00: Chunkstore create: 3%
[...snip...]
2025-07-26T12:11:53-05:00: Chunkstore create: 97%
2025-07-26T12:11:54-05:00: Chunkstore create: 98%
2025-07-26T12:11:54-05:00: Chunkstore create: 99%
2025-07-26T12:11:54-05:00: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>NotImplemented</Code>
    <Message>A header you provided implies functionality that is not implemented</Message>
</Error>

2025-07-26T12:11:54-05:00: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>NotImplemented</Code>
    <Message>A header you provided implies functionality that is not implemented</Message>
</Error>

2025-07-26T12:11:54-05:00: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>NotImplemented</Code>
    <Message>A header you provided implies functionality that is not implemented</Message>
</Error>

2025-07-26T12:11:54-05:00: TASK ERROR: access time safety check failed: failed to upload chunk to s3 backend: chunk upload failed: unexpected status code 501 Not Implemented

From what I understand, the datastore name is used as a prefix for objects. I checked the bucket after the above failure and can see that it was able to create the backblaze-b2/.in-use object. I also ran an endpoint test with the backblaze-b2 prefix, and it successfully created the backblaze-b2/.s3-client-test object.

Bash:
root@pbs3:~# proxmox-backup-manager s3 check backblaze-b2 [bucket-name] --store-prefix /backblaze-b2

The access time safety check failure and the accompanying 501 Not Implemented error returned by B2 seem to indicate that PBS may depend on an S3 feature that is not implemented in B2. I skimmed the B2 docs and didn't see anything in their list of unsupported features that might be related to the above error. Hoping to get some insight into what might be the problem.
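In case it helps with narrowing this down, I might try replaying similar uploads against the B2 S3 endpoint with the AWS CLI and enabling the optional request features one at a time, to see which one triggers the 501 (endpoint URL, bucket and keys below are placeholders):

Bash:
# plain upload - should succeed if basic PutObject works
aws s3api put-object --endpoint-url https://s3.<region>.backblazeb2.com \
    --bucket my-bucket --key debug/plain --body /etc/hostname

# same upload with an additional SHA256 checksum - a 501 NotImplemented here
# would point at the additional checksum feature as the culprit
aws s3api put-object --endpoint-url https://s3.<region>.backblazeb2.com \
    --bucket my-bucket --key debug/checksum --body /etc/hostname \
    --checksum-algorithm SHA256

# and once more with object tagging, another feature not every S3
# implementation supports
aws s3api put-object --endpoint-url https://s3.<region>.backblazeb2.com \
    --bucket my-bucket --key debug/tagging --body /etc/hostname \
    --tagging 'origin=pbs-debug'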

Cheers
 
I just successfully added the S3 Datastore using the CLI with --tuning 'gc-atime-safety-check=false', but I'm not sure that is safe for S3 Datastores. If it is safe, it might be helpful to expose that option in the UI.
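For reference, this is roughly how that tuning option can be applied to an existing datastore from the CLI; I'm not sure the update variant is the right way to toggle it after creation, so treat it as a sketch:

Bash:
# disable the access time safety check during garbage collection for this datastore
# (whether this is actually safe on an S3 backend is the open question above)
root@pbs3:~# proxmox-backup-manager datastore update backblaze-b2 \
    --tuning 'gc-atime-safety-check=false'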
 
Spoke too soon. It doesn't actually work. Here are the logs from an attempt to sync to S3.

Code:
2025-07-26T14:40:16-05:00: Starting datastore sync job '-:local-ssd-zfs:backblaze-b2::s-1ca14af2-37a8'
2025-07-26T14:40:16-05:00: sync datastore 'backblaze-b2' from 'local-ssd-zfs'
2025-07-26T14:40:16-05:00: ----
2025-07-26T14:40:16-05:00: Syncing datastore 'local-ssd-zfs', root namespace into datastore 'backblaze-b2', root namespace
2025-07-26T14:40:16-05:00: found 23 groups to sync (out of 23 total)
2025-07-26T14:40:17-05:00: sync snapshot ct/200/2024-12-29T06:00:00Z
2025-07-26T14:40:17-05:00: sync archive pct.conf.blob
2025-07-26T14:40:17-05:00: sync archive root.pxar.didx
2025-07-26T14:40:21-05:00: removing backup snapshot "/var/cache/proxmox-s3/backblaze-b2/ct/200/2024-12-29T06:00:00Z"
2025-07-26T14:40:21-05:00: percentage done: 0.23% (0/23 groups, 1/19 snapshots in group #1)
2025-07-26T14:40:21-05:00: sync group ct/200 failed - failed to upload chunk to s3 backend
[...snip...]
2025-07-26T14:41:56-05:00: Finished syncing root namespace, current progress: 22 groups, 1 snapshots
2025-07-26T14:41:56-05:00: queued notification (id=08315113-a41d-4b9c-91cb-b9a4a2554153)
2025-07-26T14:41:56-05:00: TASK ERROR: sync failed with some errors.