Proxmox Backup Server 4.0 BETA released!

Is there any plan to include S3 support for push sync jobs?
That works already, if we mean the same thing.

Might be just a tiny bit confusing because the S3 datastore counts as a "local" one, as it's locally managed. So you do not use a remote sync but a local sync job type; the data can still be "pushed" to S3 that way, though.
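
As a rough sketch of how this could look on the CLI (the exact flag usage here is an assumption inferred from the job ID format `-:source:target` that shows up in task logs; the datastore names are placeholders):

Bash:
# Hypothetical example (names are placeholders): a local sync job that
# copies the contents of "my-local-store" into "my-s3-store", a datastore
# configured with an S3 backend. Omitting --remote makes this a local job.
proxmox-backup-manager sync-job create local-to-s3 \
    --store my-s3-store \
    --remote-store my-local-store \
    --schedule daily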
 
As I said, one more problem is that if you are hosting your own MinIO or Garage or whatever, there isn't a region.
Most devs that reviewed and tested this feature used MinIO and Ceph RGW; there you can normally skip setting the region, as it's optional in the UI.
If something doesn't work for you with either one of those then please open a new thread and we can look into it.
 
That works already, if we mean the same thing. [...]
Oh! I see... Just like any normal local storage... I was a bit confused... Thanks for the clarification.
 
Having an issue getting Backblaze B2 set up as an S3 Datastore.

After creating the S3 Endpoint, the following test appears to be successful (I see the .s3-client-test file in the root of the bucket):

Bash:
root@pbs3:~# proxmox-backup-manager s3 check backblaze-b2 [bucket-name] --store-prefix /

But when adding the S3 Datastore, I get the following error:

Code:
2025-07-26T12:11:51-05:00: Chunkstore create: 1%
2025-07-26T12:11:51-05:00: Chunkstore create: 2%
2025-07-26T12:11:51-05:00: Chunkstore create: 3%
[...snip...]
2025-07-26T12:11:53-05:00: Chunkstore create: 97%
2025-07-26T12:11:54-05:00: Chunkstore create: 98%
2025-07-26T12:11:54-05:00: Chunkstore create: 99%
2025-07-26T12:11:54-05:00: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>NotImplemented</Code>
    <Message>A header you provided implies functionality that is not implemented</Message>
</Error>

2025-07-26T12:11:54-05:00: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>NotImplemented</Code>
    <Message>A header you provided implies functionality that is not implemented</Message>
</Error>

2025-07-26T12:11:54-05:00: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>NotImplemented</Code>
    <Message>A header you provided implies functionality that is not implemented</Message>
</Error>

2025-07-26T12:11:54-05:00: TASK ERROR: access time safety check failed: failed to upload chunk to s3 backend: chunk upload failed: unexpected status code 501 Not Implemented

From what I understand, the datastore name is used as a prefix for objects. I checked the bucket after the above failure, and do see that it was able to create the backblaze-b2/.in-use object. I also did an endpoint test with the backblaze-b2 prefix, and it successfully created the backblaze-b2/.s3-client-test object.

Bash:
root@pbs3:~# proxmox-backup-manager s3 check backblaze-b2 [bucket-name] --store-prefix /backblaze-b2
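
For reference, the resulting object layout can be inspected with any S3-compatible client, for example (a hypothetical illustration; bucket name, endpoint URL, and region are placeholders):

Bash:
# List the objects under the datastore prefix; the endpoint URL must match
# the region of your B2 bucket.
aws s3 ls s3://my-bucket/backblaze-b2/ \
    --endpoint-url https://s3.us-west-004.backblazeb2.com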

The access time safety check failure and the accompanying 501 Not Implemented error returned by B2 seem to indicate PBS may depend on an S3 feature that is not implemented in B2. I skimmed the B2 docs and didn't see anything in their list of unsupported features that might be related to the above error. Hoping to get some insight on what might be the problem.

Cheers
 
I just successfully added the S3 Datastore using the CLI with --tuning 'gc-atime-safety-check=false', but I'm not sure that is safe for S3 Datastores. If it is safe, it might be helpful to expose that option in the UI.
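
For reference, a sketch of the command shape, assuming the tuning option can also be changed on an existing datastore via `datastore update` (the datastore name is a placeholder):

Bash:
# Hypothetical example: disable the atime safety check for garbage
# collection on an existing datastore. Only do this if you are sure the
# backend handles access times safely.
proxmox-backup-manager datastore update my-s3-store \
    --tuning 'gc-atime-safety-check=false'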
 
Spoke too soon. It doesn't actually work. Here are the logs from an attempt to sync to S3.

Code:
2025-07-26T14:40:16-05:00: Starting datastore sync job '-:local-ssd-zfs:backblaze-b2::s-1ca14af2-37a8'
2025-07-26T14:40:16-05:00: sync datastore 'backblaze-b2' from 'local-ssd-zfs'
2025-07-26T14:40:16-05:00: ----
2025-07-26T14:40:16-05:00: Syncing datastore 'local-ssd-zfs', root namespace into datastore 'backblaze-b2', root namespace
2025-07-26T14:40:16-05:00: found 23 groups to sync (out of 23 total)
2025-07-26T14:40:17-05:00: sync snapshot ct/200/2024-12-29T06:00:00Z
2025-07-26T14:40:17-05:00: sync archive pct.conf.blob
2025-07-26T14:40:17-05:00: sync archive root.pxar.didx
2025-07-26T14:40:21-05:00: removing backup snapshot "/var/cache/proxmox-s3/backblaze-b2/ct/200/2024-12-29T06:00:00Z"
2025-07-26T14:40:21-05:00: percentage done: 0.23% (0/23 groups, 1/19 snapshots in group #1)
2025-07-26T14:40:21-05:00: sync group ct/200 failed - failed to upload chunk to s3 backend
[...snip...]
2025-07-26T14:41:56-05:00: Finished syncing root namespace, current progress: 22 groups, 1 snapshots
2025-07-26T14:41:56-05:00: queued notification (id=08315113-a41d-4b9c-91cb-b9a4a2554153)
2025-07-26T14:41:56-05:00: TASK ERROR: sync failed with some errors.
 
Having an issue getting Backblaze B2 set up as an S3 Datastore. [...]
Thanks for the report and for providing the informative output. Will investigate how to correctly support the Backblaze S3 API as well.
 
Congratulations on the beta release! The new S3-backed storage is really exciting. I think this is the first officially supported network-based storage type, correct? I know it's been possible to configure iSCSI-, SMB-, and NFS-based storages via the command line and use those, but they're not officially supported.

I did have a question about storage performance on S3. I've got a TrueNAS server with a 4x mirror-based HDD ZFS pool (4 vdevs with 2x 7200 RPM HDDs each) for mass storage, and I'd like to use that as a target for backups. I'd previously planned to set that up with NFS or iSCSI.

In general, is there an expected performance advantage to using the S3-based storage type versus NFS or iSCSI?
 
I think this is the first officially supported network-based storage type, correct? [...]
Yes, although network-attached storage could already be used as a datastore backend since the beginning; that is just not managed by PBS, and one has to be aware of the limitations.

[...] In general, is there an expected performance advantage to using the S3-based storage type versus NFS or iSCSI?
No, in general your S3 object store will be outperformed by these solutions. The performance you can expect will greatly depend on your S3 object store provider.
 
Having an issue getting Backblaze B2 set up as an S3 Datastore. [...]
Status update on this one: I was able to reproduce and identify the issue. The Backblaze B2 implementation of the S3 API seems to not support the If-None-Match HTTP header, which is why the requests fail with the above error. Patches to fix this have been implemented [0]; with these I was able to operate just fine on a Backblaze B2 bucket, so this will be fixed in an upcoming version.

Nevertheless, may I ask you to also report this issue to Backblaze, asking them to support the If-None-Match header in PutObject requests as well. The more interest they see, the more likely this will be implemented, improving their S3 API compatibility.

[0] https://lore.proxmox.com/pbs-devel/20250728100154.556438-1-c.ebner@proxmox.com/T/
 
@Chris @ness1602:

Thanks for the info on the relative performance of S3/SMB/NFS.

Is in-GUI support for NFS/SMB storage targets on the roadmap?
Also, you mentioned "limitations" for NAS-backed targets. Is there an existing article/thread I could read on that? I do expect a performance hit versus writing to a local ZFS pool on the PBS server.
 
Status update on this one: I was able to reproduce and identify the issue. The Backblaze B2 implementation of the S3 API seems to not support the If-None-Match HTTP header [...]
What are the implications of omitting the `if-none-match` header? If I'm understanding its purpose correctly, the client would re-upload the object regardless of whether the local and remote objects are identical?
 
Is in-GUI support for NFS/SMB storage targets on the roadmap?
Also, you mentioned "limitations" for NAS-backed targets. Is there an existing article/thread I could read on that? I do expect a performance hit versus writing to a local ZFS pool on the PBS server.
No, for local datastores the recommended setup is still as described in the docs, see https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements

With limitations I was referring to the additional points of failure (additional hardware, network, etc. all need to be working to restore your backups), increased latency, and typically worse performance.
 
What are the implications of omitting the `if-none-match` header? If I'm understanding its purpose correctly, the client would re-upload the object regardless of whether the local and remote objects are identical?
Yes, the If-None-Match header is used to skip re-upload of pre-existing objects with the same object key. So this can bring some performance improvements. For details see https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_RequestParameters
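
As a sketch of the semantics (assuming an AWS CLI version recent enough to expose S3 conditional writes; bucket and key names are placeholders):

Bash:
# Hypothetical example: a conditional PUT that only succeeds if the object
# key does not exist yet. If it already exists, S3 answers with
# "412 Precondition Failed" and the client can skip the re-upload.
aws s3api put-object \
    --bucket my-bucket \
    --key mystore/.chunks/example-chunk \
    --body chunk.bin \
    --if-none-match '*'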
 
No, for local datastores the recommended setup is still as described in the docs, see https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements

With limitations I was referring to the additional points of failure (additional hardware, network, etc. all need to be working to restore your backups), increased latency, and typically worse performance.
Thanks for the clarification. :)

My PBS server has a local SATA SSD mirror for local backup storage that performs well enough for my current needs. I'm trying to figure out the best way to duplicate that data onto my NAS so I have a second copy of the backup.

I think I'm going to start with iSCSI, but experiment with NFS too.
I'm sure one is more performant than the other, but I can't do more than 2.5 Gbps on the PBS server, so I'm not sure it matters.
 
I successfully installed the beta today and mounted an S3 bucket from Wasabi.
Unfortunately, the storage size is displayed as only 398.11 GB, which is the same size as the internal storage of the second datastore.
Hi,
thanks for the feedback. Yes, currently the datastore summary only shows statistics related to the local datastore cache. A dedicated dashboard for the S3 backend is planned and tracked at https://bugzilla.proxmox.com/show_bug.cgi?id=6563, but not implemented yet. In the meantime I will see about extending the current summary to make this clearer, thanks!