Proxmox Backup Server 4.0 BETA released!

t.lamprecht

Proxmox Staff Member
We are pleased to announce the first beta release of Proxmox Backup Server 4.0! The 4.x family is based on the great Debian 13 "Trixie" and comes with a 6.14.8 kernel and OpenZFS 2.3.3.

Note: The current release of Proxmox Backup Server 4.0 is a beta version.

Here are some of the highlights of the Proxmox Backup Server 4.0 beta version:
  • S3-compatible object stores as backup storage backend (technology preview).
  • ZFS 2.3 with RAID-Z expansion (see the example sketch after this list).
  • Automatically trigger sync jobs when a removable datastore is mounted.
  • and more...
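
For instance, the RAID-Z expansion feature in OpenZFS 2.3 lets you attach an additional disk to an existing raidz vdev and grow it in place. A minimal CLI sketch, assuming a hypothetical pool "tank" with a raidz1 vdev and a spare disk /dev/sdd:

Code:
# attach a new disk to an existing raidz1 vdev (pool, vdev and device names are examples)
zpool attach tank raidz1-0 /dev/sdd
# the expansion runs in the background; progress shows up in the pool status
zpool status tank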

Release notes
https://pbs.proxmox.com/wiki/index.php/Roadmap

Download
https://enterprise.proxmox.com/iso

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

FAQ
Q: Is the Proxmox Backup Server 4.0 beta safe to use?
A: You should never upgrade critical production systems to the beta; it is not a stable release and does not receive security support.
While we try hard to avoid severe bugs, some instability is expected.

Q: Can I upgrade from the latest Proxmox Backup Server 3.4 to the 4.0 beta with apt?
A: Yes, please follow the upgrade instructions at https://pbs.proxmox.com/wiki/index.php/Upgrade_from_3_to_4
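The wiki article is the authoritative reference; in broad strokes (a hedged sketch, assuming a default setup with list-style apt sources), the upgrade boils down to switching the repositories from bookworm to trixie and dist-upgrading:

Code:
# point Debian and Proxmox repositories at trixie (adjust if you use deb822 .sources files)
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
# refresh package lists and run the distribution upgrade
apt update && apt dist-upgrade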

Q: Can I upgrade a 4.0 beta installation to the stable 4.0, once released, via apt?
A: Yes, upgrading from the beta to the stable release will be possible via apt.

Q: Which apt repository can I use for Proxmox Backup Server 4.0 beta?
A: You can use the pbs-test repository:

Code:
deb http://download.proxmox.com/debian/pbs trixie pbs-test
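
For example, you could drop that line into a dedicated sources file and refresh the package lists (the file name is just a suggestion):

Code:
echo "deb http://download.proxmox.com/debian/pbs trixie pbs-test" > /etc/apt/sources.list.d/pbs-test.list
apt update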

Q: Will the beta receive updates and new features?
A: Yes, the beta will continuously receive updates including the latest fixes and features.

Q: When do you expect the stable Proxmox Backup Server 4.0 release?
A: The final Proxmox Backup Server 4.0 will be available as soon as all release-critical bugs are fixed and new features are deemed stable.

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.

You are welcome to test your hardware and your upgrade path, and we are looking forward to your feedback, bug reports, or ideas. Thank you for getting involved!
 
I'm trying to add Wasabi for testing, but for now I get an error 400.
Maybe you should add a test button for S3 endpoints?
There are some ideas like that floating around, and the UX can be polished a bit. Often it's the region that's missing or wrong, so ensure you set that correctly in the endpoint. Can you please post your configuration (naturally without the access/secret key!)?

For a quick check you can also use the CLI tool, e.g.:
proxmox-backup-manager s3 check <s3-endpoint-id> <bucket>

You can also edit the simple config format in /etc/proxmox-backup/s3.cfg directly.
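
A minimal entry in that file could look roughly like the following (endpoint ID, host and region are placeholders; a complete example appears further down in this thread):

Code:
s3-endpoint: my-s3-endpoint
    access-key XXXXXXXXXXXXXXXXXXXX
    endpoint s3.example.com
    region us-east-1
    secret-key XXXXXXXXXXXXXXXXXXXX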
 
Usually I test with the mc command, e.g. mc alias:
Code:
wasabi 
  URL       : https://ca-central-1.wasabisys.com
  AccessKey :
  SecretKey :
  API       : s3v4
  Path      : auto
  Src       : /root/.mc/config.json
And this works.
 
To add to this, I'm trying with Hetzner S3 storage, and the region "hel1" is not accepted by any of the inputs (or when manually editing the config).

Code:
root@pbs:~# proxmox-backup-manager s3 check hetzner 6892495451 --store-prefix /
<Error><Code>LocationConstraintConflict</Code><Message>The location constraint differs from the location you are trying to access. To avoid this error, please ensure the region parameter of your client matches the Hetzner location you are trying to access. For details, see https://docs.hetzner.com/cloud/general/locations/#what-locations-are-there</Message><Resource>.s3-client-test</Resource><RequestId>d8e2dd59640960190add1b100c93db1b</RequestId></Error>
Error: put object failed

Caused by:
    invalid request
 
Another test, with Ceph S3:
failed to upload in-use marker for datastore: unexpected status code 404 Not Found

Code:
ceph   
  URL       : https://storage.losproxmox.com:8089
  AccessKey :
  SecretKey :
  API       : s3v4
  Path      : auto
  Src       : /root/.mc/config.json
 
Trying the first sync with my own Ceph object storage cluster.
I can confirm that it doesn't accept "lt01" as a region (not important now), and I would like to know how much space is needed for the cache partition, because on the datastore summary I see the cache's available space and not the S3 available space.

PS: many thanks to the Proxmox staff, this is a much-awaited feature
 
[pbs-devel] applied: [PATCH] api types: make region regex less strict
:)
 
To add to this, I'm trying with Hetzner S3 storage, and the region "hel1" is not accepted by any of the inputs (or when manually editing the config).

Code:
root@pbs:~# proxmox-backup-manager s3 check hetzner 6892495451 --store-prefix /
<Error><Code>LocationConstraintConflict</Code><Message>The location constraint differs from the location you are trying to access. To avoid this error, please ensure the region parameter of your client matches the Hetzner location you are trying to access. For details, see https://docs.hetzner.com/cloud/general/locations/#what-locations-are-there</Message><Resource>.s3-client-test</Resource><RequestId>d8e2dd59640960190add1b100c93db1b</RequestId></Error>
Error: put object failed

Caused by:
    invalid request
I checked this out more closely and, as @f.cuseo already noticed, the region regex was overly strict. I made it less strict, and with that I could add a Hetzner object storage bucket using these settings:

[Screenshot: S3 endpoint settings in the Proxmox Backup Server web UI]

I.e., make sure to set the region, and if you configure the endpoint that way, tick the checkbox for path-style access (the bucket is included in the path, not as a subdomain).

As the examples from the Hetzner docs with curl use both access styles, I also tested switching to the subdomain one.
So if you untick the path-style checkbox, you can also use the following endpoint URL verbatim ({{bucket}} and {{region}} will be replaced automatically): {{bucket}}.{{region}}.your-objectstorage.com
For example:
[Screenshot: endpoint configured with the subdomain-style URL]
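
Expressed as an /etc/proxmox-backup/s3.cfg entry, that subdomain-style setup could look roughly like this (the endpoint ID is just an example and the keys are redacted):

Code:
s3-endpoint: hetzner
    access-key XXXXXXXXXXXXXXXXXXXX
    endpoint {{bucket}}.{{region}}.your-objectstorage.com
    region hel1
    secret-key XXXXXXXXXXXXXXXXXXXX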

Anyhow, the aforementioned fix is available with proxmox-backup-server version 4.0.7-1, which was just uploaded to the pbs-test repository.
 
Another test, with Ceph S3:
failed to upload in-use marker for datastore: unexpected status code 404 Not Found

Code:
ceph
  URL       : https://storage.losproxmox.com:8089
  AccessKey :
  SecretKey :
  API       : s3v4
  Path      : auto
  Src       : /root/.mc/config.json
The documentation shipped alongside your PBS 4 installation has a "Datastores with S3 Backend" section in the "Backup Storage" chapter; there is an example for how one could configure a Ceph RGW backed S3 endpoint, as this was one of the first S3 test targets the main author used, IIRC. The most relevant info from the docs boils down to these config example excerpts:

Code:
# cat /etc/proxmox-backup/s3.cfg
s3-endpoint: ceph-s3-rados-gw
     access-key XXXXXXXXXXXXXXXXXXXX
     endpoint 172.16.0.200
     fingerprint XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX
     path-style true
     port 7480
     secret-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

# cat /etc/proxmox-backup/datastore.cfg
datastore: ceph-s3-rgw-store
     backend bucket=pbs-ceph-bucket,client=ceph-s3-rados-gw,type=s3
     path /mnt/datastore/ceph-s3-rgw-store-local-cache

Ensure you pass a certificate fingerprint so that TLS certificate validation can work if you use a self-signed certificate and did not place the CA into your system trust store.
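
One way to obtain such a fingerprint (a sketch, reusing the host and port from the example above) is via openssl:

Code:
# print the SHA-256 fingerprint of the gateway's TLS certificate (host/port from the example above)
openssl s_client -connect 172.16.0.200:7480 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256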
 
Hi,
Code:
s3-endpoint: Wasabi
    access-key
    endpoint proxmox.ca-central-1.wasabisys.com
    secret-key
Is proxmox here your bucket name? If so, does this DNS name resolve to the endpoint (e.g. check via dig proxmox.ca-central-1.wasabisys.com)?

Please try to use a path-style URL if the vhost-style addressing does not work, by not providing the bucket name as part of the domain name and checking the corresponding flag in the endpoint edit window. A quick online search shows a similar setup with path-style URLs for Wasabi, see https://docs.wasabi.com/v1/docs/how-do-i-use-qumulo-with-wasabi at point no. 4.

Further, you should specify the region, as the region is part of the AWS Signature Version 4 request signing scheme used to authenticate requests to the S3 API.
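
Put together, a path-style variant could look roughly like this in /etc/proxmox-backup/s3.cfg (the s3.ca-central-1.wasabisys.com host is an assumption based on the regional URL above; keys redacted):

Code:
s3-endpoint: Wasabi
    access-key XXXXXXXXXXXXXXXXXXXX
    endpoint s3.ca-central-1.wasabisys.com
    path-style true
    region ca-central-1
    secret-key XXXXXXXXXXXXXXXXXXXX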
 
Another test, with Ceph S3:
failed to upload in-use marker for datastore: unexpected status code 404 Not Found

Code:
ceph  
  URL       : https://storage.losproxmox.com:8089
  AccessKey :
  SecretKey :
  API       : s3v4
  Path      : auto
  Src       : /root/.mc/config.json
Similar to what @t.lamprecht already suggested, please do try to use proxmox-backup-manager s3 check <endpoint-id> <bucket> --store-prefix test0 (the store-prefix can be chosen by yourself; it is normally the datastore name).

If that returns with an error, please share the exact error message so we can help.

From the 404 error I would suspect that the bucket cannot be found.

Also, please share the Proxmox related configs as well, not just the mc one, so we can see what your input parameters were and try to reproduce your issue, thanks!
 
Hello,

Just tried with Scaleway S3 ( https://www.scaleway.com/fr/object-storage/ ).
Looks to work perfectly.

If I could change one thing, it would be that the “Region” field should be linked to the Datastore rather than the endpoint, while keeping the region as a variable in the endpoint.
Thanks for letting us know! Positive feedback is also very much appreciated, as it helps us understand what has been tried/tested and what works best with which providers and configuration settings.
 
Trying the first sync with my own Ceph object storage cluster.
I can confirm that it doesn't accept "lt01" as a region (not important now), and I would like to know how much space is needed for the cache partition, because on the datastore summary I see the cache's available space and not the S3 available space.

PS: many thanks to the Proxmox staff, this is a much-awaited feature
The more you can provide, the better of course, but typically a smaller SSD should be plenty. E.g., I used a ZFS dataset with about 64 GB of usable storage space for my testing.

Edit: Note that you cannot operate an S3 store without a cache, however, as the implementation relies on having some temporary space to write data to before persisting it to S3.
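
For example, one way to provision such a bounded cache (a sketch; the pool and dataset names are hypothetical and the 64G quota just mirrors the figure above) would be a dedicated ZFS dataset whose mountpoint is then used as the datastore path:

Code:
# create a dedicated dataset with a 64G quota to serve as the local S3 cache
zfs create -o quota=64G -o mountpoint=/mnt/datastore/s3-cache rpool/s3-cache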
 
Okay, finally got it:
s3.cfg
Code:
s3-endpoint: Wasabi
    access-key
    endpoint {{bucket}}.s3.{{region}}.wasabisys.com
    region ca-central-1
    secret-key
datastore.cfg
Code:
datastore: Wasabi
    backend bucket=proxmox,client=Wasabi,type=s3
    comment
    gc-schedule daily
    notification-mode notification-system
    path /cache
 
Great! Yes, as the region is part of the AWS Signature Version 4 scheme used for request authentication, it must be set in order for the requests to be validated by the API provider. Otherwise you might get such a permission error in the response.
 