Bad Request (400) failed to list buckets on S3 endpoint (PBS 4.0.14)

andrei1015

New Member
Apr 26, 2025
Hello! (Also sorry, pretty new here and to Proxmox in general.)

I have been trying to set up an S3 endpoint to try out the new functionality since the beta, but was never able to make it work. The access token I am using has full control over AWS S3, but when I try to create a datastore using the S3 endpoint I created, I just get the error from the title.
[screenshot attachment]

None of these work. I have even tried an admin account...
 
I'm getting this same problem. But as I described in my post (https://forum.proxmox.com/threads/is-there-a-way-to-debbug-s3-post-request-to-endpoint.169640/), when I list the buckets using the AWS CLI and then, almost simultaneously, list them using Proxmox, it works for a brief moment.

Could you test to see if our problems are related?
Just install the AWS CLI (it can be on another computer), set it up with your API key (the same one used by PBS), and list the buckets using the AWS CLI. While you're listing with the AWS CLI, try to check on PBS by running
Code:
proxmox-backup-manager s3 endpoint list-buckets <my-endpoint-id>

Also, check that your endpoint is correct. According to the PBS docs, for AWS it should be {{bucket}}.s3.{{region}}.amazonaws.com
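
For reference, a minimal version of that cross-check could look like this (the endpoint ID is the same placeholder as above; aws configure will prompt for the key pair interactively):
Code:
# on any machine, configure the AWS CLI with the same access key/secret key PBS uses
aws configure
# list the buckets visible to that key
aws s3 ls
# then, on the PBS host, compare with the PBS-side listing
proxmox-backup-manager s3 endpoint list-buckets <my-endpoint-id>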
 
I tried, but no, it still didn't work. I also checked the endpoint, and I followed the docs.
My AWS CLI lists the buckets just fine; the proxmox-backup-manager command doesn't.
 
I'm having the same issue with a Backblaze B2 datastore. It was working for about a week; now I only get 400 errors.
 
Same issue here; I would love to get my PBS backed up to Backblaze. This is a private, immutable, encrypted bucket. The application key has R/W access to the bucket. I also tried the master key, but got the same result. PBS v4.0.14.

Output of the proxmox list-buckets command:
Code:
root@pbs:~# proxmox-backup-manager s3 endpoint list-buckets Backblaze
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>SignatureDoesNotMatch</Code>
    <Message>Signature validation failed</Message>
</Error>

Error: failed to list buckets

Caused by:
    unexpected status code 403 Forbidden

Here are some screenshots from the PBS UI:
[screenshot attachments]
 
I have Backblaze S3 "mostly" working. I can list buckets and do backups, but a decent percentage of the verification jobs fail.
 
Hi,
Hello! (Also sorry, pretty new here and to Proxmox in general.)

I have been trying to set up an S3 endpoint to try out the new functionality since the beta, but was never able to make it work. The access token I am using has full control over AWS S3, but when I try to create a datastore using the S3 endpoint I created, I just get the error from the title.
[screenshot attachment]

None of these work. I have even tried an admin account...
Why are you using port 80? The PBS S3 client does not support plain HTTP; HTTPS is required for communication. Also, please use the template patterns as described in the docs (https://pbs.proxmox.com/docs/storage.html#s3-datastore-backend-configuration-examples). The bucket is defined on the datastore, so it is not directly part of the endpoint configuration, and the region is part of the request signature, so it is best configured by setting it in the corresponding field, which is then used in the endpoint URL template.

Also, path-style addressing is not supported by all AWS regions. There were even efforts by AWS to get rid of it completely, but it was kept for older regions for compatibility. So for AWS S3, the default vhost-style bucket addressing is recommended.
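
As an illustration, an AWS endpoint entry following that template pattern might look like the following (the endpoint ID, region and keys here are placeholders, not values from this thread):
Code:
s3-endpoint: aws-s3
    access-key XXXXXXXXXXXXXXXXXXXX
    endpoint {{bucket}}.s3.{{region}}.amazonaws.com
    region eu-central-1
    secret-key XXXXXXXXXXXXXXXXXXXX

With vhost-style addressing the bucket name becomes part of the hostname; with path-style addressing it is put in the request path instead.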
 
I have Backblaze S3 "mostly" working. I can list buckets and do backups, but a decent percentage of the verification jobs fail.
The signature verification error indicates that the region, which is part of the request signature, is not set correctly. The following endpoint configuration was used on my end for testing:
Code:
s3-endpoint: backblaze-s3
    access-key XXXXXXXXXXXXXXXXXXXXXXXXX
    endpoint {{bucket}}.s3.{{region}}.backblazeb2.com
    provider-quirks skip-if-none-match-header
    region eu-central-003
    secret-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Please note that the Backblaze API does not support conditional uploads, so the provider quirks must be set accordingly.
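
With an entry like that in place, the bucket listing used earlier in the thread is a quick way to confirm the signature is accepted (assuming the endpoint ID backblaze-s3 from the config above):
Code:
proxmox-backup-manager s3 endpoint list-buckets backblaze-s3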
 
I wrote a blog post on how I configured Backblaze B2 and got it working as a datastore. However, backup verification jobs do sometimes fail. Yes, this is just a tech preview, and yes, it seems buggy. No, I'm not relying on it until it comes out of tech preview and verification jobs no longer fail.

How To: Backblaze B2 as a Proxmox Backup Server 4.0 Datastore

The config you showed in your blog post will not work with Backblaze, as the region and provider quirks need to be set accordingly. Please try the config as shown above.
 
I'm having the same issue with a Backblaze B2 datastore. It was working for about a week; now I only get 400 errors.
Are you sure you are not running into request or storage limits of your provider's free tier?
 
The config you showed in your blog post will not work with Backblaze, as the region and provider quirks need to be set accordingly. Please try the config as shown above.
OK, I modified the settings and am running a backup now. The original configuration was working just fine, except for the occasional verification failures. From an end-user usability perspective, I would suggest that the unusual endpoint formatting requirement is confusing. My other B2 clients (Synology, Portainer, etc.) only need the B2 bucket FQDN + access key + secret key. Requiring '{{bucket}}.s3.{{region}}.backblazeb2.com' is too confusing and error-prone. I would suggest looking at a more user-friendly way of defining the S3 endpoint.
 
The signature verification error indicates that the region, which is part of the request signature, is not set correctly. The following endpoint configuration was used on my end for testing:
Code:
s3-endpoint: backblaze-s3
    access-key XXXXXXXXXXXXXXXXXXXXXXXXX
    endpoint {{bucket}}.s3.{{region}}.backblazeb2.com
    provider-quirks skip-if-none-match-header
    region eu-central-003
    secret-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Please note that the Backblaze API does not support conditional uploads, so the provider quirks must be set accordingly.

This isn't working for me. I had a working datastore before with an admittedly incorrect config (it was missing {{bucket}} in the endpoint and had "path style" checked), and it worked for a few days, but then stopped. I've since removed the datastore and tried my incorrect setup, the blog post's setup, and what's shown here, and I can't even get it to list the buckets on the "Add Datastore" screen.
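
For reference, the two endpoint templates being compared here; the first line is reconstructed from the description above, so treat it as an assumption:
Code:
# previously used (path-style, no {{bucket}} placeholder) -- reconstructed
endpoint s3.{{region}}.backblazeb2.com
# recommended (vhost-style, as in the config quoted above)
endpoint {{bucket}}.s3.{{region}}.backblazeb2.com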
 
Hi,

Why are you using port 80? The PBS S3 client does not support plain HTTP; HTTPS is required for communication. Also, please use the template patterns as described in the docs (https://pbs.proxmox.com/docs/storage.html#s3-datastore-backend-configuration-examples). The bucket is defined on the datastore, so it is not directly part of the endpoint configuration, and the region is part of the request signature, so it is best configured by setting it in the corresponding field, which is then used in the endpoint URL template.

Also, path-style addressing is not supported by all AWS regions. There were even efforts by AWS to get rid of it completely, but it was kept for older regions for compatibility. So for AWS S3, the default vhost-style bucket addressing is recommended.
Hello!

Port 80 is there because I had already tried 443 and got the same result. I did follow the documentation, and everything should be fine! Path-style addressing has been both on and off with both ports I have tried. Nothing works.
The account has AmazonS3FullAccess permissions.
I have made the endpoint contain the full name of the bucket and the region.
The error is a CORS error.

Thank you for the reply, by the way! Super excited to start offloading some backups!

LATER EDIT: I was able to add a datastore! The main issue I was facing was that the bucket name I was using contained dots (it is my domain name). I was able to use the CLI to add the datastore; the UI returned no buckets.
 