PBS fails to connect to Backblaze B2?

mr-woodapple

New Member
Aug 24, 2025
Hi there,

I'm hitting an error that I don't know how to solve - let me explain. I'm on PBS 4.0.14 btw.

I'm trying to connect my Backblaze B2 storage (https://www.backblaze.com/cloud-storage) to the new S3 option in PBS. I managed to set up the S3 endpoint, then moved on to creating a datastore. Selecting the S3 endpoint automatically also selects the correct bucket - so PBS is talking to Backblaze to retrieve the available buckets (I think).

On clicking "Add", I'm only getting this error: "failed to access bucket: unexpected status code 405 Method Not Allowed (400)".
To debug this, I installed the aws cli locally, set the credentials and tried accessing the data in the bucket with this command:
Code:
aws --endpoint-url https://s3.eu-central-003.backblazeb2.com --region eu-central-003 s3 ls s3://<bucket-name>
That works - it shows the image I added there. The credentials are the same ones I added in PBS, so I don't think that's the issue.

Is there anything I'm doing obviously wrong? And how can I debug this further / get it fixed?
 
Found the solution to the problem thanks to this Reddit post.

When setting up the S3 endpoint for Backblaze B2, you need to tick the "Path Style" checkbox and additionally select "Skip If-None-Match header" under "Provider Quirks".
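For anyone wondering what the "Skip If-None-Match header" quirk is about: an `If-None-Match: *` header on a PUT means "create this object only if it does not already exist". Presumably PBS uses it to avoid overwriting existing chunks, and some providers reject requests carrying the header instead of honoring it. A minimal in-memory sketch of the semantics (illustrative only - the `Bucket` class is made up, not PBS or B2 code):

```python
# Sketch of "If-None-Match: *" conditional-create semantics.
# A provider that supports the header refuses to overwrite an
# existing object; one that does not support it may reject the
# whole request instead (hence the "skip" quirk for B2).

class Bucket:
    def __init__(self):
        self.objects = {}

    def put(self, key, data, if_none_match=None):
        """Store an object; honor 'If-None-Match: *' if given."""
        if if_none_match == "*" and key in self.objects:
            return 412  # Precondition Failed: object already exists
        self.objects[key] = data
        return 200  # stored

b = Bucket()
print(b.put("chunk-01", b"data", if_none_match="*"))  # 200: created
print(b.put("chunk-01", b"data", if_none_match="*"))  # 412: already exists
print(b.put("chunk-01", b"new"))                      # 200: plain overwrite
```

With the quirk enabled, PBS simply omits the header, so the provider sees an unconditional PUT.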
 
Funnily enough, I was just hitting the same issue with an Amazon S3 bucket, until I changed those same settings in the PBS S3 endpoint. I didn't try them earlier because I figured, it's Amazon S3, "I won't need to do that." Oh well.
 
AWS S3 does not require setting the Skip If-None-Match header quirk; that header is fully supported there.

Also, you only need to set the path style bucket addressing flag if you set up the endpoint without the bucket name template shown in the examples; see https://pbs.proxmox.com/docs/storage.html#s3-datastore-backend-configuration-examples. AWS supports both vhost style and path style bucket addressing; see https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html
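The difference between the two addressing styles is just where the bucket name goes in the request URL. A small sketch (function names are illustrative, not from any library):

```python
# The two S3 bucket addressing styles, side by side.

def vhost_url(bucket, region, key):
    # Virtual-hosted style: bucket name is part of the hostname.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def path_url(bucket, region, key):
    # Path style: bucket name is the first path component.
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

print(vhost_url("test", "us-east-1", "backup.img"))
# https://test.s3.us-east-1.amazonaws.com/backup.img
print(path_url("test", "us-east-1", "backup.img"))
# https://s3.us-east-1.amazonaws.com/test/backup.img
```

If the endpoint is configured with a `{{bucket}}` template, PBS can build the vhost-style hostname itself; without it, the path style flag tells it to put the bucket in the path instead.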
 

Confirmed that I'm still able to create a datastore when the endpoint has "Skip If-None-Match header" REMOVED, while still using the Path Style option.

Also, your comment about vhost style and the bucket name "template", plus the examples, made me realize that I was entering "test.s3.us-east-1.amazonaws.com" into the S3 Endpoint field instead of "{{bucket}}.s3.{{region}}.amazonaws.com". I mistakenly thought the example in the GUI (shown below) was just illustrating the expected format - I didn't realize it was expecting a template with braces as shown.
[Screenshot: the S3 Endpoint dialog showing the endpoint template example]
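To make the mistake concrete: the template placeholders get substituted per bucket and region, so a literal hostname only ever works for one bucket. A sketch of the idea (a guess at the mechanism, not PBS's actual implementation):

```python
# Naive expansion of an endpoint template like the one PBS expects.
# Placeholder syntax taken from the GUI example; the expand() helper
# is hypothetical.

def expand(template, bucket, region):
    return (template
            .replace("{{bucket}}", bucket)
            .replace("{{region}}", region))

endpoint = expand("{{bucket}}.s3.{{region}}.amazonaws.com",
                  "test", "us-east-1")
print(endpoint)
# test.s3.us-east-1.amazonaws.com
```

Entering the already-expanded hostname defeats the substitution, which is why the template form belongs in the Endpoint field.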

All clear, thanks for the help!
BR,
-- Glenn