[SOLVED] Hetzner s3 object store

exma

Hi,

I have a Hetzner Cloud VM running PBS and a Hetzner S3 object store.
Everything seems to be working at the moment.
I can see backups in pbs and files in the s3 storage.

However, when I run the garbage collect job, it fails with the following message:
Garbage collection failed: failed to list chunk in s3 object store: failed to parse response body: custom: missing field `Name`

Did I configure it incorrectly, or is there an error?

Thanks!
Nico
 
Hi,
is this issue intermittent or reproducible every time? Did this work in the past? Another user reported the same issue starting on September 2nd, but it remains to be seen if the S3 object store provider is also Hetzner. https://forum.proxmox.com/threads/pbs4-s3-prune.171926/

Edit: What region is your S3 object store located in? Falkenstein, DE (FSN1), Helsinki, FI (HEL1), or Nuremberg, DE (NBG1)?
 
Thanks for your reply.

It is a new installation in fsn1, so it has never worked.
The issue occurs every time.

Can Proxmox fix it or should I contact Hetzner?
 
We could not reproduce the issue on our side using an S3 bucket hosted on Hetzner in Nuremberg (region nbg1). It would be great if you could also reach out to Hetzner support. In particular, you might mention that the ListObjectsV2 response seems to not include the bucket name given by the field Name as listed in https://docs.aws.amazon.com/AmazonS...ctsV2.html#API_ListObjectsV2_ResponseElements and that this seems to work for nbg1.

Independently of that, we can see what to do on our end to fix this. Thanks!
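
If anyone wants to check what the provider actually returns, here is a minimal sketch (using boto3 directly rather than PBS; the endpoint, bucket name, region, and credentials are placeholders you would need to replace) that issues a ListObjectsV2 request against the Hetzner endpoint and prints whether the Name element is present in the response:

Code:
import boto3

# Placeholders: swap in your own endpoint, region, bucket, and credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="https://nbg1.your-objectstorage.com",
    region_name="nbg1",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# ListObjectsV2 with a single key is enough to inspect the response metadata.
resp = s3.list_objects_v2(Bucket="yourbucketname", MaxKeys=1)

# Per the AWS API docs, the response should echo the bucket name back in the
# "Name" element; the garbage collection error suggests it is missing here.
print("Name present:", "Name" in resp, "->", resp.get("Name"))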
 
Hi @exma, I had the same issue. But ChatGPT solved it for me :)


My S3 Endpoint in PBS looked like this:
Code:
Endpoint: nbg1.your-objectstorage.com
Backup worked fine, but garbage collection didn't, with the same error you described.

I changed it to:
Code:
Endpoint: yourbucketname.nbg1.your-objectstorage.com

And now everything works fine!
 
I changed it to:
Endpoint: yourbucketname.nbg1.your-objectstorage.com
Note: I recommend using variable templating here instead.

The bucket name is part of the datastore configuration, not the endpoint configuration. So, given the values you showed, the endpoint URL should literally be {{bucket}}.{{region}}.your-objectstorage.com (without inserting any of the values here). The S3 client will then use this URL template and fill in the region as set in the region field of the endpoint create/edit window, and the bucket (for requests requiring a bucket) according to the one configured in the datastore create/edit window.
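
For reference, with the example values from the post above this would look roughly like the following (field names as shown in the create/edit windows; yourbucketname and nbg1 are just the example values):

Code:
Endpoint (S3 endpoint config): {{bucket}}.{{region}}.your-objectstorage.com
Region   (S3 endpoint config): nbg1
Bucket   (datastore config):   yourbucketname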
 