PBS4 problems with MEGA as an S3 Endpoint

Jorge Teixeira

Hi.
With MEGA as an S3 endpoint, PBS behaves strangely.
This is the content of my cache HD:
[attached screenshot: img1.jpg]

This is the content saved in MEGA:

[attached screenshot: img2.jpg]

For one single backup there is more than one folder on MEGA, but just one on the cache HD. If I try to prune or delete backups, or if I try to refresh the content from the bucket, it gives me errors.
Error messages:
"unexpected status code 501 Not Implemented (400)."
"A header you provided implies functionality that is not implemented."
 
Do you have versioning or object locking enabled for this bucket?

"unexpected status code 501 Not Implemented (400)."
"A header you provided implies functionality that is not implemented."
Please check if this is fixed by setting the Skip If-None-Match header in the advanced options of the s3 endpoint create/edit window.
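
For context, the Skip If-None-Match option presumably exists because PBS asks the server for a conditional upload (the If-None-Match header tells it not to overwrite an existing object), and some S3-compatible providers answer such requests with 501 Not Implemented. Below is a minimal sketch of what such a conditional PUT looks like, using boto3; the endpoint URL, bucket name and credentials are placeholders, it needs a reasonably recent boto3 that exposes the IfNoneMatch parameter, and it only illustrates the header semantics, not how PBS itself issues the request.

Code:
import boto3
from botocore.exceptions import ClientError

# Placeholders: endpoint URL, bucket name and credentials are hypothetical.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

try:
    # "If-None-Match: *" asks the server to reject the PUT if an object with
    # this key already exists. Providers without conditional-write support
    # may reply with 501 Not Implemented instead.
    s3.put_object(Bucket="pbs-backups", Key="test-object", Body=b"data", IfNoneMatch="*")
except ClientError as err:
    print(err.response["Error"]["Code"], err.response["Error"]["Message"])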
 
Hi Chris.
I do not have versioning or object locking enabled, and I already enabled the Skip If-None-Match header, but it is still the same.
 
What contents do you have within each of the duplicate folders (key prefixes)? Are the contents duplicated as well?
 
From what I can see, it creates a separate folder for each stored file.
Then this is most likely an issue or a configuration option in how your provider displays the objects. Note that S3 does not know about folders as such; there are only objects with object keys. The object key prefixes might be interpreted as folders.
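
To illustrate the prefix point with a short boto3 sketch (the endpoint URL, bucket, credentials and key names are made up): a listing with a delimiter is what turns flat object keys into the "folders" a web UI displays.

Code:
import boto3

# Placeholders: endpoint URL, bucket, credentials and prefix are hypothetical.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# S3 keeps a flat namespace: a key like "vm/100/2025-08-18T10:00:00Z/index.json"
# is just a string. Listing with Delimiter="/" makes the server group shared
# key prefixes into CommonPrefixes, which web UIs then render as folders.
resp = s3.list_objects_v2(Bucket="pbs-backups", Prefix="vm/100/", Delimiter="/")

for obj in resp.get("Contents", []):
    print("object:", obj["Key"])
for cp in resp.get("CommonPrefixes", []):
    print("shown as folder:", cp["Prefix"])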
 
The main problem is that if I try to prune or manually delete a backup, it gives errors and the old backups are not removed.
 
Please try to run a proxmox-backup-manager s3 check <s3-endpoint-id> <bucket> and provide the full output, if there is any.
 
There is no output from the command.
[attached screenshot: 1755525011968.png]
 
Okay, I see: Please open an issue for this at bugzilla.proxmox.com, linking to this thread so this is tracked properly, thanks!

Most likely a header for the ListObjectsV2 API call is not being accepted by MEGA's API implementation, given that this shows up on prune. I suspect the same error will show up on garbage collection as well, as that performs calls to the same API endpoints.
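
If it helps the bug report, one way to reproduce a ListObjectsV2 call outside of PBS and capture the exact error the provider returns is a small boto3 script (the endpoint URL, bucket and credentials below are placeholders, and the listing parameters PBS actually sends may differ).

Code:
import boto3
from botocore.exceptions import ClientError

# Placeholders: endpoint URL, bucket and credentials are hypothetical.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

try:
    s3.list_objects_v2(Bucket="pbs-backups", Prefix="vm/")
except ClientError as err:
    # Record the raw status code and error body so it can be attached to the report.
    print("HTTP status:", err.response["ResponseMetadata"]["HTTPStatusCode"])
    print("Error code: ", err.response["Error"]["Code"])
    print("Message:    ", err.response["Error"]["Message"])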
 