Use S3 on OVH

andmattia

Hi

I'm trying to use S3 on OVH cloud, but I found 2 issues:
- the region field requires a minimum length of 3, but some OVH regions are only 2 letters
- OVH URLs do not put the bucket name in the hostname

I ran some tests and saw that when I try to connect, PBS creates a file name-partition/.inuse, but I get:
```
Aug 22 15:04:26 pbs proxmox-backup-api[649]: TASK ERROR: access time safety check failed: failed to upload chunk to s3 backend: chunk upload failed: unexpected status code 501 Not Implemented
```
 
Hi,

> the region field requires a minimum length of 3, but some OVH regions are only 2 letters

Indeed, I could find at least de and uk; I will send a patch to adapt the limit, thanks.

> OVH URLs do not put the bucket name in the hostname

You can configure the S3 endpoint in PBS to use path-style bucket addressing if vhost-style addressing is not available/desired. This can already be set in the endpoint create/edit window.
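For clarity, the two addressing styles differ only in where the bucket name goes (bucket and region below are just examples):

```
# vhost style: the bucket is part of the hostname
https://my-bucket.s3.gra.io.cloud.ovh.net/some/object

# path style: the bucket is the first path segment
https://s3.gra.io.cloud.ovh.net/my-bucket/some/object
```

With path style enabled, PBS addresses the bucket via the second form, so no per-bucket DNS name or certificate is needed.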

> I ran some tests and saw that when I try to connect, PBS creates a file name-partition/.inuse, but I get:
```
Aug 22 15:04:26 pbs proxmox-backup-api[649]: TASK ERROR: access time safety check failed: failed to upload chunk to s3 backend: chunk upload failed: unexpected status code 501 Not Implemented
```
Please make sure to set the Skip If-None-Match header option in the provider quirks, which can be found in the advanced options of the endpoint create/edit window.
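For background: that marker is created with a conditional PUT (an If-None-Match: * header), which asks the server to create the object only if it does not exist yet; providers that never implemented that header answer 501. If you want to reproduce the behaviour outside of PBS, here is a rough sketch using a recent awscli (the --if-none-match flag only exists in newer versions, since conditional writes reached S3 itself only in 2024; endpoint, bucket and key are placeholders):

```
# conditional create: only succeeds if the key does not already exist;
# a backend without If-None-Match support answers 501 Not Implemented
aws s3api put-object \
  --endpoint-url https://s3.<region>.io.cloud.ovh.net \
  --bucket my-bucket \
  --key test-marker \
  --if-none-match '*' \
  --body /dev/null
```

With the quirk set, PBS simply skips the header and performs an unconditional PUT instead.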
 
You’ve hit two OVH‑specific quirks that Proxmox Backup Server’s S3 client can handle, plus one hard mismatch that explains the 501.
First, the 501 Not Implemented during “access time safety check” happens because PBS writes a tiny marker object (that name‑partition/.inuse you saw) using a conditional PUT with the If‑None‑Match header. OVH’s Swift‑based S3 layer does not implement that conditional header in all regions, so the server answers 501. PBS 4 added a provider quirk for exactly this: tell PBS to skip that header for this endpoint. In the UI, edit the S3 endpoint and enable the advanced option “Skip If‑None‑Match header”. Via CLI, recreate or update your endpoint with the quirk:

```
proxmox-backup-manager s3 endpoint create ovh-s3 \
  --endpoint s3.gra.io.cloud.ovh.net \
  --region gra \
  --access-key 'AKIAXXXX' \
  --secret-key 'xxxxx' \
  --provider-quirks skip-if-none-match-header
```

After that, run a quick sanity check:

```
proxmox-backup-manager s3 check ovh-s3 your-bucket --store-prefix pbs
```
This quirk maps 1:1 to the documented flag and resolves the 501 on OVH (and other partial S3 providers).

Second, about URLs and bucket style. PBS supports both virtual‑hosted style and path style. OVH recommends and supports virtual‑hosted style, and PBS prefers that too; only use “path style” if you must. In PBS, leave “Path style” off unless you run into DNS or certificate issues with dotted bucket names. The official PBS docs call this out, and the manpage exposes the --path-style toggle if you’re using CLI.
Third, regions. OVH’s current S3 endpoint pattern is https://s3.<region>.io.cloud.ovh.net and the region string must match OVH’s region naming, for example gra, sbg, waw, lon, lim, or the longer US ones like us-east-va. Those meet PBS’s minimum length requirement. If you tried to use a two‑letter alias like de, PBS’s UI rejects it because it enforces a minimum of 3 characters for “Region”, and even if it didn’t, request signing can go wrong with the wrong region string. The most reliable path is to use your bucket’s actual region code from the OVH manager and the matching endpoint, for example s3.gra.io.cloud.ovh.net with region gra. OVH’s own guides document the endpoint pattern and region names, and Proxmox staff have validated similar S3 setups using the region code in the vhost endpoint without path style.
Putting this together in PBS with the GUI. Create the S3 endpoint under Configuration → Remotes → S3 Endpoints as follows. Set Endpoint to s3.<your-region>.io.cloud.ovh.net. Set Region to the same region code, all lowercase, with at least 3 characters (for example gra). Keep Path Style off. Expand Advanced and enable “Skip If‑None‑Match header”. Save, then add your datastore pointing at that endpoint, your bucket name, and a store prefix such as your datastore name. The storage guide and CLI manpage show the same knobs if you prefer commands.
Why your exact error occurs. During the datastore open, PBS writes an .inuse marker using a conditional PUT. OVH returns 501 to that header combination, which PBS reports as “access time safety check failed”. Enabling the quirk moves PBS to an unconditional create, which OVH accepts. A recent PBS forum thread confirms this precise fix for 501 on S3‑compatible providers, and another thread analyzes the If‑None‑Match behavior specifically on OVH Object Storage.
If you still see errors after setting the quirk, double‑check the two common gotchas. Ensure the endpoint and region match the bucket’s region as per OVH, and avoid path style unless required by your DNS/cert situation. Both mismatches can manifest as signing or 400/501 errors.

If you want, paste the redacted output of

```
proxmox-backup-manager s3 endpoint list
proxmox-backup-manager s3 check <endpoint-id> <bucket> --store-prefix <prefix>
```

and I'll sanity‑check the exact values.
 
Please do not post AI-generated output here. This is of little help, as anyone who wants to research/debug using AI can do that on their own. Further, as in this concrete example with the region input validation, the AI may never have been exposed to the bug and so produces nonsense output.
 
Thanks Chris

It works fine!

I see a possible issue related to "available space": if I look at my datastore I see 30.48 GB available, is that normal?

At the moment I use a region with 3 letters and will wait for the fix to use DE or UK.
 
I'm trying to share the same bucket between 2 PBS nodes, is that a good option?
No, only one PBS instance must be used to access the same datastore. It is possible to create multiple datastores within the same bucket, but do not use different PBS instances to access them.
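To illustrate (a rough sketch, exact file names may differ): each datastore occupies its own prefix in the bucket and holds the .inuse marker mentioned above, which is how the owning PBS instance claims it:

```
my-bucket/
├── store-a/.inuse     # claimed by the PBS instance that owns store-a
├── store-a/.chunks/   # deduplicated chunk data
├── store-b/.inuse     # a second datastore can share the bucket,
└── store-b/.chunks/   # but must be accessed by the same PBS instance
```

Two PBS instances writing into the same datastore prefix would corrupt each other's view of the chunk store, which is why the marker exists.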
 
OK, but if my PBS server breaks and I need to restore data from a different PBS server, can I connect to the S3 bucket, add the datastore on the new PBS, add that PBS to Proxmox VE, and restore?
 
Yes. It is possible to create the same datastore from existing data in a bucket again; just use the "reuse existing datastore" and "overwrite in-use marker" flags in the datastore create window for this. The datastore has to use the same name for this to work.
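A hypothetical CLI equivalent, for orientation only; the flag and backend parameter names below are illustrative stand-ins for the two GUI checkboxes and are not verified against the PBS CLI, so check the datastore create window or the proxmox-backup-manager man page for the real names:

```
# sketch only, parameter names are NOT verified; the two flags mirror the
# "reuse existing datastore" and "overwrite in-use marker" checkboxes.
# The datastore name must match the prefix already present in the bucket.
proxmox-backup-manager datastore create mystore /path/to/local/cache \
  --backend type=s3,client=ovh-s3,bucket=my-bucket \
  --reuse-datastore true --overwrite-in-use true
```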
 
I tried this and I was able to reconnect from another PBS instance. Afterwards I removed it because I had some issues related to backup verification, but that's unrelated.