Proxmox Backup Server 4.0 BETA released!

Just double-checked with the just-released PBS stable. Datastore creation works in both cases here without issue: when deleting the datastore contents during datastore destruction, as well as when checking the "reuse datastore" and "overwrite in-use marker" options to reuse the previous datastore contents.

The settings for my Backblaze bucket look the same as yours: keep all versions and no object locking.

What version are you running at the moment? Please post the output of proxmox-backup-manager versions --verbose
I can install the just-released stable and go from there instead of the beta.

Code:
Linux pbs 6.14.8-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.14.8-2 (2025-07-22T10:04Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@pbs:~# proxmox-backup-manager versions --verbose
proxmox-backup                     4.0.0        running kernel: 6.14.8-2-pve
proxmox-backup-server              4.0.11-2     running version: 4.0.11    
proxmox-kernel-helper              9.0.3                                  
proxmox-kernel-6.14.8-2-pve-signed 6.14.8-2                                
proxmox-kernel-6.14                6.14.8-2                                
ifupdown2                          3.3.0-1+pmx9                            
libjs-extjs                        7.0.0-5                                
proxmox-backup-docs                4.0.11-2                                
proxmox-backup-client              4.0.11-1                                
proxmox-mail-forward               1.0.2                                  
proxmox-mini-journalreader         1.6                                    
proxmox-offline-mirror-helper      0.7.0                                  
proxmox-widget-toolkit             5.0.5                                  
pve-xtermjs                        5.5.0-2                                
smartmontools                      7.4-pve1                                
zfsutils-linux                     2.3.3-pve1
 
Does proxmox-backup-manager s3 check <s3-endpoint-id> <bucket> work without issues? Meaning it runs through without output.
 
Does proxmox-backup-manager s3 check <s3-endpoint-id> <bucket> work without issues? Meaning it runs through without output.
No, same sort of error, and that is on a brand new, empty, never-used bucket...

I am looking into this possibly being an issue on the B2 side as well; at this point it may have nothing to do with PBS, even though it might appear to...

I don't envy whoever has to support S3. While I want it and need it, not every vendor, as we are seeing, is 100% compliant with the S3 standard, that's for sure.

Code:
root@pbs:~# proxmox-backup-manager s3 check BackblazeB2 ovh-pbs
Error: get object failed

Caused by:
    object is archived and inaccessible until restored
 
I can install the just-released stable and go from there instead of the beta.

Code:
Linux pbs 6.14.8-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.14.8-2 (2025-07-22T10:04Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@pbs:~# proxmox-backup-manager versions --verbose
proxmox-backup                     4.0.0        running kernel: 6.14.8-2-pve
proxmox-backup-server              4.0.11-2     running version: 4.0.11  
proxmox-kernel-helper              9.0.3                                
proxmox-kernel-6.14.8-2-pve-signed 6.14.8-2                              
proxmox-kernel-6.14                6.14.8-2                              
ifupdown2                          3.3.0-1+pmx9                          
libjs-extjs                        7.0.0-5                              
proxmox-backup-docs                4.0.11-2                              
proxmox-backup-client              4.0.11-1                              
proxmox-mail-forward               1.0.2                                
proxmox-mini-journalreader         1.6                                  
proxmox-offline-mirror-helper      0.7.0                                
proxmox-widget-toolkit             5.0.5                                
pve-xtermjs                        5.5.0-2                              
smartmontools                      7.4-pve1                              
zfsutils-linux                     2.3.3-pve1
Well, shite... I think we found the issue. Not sure why there are so many class B transactions just today, though; it was a low-use account until this testing started. I will install stable and let it cool down until tomorrow. I bet this is the issue, and the error message is misleading at best.

I can also remove the cap; the cost per 10,000 transactions is minimal.

https://www.backblaze.com/cloud-storage/transaction-pricing

[screenshot: Backblaze B2 caps and alerts page showing the daily class B transaction cap reached]
 
No, same sort of error, and that is on a brand new, empty, never-used bucket...

I don't envy whoever has to support S3. While I want it and need it, not every vendor, as we are seeing, is 100% compliant with the S3 standard, that's for sure.

Code:
root@pbs:~# proxmox-backup-manager s3 check BackblazeB2 ovh-pbs
Error: get object failed

Caused by:
    object is archived and inaccessible until restored
Good; that, however, points more towards a permission problem. Does your access key have the permissions to read objects? For my key I currently have:
Code:
bypassGovernance, deleteBuckets, deleteFiles, deleteKeys, listBuckets, listFiles, listKeys, readBucketEncryption, readBucketLogging, readBucketNotifications, readBucketReplications, readBucketRetentions, readBuckets, readFileLegalHolds, readFileRetentions, readFiles, shareFiles, writeBucketEncryption, writeBucketLogging, writeBucketNotifications, writeBucketReplications, writeBucketRetentions, writeBuckets, writeFileLegalHolds, writeFileRetentions, writeFiles, writeKeys

The error message might be misleading here, as this is the error used for 403 Forbidden status code responses, which indicates an invalid object state for AWS S3 [0], but might not for Backblaze.

[0] https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html#API_GetObject_Errors
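
To illustrate (a hypothetical sketch in Rust, not the actual PBS client code): a single 403 status ends up mapped to one error string, even though the underlying cause differs by provider.

Code:
// Hypothetical sketch, not the actual PBS client code: a 403 GetObject
// response is reduced to a single message, although the real cause can be
// an archived object (AWS), a missing key permission, or a provider-side
// cap, as turned out to be the case in this thread.
fn map_get_object_403(body: &str) -> &'static str {
    if body.contains("InvalidObjectState") {
        "object is archived and inaccessible until restored"
    } else {
        "access denied: check key permissions, bucket settings, or provider-side caps"
    }
}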
 
Well, shite... I think we found the issue. Not sure why there are so many class B transactions just today, though; it was a low-use account until this testing started. I will install stable and let it cool down until tomorrow. I bet this is the issue, and the error message is misleading at best.

[screenshot: Backblaze B2 caps and alerts page showing the daily class B transaction cap reached]
Ah yes, that also explains the 403 Forbidden response. I will see how to improve the error output for this particular case; thanks for the report!
 
Well, shite... I think we found the issue. Not sure why there are so many class B transactions just today, though; it was a low-use account until this testing started. I will install stable and let it cool down until tomorrow. I bet this is the issue, and the error message is misleading at best.

I can also remove the cap; the cost per 10,000 transactions is minimal.

https://www.backblaze.com/cloud-storage/transaction-pricing

[screenshot: Backblaze B2 caps and alerts page showing the daily class B transaction cap reached]
This seems to have been it... my bad for taking up your valuable time, but this is good for other people to be aware of, I think...
 
Could anyone provide an example of how the S3 endpoint config should look for Backblaze?
I tried setting the s3 endpoint to s3.us-west-000.backblazeb2.com and to backblazeb2.com while setting the region to us-west-000.
I also tested s3.{{region}}.backblazeb2.com, and everything ends up with "405 Method Not Allowed".
I think it should be s3.{{region}}.backblazeb2.com/{{bucket}}, but I cannot input that due to the regex checks.
Also, when creating an application key in Backblaze, should the Access Key be the keyID or the keyName? And I assume the Secret Key should be the applicationKey?
 
Could anyone provide an example of how the S3 endpoint config should look for Backblaze?
I tried setting the s3 endpoint to s3.us-west-000.backblazeb2.com and to backblazeb2.com while setting the region to us-west-000.
I also tested s3.{{region}}.backblazeb2.com, and everything ends up with "405 Method Not Allowed".
I think it should be s3.{{region}}.backblazeb2.com/{{bucket}}, but I cannot input that due to the regex checks.
Also, when creating an application key in Backblaze, should the Access Key be the keyID or the keyName? And I assume the Secret Key should be the applicationKey?
I'm using the following (redacted) s3 endpoint config:
Code:
s3-endpoint: backblaze-b2
    access-key XXXXXXXXXXXXXXXXXXXXXXXXX
    endpoint {{bucket}}.s3.{{region}}.backblazeb2.com
    provider-quirks skip-if-none-match-header
    region eu-central-003
    secret-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

access-key is the value of what the Backblaze interface calls keyID. And yes, the secret key should be the applicationKey. Don't forget to set the provider quirks in the advanced options of the endpoint create/edit window.
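
Once the endpoint is set up, connectivity can be verified with the check command from earlier in this thread (endpoint ID and bucket name below are placeholders); on success it runs through without output:

Code:
root@pbs:~# proxmox-backup-manager s3 check backblaze-b2 my-pbs-bucket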
 
Thanks! Yeah, now it works.
BTW, I suspect this is currently out of your scope, but did you consider adding encrypted datastores? Especially for S3 it would be really helpful to have some way to encrypt a whole store with a user-provided password. Doing encrypted backups by itself is not that helpful, as it has downsides like losing the option to explore backup files from CTs/hosts in the PBS UI. Currently I am using backrest to sync my PBS datastore to Backblaze, and it has the option to encrypt it. That makes it much easier to trust such remote backups.
 
Thanks! Yeah, now it works.
BTW, I suspect this is currently out of your scope, but did you consider adding encrypted datastores? Especially for S3 it would be really helpful to have some way to encrypt a whole store with a user-provided password. Doing encrypted backups by itself is not that helpful, as it has downsides like losing the option to explore backup files from CTs/hosts in the PBS UI. Currently I am using backrest to sync my PBS datastore to Backblaze, and it has the option to encrypt it. That makes it much easier to trust such remote backups.
No, this is currently not planned, but given that plain-text datastore contents can be synced to a potentially less-trusted or untrusted provider, it could make sense to add an additional server-side encryption layer. Note, however, that there are also server-side encryption options for buckets. But do open an enhancement request for this at https://bugzilla.proxmox.com linking this post (the thread itself is too crowded to be linked as a whole).
 
[screenshot: sync job task log with repeated "failed to upload chunk to s3 backend" errors]
After running well for some time, a sync job to an S3 Backblaze datastore started to error out and doesn't seem to recover. Does it already have some re-connection/retry logic?
My upload speed is fairly slow, so such a job takes a lot of time. This one had already been running for 3 hours and then stopped working, I suspect due to some network dropout.
 
Thanks! We think we found this issue: https://lore.proxmox.com/pbs-devel/20250806095702.135277-1-s.sterz@proxmox.com/T/#t

The passkey one might be the same issue, with the additional log lines being unrelated, or there might be a second, different issue there as well.

Hmmm, I still see this with 4.0.11. The journal logs from login through to logout are:

Code:
Aug 07 06:56:55 pbs unix_chkpwd[829]: password check failed for user (root)
Aug 07 06:56:55 pbs proxmox-backup-api[615]: pam_unix(proxmox-backup-auth:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=<ip>  user=root
Aug 07 06:56:57 pbs proxmox-backup-api[615]: authentication failure; rhost=[<ip>]:59126 user=root@pam msg=authentication error - AUTH_ERR (7)
Aug 07 06:57:00 pbs proxmox-backup-api[615]: POST /api2/json/access/ticket: 401 Unauthorized: [client [<ip>]:59126] permission check failed.

The IP address is IPv6, if it makes any difference at all.

I do use a passkey managed by 1Password.

Code:
root@pbs:~# proxmox-backup-manager version --verbose
proxmox-backup                     4.0.0        running kernel: 6.14.8-2-pve
proxmox-backup-server              4.0.11-2     running version: 4.0.11    
proxmox-kernel-helper              9.0.3                                   
proxmox-kernel-6.14.8-2-pve-signed 6.14.8-2                                
proxmox-kernel-6.14                6.14.8-2                                
ifupdown2                          3.3.0-1+pmx9                            
libjs-extjs                        7.0.0-5                                 
proxmox-backup-docs                4.0.11-2                                
proxmox-backup-client              4.0.10-1                                
proxmox-mail-forward               1.0.2                                   
proxmox-mini-journalreader         1.6                                     
proxmox-offline-mirror-helper      0.7.0                                   
proxmox-widget-toolkit             5.0.4                                   
pve-xtermjs                        5.5.0-2                                 
smartmontools                      7.4-pve1                                
zfsutils-linux                     2.3.3-pve1

EDIT: Actually, 4.0.11-4 did fix this... Not to be confused with 4.0.11-2 :)
 
The passkey issue was indeed a separate one, but it should be fixed with 4.0.11-4 as well :)
 
[screenshot: sync job task log with repeated "failed to upload chunk to s3 backend" errors]
After running well for some time, a sync job to an S3 Backblaze datastore started to error out and doesn't seem to recover. Does it already have some re-connection/retry logic?
My upload speed is fairly slow, so such a job takes a lot of time. This one had already been running for 3 hours and then stopped working, I suspect due to some network dropout.
Are you sure that you are not simply running into the free tier limits? The "failed to upload chunk to s3 backend" error would indicate that the upload fails for some reason. Please check the systemd journal for errors; that might give some more context on why the upload failed.
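
For example, to limit the journal to the PBS proxy service (which runs the sync task, as seen in the logs below) over the relevant time frame:

Code:
journalctl -u proxmox-backup-proxy --since "3 hours ago"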
 
Are you sure that you are not simply running into the free tier limits? The "failed to upload chunk to s3 backend" error would indicate that the upload fails for some reason. Please check the systemd journal for errors; that might give some more context on why the upload failed.
No, I am pretty sure I am not hitting any tier limits, as I do pay for it, and after restarting the job it has now been running for ~18h and is still fine.
I found this in the journal:

Code:
Aug 06 18:15:38 pbs proxmox-backup-proxy[213]: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
                                               <Error>
                                                   <Code>InternalError</Code>
                                                   <Message>An internal error occurred.  Please retry your upload.</Message>
                                               </Error>
Aug 06 18:15:39 pbs proxmox-backup-proxy[213]: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
                                               <Error>
                                                   <Code>InternalError</Code>
                                                   <Message>An internal error occurred.  Please retry your upload.</Message>
                                               </Error>

That was around when it started failing. It repeated itself a few more times.
 
<Message>An internal error occurred. Please retry your upload.</Message>
But this would indicate that the error is on Backblaze's side, and the PBS implementation of the S3 client does retry the upload up to 3 times, as you can see in the error logs. Best to reach out to your provider; they might see what is causing the internal error.
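
For illustration, a minimal sketch of such a bounded retry (assumed behavior only, not the actual PBS implementation):

Code:
use std::{thread, time::Duration};

// Minimal sketch (assumption, not the actual PBS code): retry a fallible
// upload up to 3 times, backing off briefly between attempts.
fn upload_with_retry<F>(mut upload: F) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    const MAX_ATTEMPTS: u32 = 3;
    let mut last_err = String::new();
    for attempt in 1..=MAX_ATTEMPTS {
        match upload() {
            Ok(()) => return Ok(()),
            Err(err) => last_err = err,
        }
        if attempt < MAX_ATTEMPTS {
            // Back off before the next attempt.
            thread::sleep(Duration::from_millis(100 * u64::from(attempt)));
        }
    }
    Err(format!("upload failed after {MAX_ATTEMPTS} attempts: {last_err}"))
}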