PBS 4.0 - Wasabi - POST /fixed_chunk: 400 Bad Request

matteo.ceriani
New Member
Jul 30, 2025
We are testing Wasabi as a datastore for push sync jobs.
After several tests, it seems that only the smaller VMs are able to complete the backup, while the larger ones fail after several minutes:

Code:
2025-08-11T15:32:30+02:00: Upload of new chunk 65067c8c26159f45fb0f9bf11ed0f6325d64362df20bf77ed294db390797eb53
2025-08-11T15:32:33+02:00: Upload of new chunk 3ac0aae4209f9939ffcce919a6cadff33cf236ac914f5b67bad435834cc70a98
2025-08-11T15:32:36+02:00: Upload of new chunk d7bad54a3056b8290fa8b846ef88e9d45d3f27891131d99e8568ba92393aa829
2025-08-11T15:32:38+02:00: Upload of new chunk d9f994f77abd192f8c46658c67c6a4f99a2c587ed6d67c759df47adac0843eb7
2025-08-11T15:32:41+02:00: Upload of new chunk f0499dd019efb608931bd16e45628d54f37dbac68bacd43efac746c6dafa67a1
2025-08-11T15:32:41+02:00: Upload of new chunk 1d3b7196daf717654b401e6b22e4c5bcced540b99cf8aab3e8cc1dc881bee248
2025-08-11T15:32:42+02:00: Upload of new chunk 80ae1a3e9c2a974cc44fafa5841d04388b233b21f92d35c3776139c753c1c678
2025-08-11T15:32:42+02:00: Upload of new chunk bb1e63fba17c5e9e86cec7d6ab64a9cba673361a618f2483a48737c69b7f0452
2025-08-11T15:32:42+02:00: Upload of new chunk b1ed67226ad4932a3b99c7d77ea23469f689058efa98b1e70b2f66899c1dd51f
2025-08-11T15:32:42+02:00: Upload of new chunk a2821e18935e783b46d7d00c8b887167f9d38dc8614bae1d0451e27e330d3e18
2025-08-11T15:32:44+02:00: Caching of chunk be6dd8db54cc525d93bfd88270a3841e7da78023f2a3b8b01b39065ee8fac0e4
2025-08-11T15:33:47+02:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-11T15:33:47+02:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-11T15:33:47+02:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-11T15:33:47+02:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-11T15:33:47+02:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-11T15:33:47+02:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-11T15:33:47+02:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-11T15:33:47+02:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-11T15:33:47+02:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-11T15:33:48+02:00: backup ended and finish failed: backup ended but finished flag is not set.
2025-08-11T15:33:48+02:00: removing unfinished backup
2025-08-11T15:33:49+02:00: removing backup snapshot "/mnt/datastore/s3cache/vm/102/2025-07-31T19:00:03Z"
2025-08-11T15:33:49+02:00: TASK ERROR: backup ended but finished flag is not set.

I tried lowering the bandwidth limit, but it doesn't seem to have any effect.
 
I am experiencing the same issue with Backblaze B2.

Code:
----
2025-08-14T10:25:12+01:00: Upload of new chunk 6f549f64184ee51e7295c25d5058656afd19da6f7fad5f3dba62f8cf99848150
2025-08-14T10:25:12+01:00: Upload of new chunk e6ddfb2200d907b2f148c453e705078eb7c9d50195153041945dc56fff633b56
2025-08-14T10:25:12+01:00: Upload of new chunk 52f710084628fcc6b3bb238f52016e367b5ebbc05da1a7d0ab627085445ed267
2025-08-14T10:25:12+01:00: Upload of new chunk 3ab7b570d92833a6b4cbc88b3dd100ae47735481e5e22328c7ef968b896c8cd2
<CUT>
2025-08-14T10:26:56+01:00: Caching of chunk f1b377db6533f393943c92a77c48667d38bc1fdb98fb1cba111f1520c395f907
2025-08-14T10:27:01+01:00: Caching of chunk e6ddfb2200d907b2f148c453e705078eb7c9d50195153041945dc56fff633b56
2025-08-14T10:27:02+01:00: Caching of chunk 52f710084628fcc6b3bb238f52016e367b5ebbc05da1a7d0ab627085445ed267
2025-08-14T10:27:05+01:00: Caching of chunk 34f229794dcb86e8b0267d3a7e8b8a3d8d2484ecc76392a9a40cf2cd8326f67c
2025-08-14T10:27:08+01:00: Caching of chunk c2f6c8d57f90608704bf203a8b1394d96ad40992126a5db151a43e94352705b3
2025-08-14T10:27:08+01:00: POST /fixed_chunk: 400 Bad Request: failed to upload chunk to s3 backend
2025-08-14T10:27:08+01:00: backup ended and finish failed: backup ended but finished flag is not set.
2025-08-14T10:27:08+01:00: removing unfinished backup
2025-08-14T10:27:09+01:00: removing backup snapshot "/s3/vm/104/2025-08-14T09:25:03Z"
2025-08-14T10:27:09+01:00: TASK ERROR: backup ended but finished flag is not set.
 
I have similar issues where large VMs simply fail to sync to Wasabi. I also see an additional error that pops up during the failure:

Code:
25-08-14T20:45:38+10:00: sync group vm/105 failed - failed to upload chunk to s3 backend
2025-08-14T20:45:39+10:00: sync snapshot vm/106/2025-08-12T16:21:16Z
2025-08-14T20:45:39+10:00: sync archive qemu-server.conf.blob
2025-08-14T20:45:40+10:00: sync archive drive-scsi1.img.fidx
2025-08-14T20:55:18+10:00: cleanup error - deleting objects failed
2025-08-14T20:55:18+10:00: percentage done: 16.67% (4/27 groups, 1/2 snapshots in group #5)
2025-08-14T20:55:18+10:00: sync group vm/106 failed - failed to upload chunk to s3 backend
2025-08-14T20:55:20+10:00: sync snapshot vm/107/2025-08-12T12:00:01Z
2025-08-14T20:55:20+10:00: sync archive qemu-server.conf.blob
2025-08-14T20:55:20+10:00: sync archive drive-scsi0.img.fidx
2025-08-14T22:21:02+10:00: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>3A0D175B8DB99FF3:A</RequestId><HostId>o4cTmU1eInbJaCYFLuBKIqod6ydMKv2Aq8I9UXuxttO0TJGqR4ND4o/8Z+H3Fx5r+GGJhgg/U5WC</HostId><CMReferenceId>MTc1NTE3NDA2MjMxMCAyMDYuMTQ4LjUuMTAwIENvbklEOjQzOTEzODkwMDYvRW5naW5lQ29uSUQ6NDI0NDUzNDkvQ29yZToy</CMReferenceId></Error>
2025-08-14T22:21:02+10:00: cleanup error - unexpected status code 403 Forbidden
 
We are testing Wasabi as a datastore for push sync jobs. After several tests, it seems that only the smaller VMs are able to complete the backup, while the larger ones fail after several minutes: [...]
Hi,
do you see any additional error information in the systemd journal while the chunk upload fails? Are you maybe running into request limits? Also, how big is your local cache and how much of it is currently in use?
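The journal and the cache usage can be checked from the shell; a rough sketch (the proxmox-backup-proxy unit name and the /mnt/datastore/s3cache path are taken from the logs in this thread, adjust to your setup):

Code:
# Follow the backup proxy's journal while the failing backup runs
journalctl -f -u proxmox-backup-proxy

# Check the size and current usage of the local cache directory
df -h /mnt/datastore/s3cache
du -sh /mnt/datastore/s3cache/.chunks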
 
I am experiencing the same issue with Backblaze B2. [...]
Same questions as asked for @matteo.ceriani. In particular, check the systemd journal to see if there is more information on the error.
 
I have similar issues where large VMs simply fail to sync to Wasabi. [...]
2025-08-14T22:21:02+10:00: cleanup error - unexpected status code 403 Forbidden
A 403 Forbidden status code most likely points to you running into free-tier limits on either requests or storage space. Can you rule this out?
 
Same questions as asked for @matteo.ceriani. In particular, check the systemd journal to see if there is more information on the error.
Cache is 100 GB, 0% usage. No limits, as I don't use the free tier. This is everything from journalctl:

Code:
Aug 18 09:21:35 hopper pvedaemon[4113919]: <root@pam> starting task UPID:hopper:003ED97A:0635F1A9:68A2E28F:vzdump:104:root@pam:
Aug 18 09:21:35 hopper pvedaemon[4118906]: INFO: starting new backup job: vzdump 104 --notes-template '{{guestname}}' --notification-mode notification-system --remove 0 --mode snapshot --node hopper --storage Neo-B2-Backups
Aug 18 09:21:35 hopper pvedaemon[4118906]: INFO: Starting Backup of VM 104 (qemu)
Aug 18 09:21:46 hopper pvedaemon[4119035]: starting termproxy UPID:hopper:003ED9FB:0635F5CE:68A2E29A:vncshell::root@pam:
Aug 18 09:21:46 hopper pvedaemon[4113919]: <root@pam> starting task UPID:hopper:003ED9FB:0635F5CE:68A2E29A:vncshell::root@pam:
Aug 18 09:21:46 hopper pvedaemon[4115539]: <root@pam> successful auth for user 'root@pam'
Aug 18 09:21:46 hopper login[4119038]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Aug 18 09:21:46 hopper systemd[1]: Created slice user-0.slice - User Slice of UID 0.
Aug 18 09:21:46 hopper systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
Aug 18 09:21:46 hopper systemd-logind[1767]: New session 357 of user root.
Aug 18 09:21:46 hopper systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
Aug 18 09:21:46 hopper systemd[1]: Starting user@0.service - User Manager for UID 0...
Aug 18 09:21:46 hopper (systemd)[4119046]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Aug 18 09:21:46 hopper systemd-logind[1767]: New session 358 of user root.
Aug 18 09:21:46 hopper systemd[4119046]: Queued start job for default target default.target.
Aug 18 09:21:46 hopper systemd[4119046]: Created slice app.slice - User Application Slice.
Aug 18 09:21:46 hopper systemd[4119046]: Reached target paths.target - Paths.
Aug 18 09:21:46 hopper systemd[4119046]: Reached target timers.target - Timers.
Aug 18 09:21:46 hopper systemd[4119046]: Listening on dirmngr.socket - GnuPG network certificate management daemon.
Aug 18 09:21:46 hopper systemd[4119046]: Listening on gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
Aug 18 09:21:46 hopper systemd[4119046]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
Aug 18 09:21:46 hopper systemd[4119046]: Starting gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation)...
Aug 18 09:21:46 hopper systemd[4119046]: Starting gpg-agent.socket - GnuPG cryptographic agent and passphrase cache...
Aug 18 09:21:46 hopper systemd[4119046]: Listening on keyboxd.socket - GnuPG public key management service.
Aug 18 09:21:46 hopper systemd[4119046]: Starting ssh-agent.socket - OpenSSH Agent socket...
Aug 18 09:21:46 hopper systemd[4119046]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
Aug 18 09:21:46 hopper systemd[4119046]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Aug 18 09:21:46 hopper systemd[4119046]: Listening on ssh-agent.socket - OpenSSH Agent socket.
Aug 18 09:21:46 hopper systemd[4119046]: Reached target sockets.target - Sockets.
Aug 18 09:21:46 hopper systemd[4119046]: Reached target basic.target - Basic System.
Aug 18 09:21:46 hopper systemd[4119046]: Reached target default.target - Main User Target.
Aug 18 09:24:42 hopper pvedaemon[4118906]: ERROR: Backup of VM 104 failed - backup write data failed: command error: write_data upload error: pipelined request failed: failed to upload chunk to s3 backend
Aug 18 09:24:42 hopper pvedaemon[4118906]: INFO: Backup job finished with errors
Aug 18 09:24:42 hopper perl[4118906]: notified via target `mail-to-root`
Aug 18 09:24:42 hopper pvedaemon[4118906]: job errors
Aug 18 09:24:42 hopper pvedaemon[4113919]: <root@pam> end task UPID:hopper:003ED97A:0635F1A9:68A2E28F:vzdump:104:root@pam: job errors
 
Cache is 100 GB, 0% usage. No limits, as I don't use the free tier. This is everything from journalctl:
Can you check the output of find <path-to-cache-store>/.chunks -type f -print | wc -l? I would expect to see at least some chunks in the local cache store.

Also, please check if setting a put request limit as described here helps to work around the issue.
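As a rough sketch of what that could look like: assuming the option is the put-rate-limit mentioned later in this thread and that it can be added to the endpoint entry in s3.cfg (the exact key name and placement are an assumption here and should be checked against the documentation; endpoint name and credentials below are placeholders):

Code:
s3-endpoint: my-endpoint
        access-key xxxxxxxx
        secret-key xxxxxxx
        region us-west-004
        endpoint {{bucket}}.s3.{{region}}.backblazeb2.com
        put-rate-limit 1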
 
Also, please check if setting a put request limit as described here helps to work around the issue.
I tried this with the same result. I think it might have run slightly longer, but that could just be because it was uploading more slowly (if that makes sense).

find /s3/.chunks -type f -print | wc -l
28
 
What is your expected max throughput to the s3 storage provider?
Limits:
Uploads are limited to 50 requests and 100MB per second, while downloads are limited to 20 requests and 25MB per second for users with less than 10TB of data.

Just tried with a rate limit of 10 and still got the errors :(
 
Uploads are limited to 50 requests and 100MB per second, while downloads are limited to 20 requests and 25MB per second for users with less than 10TB of data.
Where/how do you enforce these limits?

Just tried with a rate limit of 10 and still got the errors :(
I assume you mean 10 requests per second? Or do you mean a rate-in of 10M per second? Please also test with 1 request per second, set via the put-rate-limit, just to be sure...
 
Where/how do you enforce these limits?
I am not sure what you mean by this; these are the limits set by Backblaze B2.

Please also test with 1 request per second, set via the put-rate-limit
Just set this; same issue.

From the backup:

Code:
INFO: starting new backup job: vzdump 104 --mode snapshot --remove 0 --notes-template '{{guestname}}' --notification-mode notification-system --storage Neo-B2-Backups --node hopper
INFO: Starting Backup of VM 104 (qemu)
INFO: Backup started at 2025-08-18 11:11:23
INFO: status = running
INFO: VM Name: Leia-WinSvr
INFO: include disk 'virtio0' 'local-zfs:vm-104-disk-1' 70G
INFO: exclude disk 'virtio1' 'local-SATA:vm-104-disk-0' (backup=no)
INFO: exclude disk 'virtio2' 'local-zfs:vm-104-disk-2' (backup=no)
INFO: include disk 'efidisk0' 'local-zfs:vm-104-disk-0' 1M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/104/2025-08-18T10:11:23Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'b6438486-0bd9-492e-8c23-f4a5ed786b19'
INFO: resuming VM again
INFO: efidisk0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO: virtio0: dirty-bitmap status: existing bitmap was invalid and has been cleared
INFO:   0% (428.0 MiB of 70.0 GiB) in 3s, read: 142.7 MiB/s, write: 134.7 MiB/s
INFO:   0% (456.0 MiB of 70.0 GiB) in 3m 26s, read: 141.2 KiB/s, write: 141.2 KiB/s
ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: failed to upload chunk to s3 backend
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 104 failed - backup write data failed: command error: write_data upload error: pipelined request failed: failed to upload chunk to s3 backend
INFO: Failed at 2025-08-18 11:14:55
INFO: Backup job finished with errors
INFO: notified via target `mail-to-root`
TASK ERROR: job errors
 
I tried to reproduce your issue with a datastore backed by Backblaze and backed up a VM with about 600 MiB of chunks, but the backup ran without issues.

Please try the following (a shell sketch of these steps follows below):
  • set the datastore into maintenance mode offline (best via the WebUI under <datastore> > Options > Maintenance Mode)
  • clear all currently cached chunks by running find <path-to-store-cache>/.chunks/ -type f -delete
  • clear the datastore maintenance mode again
  • retry the backup
  • check and post the systemd journal for the line that reports the cache capacity, e.g. Using datastore cache with capacity 2981 for store backblaze-b2
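The chunk cleanup and the capacity check translate to roughly the following shell commands (a sketch; the cache path placeholder and the proxmox-backup-proxy unit name are taken from this thread, adjust to your setup):

Code:
# With the datastore in offline maintenance mode (set via the WebUI as above),
# drop all locally cached chunks:
find <path-to-store-cache>/.chunks/ -type f -delete

# After clearing maintenance mode and retrying the backup, look for the
# reported cache capacity in the journal:
journalctl -u proxmox-backup-proxy | grep 'Using datastore cache with capacity'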
 
I tried to reproduce your issue with a datastore backed by Backblaze and backed up a VM with about 600 MiB of chunks, but the backup ran without issues. Please try the following: [...]
OK, I ran it again after following the instructions.

This is the output: neo proxmox-backup-proxy[48866]: Using datastore cache with capacity 5926 for store B2Backups

My s3.cfg looks like this, in case I've missed something obvious.

Code:
s3-endpoint: NeoPBSHome
        access-key xxxxxxxx
        endpoint {{bucket}}.s3.{{region}}.backblazeb2.com
        provider-quirks skip-if-none-match-header
        region us-west-004
        secret-key xxxxxxx

Same issue
 
Okay, after severely limiting my upload speed to about 20 kbit/s, I'm now able to reproduce this issue; my previous limits were simply too high. It seems that because of the slow speed, the chunk upload fails due to the request timeout. Unfortunately, there is no easy workaround for the time being.
 
Okay, after severely limiting my upload speed to about 20 kbit/s, I'm now able to reproduce this issue; my previous limits were simply too high. It seems that because of the slow speed, the chunk upload fails due to the request timeout. Unfortunately, there is no easy workaround for the time being.
That's strange; I have 18 Mbps+ upload, so it shouldn't be a speed issue unless Backblaze is doing something to the traffic :(
 
That's strange; I have 18 Mbps+ upload, so it shouldn't be a speed issue unless Backblaze is doing something to the traffic :(
But the traffic is shared; it might be that you cannot get the same bandwidth continuously. For a 4 MiB chunk, the average upload rate would have to drop below about 70 kB/s to run into the timeout, and since chunks are compressed this bandwidth threshold is even lower. Nevertheless, given my reproducer I'm rather confident that the request timeout is the issue, and I am working on a fix for that.
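For context, the roughly 70 kB/s figure is consistent with dividing the maximum fixed chunk size by a request timeout of about 60 seconds (the timeout value is inferred here, not confirmed):

Code:
4 MiB = 4 * 1024 * 1024 B = 4194304 B
4194304 B / 60 s ≈ 69905 B/s ≈ 70 kB/s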
 
For reference: I got positive feedback that setting the put-rate-limit helps at least in some cases; please see https://forum.proxmox.com/threads/s3-upload-fails-with-large-vms-cache-issue.169432/post-793600 for details. So do try that if you run into this issue.
Thanks Chris, and sorry I've been on leave from work, so I couldn't get back to this thread in a timely fashion. Testing this now, as my issues seem to be correlated with VM size; usually the VMs above 1 TB simply fail to sync to Wasabi.

EDIT: Yes, the put rate limit definitely seems to resolve the issue. The sync has been going for 22 hours with no failures, and I can see the VMs starting to populate in the datastore.
 