PBS4.1 with DeuxFleurs Garage S3 backend storage - feedback and tweaks

dot.
Apr 21, 2026
Hello dear Community,

I am running a PBS 4.1 server (community repo) which stores backups from a PVE 9.1 HA cluster (also community repo) on an on-prem DeuxFleurs Garage S3 storage cluster. The PBS host has a local cache on a RAID5 array with about 130 GB of usable space.

I experienced a weird behaviour: small-ish VMs (50-70 GB) were backed up perfectly, but all backups of large VMs (1 TB+) eventually failed. This was consistent.

All physical hosts are on the same site, physically connected over a fast network that is definitely not a bottleneck of any kind.

What this forum thread is NOT:
  • A complaint of any sort. S3 support is still labelled "tech preview" and should be treated as such
  • A "pls-fix-asap" request to the dev team. Again, S3 support is a tech preview
  • An attempt to get free support because I'm too cheap to buy commercial support (okay, my infrastructure is a private home deployment, but anyway...)
I just wanted to share the workaround I implemented; it might help others...

So, there are 2 tweaks to implement:
  • On PBS, you need to enable the skip-if-none-match-header provider quirk for the endpoint. From the CLI, enter:
proxmox-backup-manager s3 endpoint update garage-s3-ep --provider-quirks skip-if-none-match-header

  • On the Garage S3 gateway node (the one that accepts the client requests), you have to modify its nginx config file (/etc/nginx/sites-available/my_s3_example.conf) so that the proxy streams requests instead of buffering them, by changing the following section:
Code:
location / {
    proxy_pass http://my_S3_gateway_node;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # Never spool large responses to a temp file on disk
    proxy_max_temp_file_size 0;
    # Raise the timeouts well beyond nginx's 60s defaults so
    # long-running chunk uploads don't trigger a 504
    proxy_connect_timeout 600s;
    proxy_send_timeout 3600s;
    proxy_read_timeout 3600s;
    proxy_http_version 1.1;
    # Stream request bodies to Garage instead of buffering them first
    proxy_request_buffering off;
}
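For the first tweak, you can double-check that the quirk actually landed in the endpoint config. The `list` subcommand and the `--output-format` switch below follow the usual proxmox-backup-manager conventions; adjust if your version's CLI differs:

```shell
# Show the configured S3 endpoints, including provider quirks.
# The endpoint name "garage-s3-ep" is from the example above.
proxmox-backup-manager s3 endpoint list --output-format json-pretty
```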


After this, don't forget to restart the affected services (or reboot if you're unsure), and backups to S3 will work like a charm!
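The restart step can be sketched as follows. The service names assume a default nginx install on the gateway node and a default PBS install; adjust them to your setup:

```shell
# On the Garage gateway node: validate the new nginx config first,
# then reload without dropping in-flight connections.
nginx -t && systemctl reload nginx

# On the PBS host: restart the backup proxy so the endpoint
# change is picked up.
systemctl restart proxmox-backup-proxy.service
```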

Enjoy!

Denis
I experienced a weird behaviour in that small-ish (50-70GB VMs) got backed up perfectly, but all backups of large (1TB+ VMs) eventually failed. This was consistent.
Please post the output of proxmox-backup-manager version --verbose, the task logs from the backup job from both, the PVE and PBS host as well as the systemd journal from the failing backup job timeframe.
Hi Chris,

Unfortunately I upgraded both the PVE and PBS to the latest patchlevel (including new Kernel) yesterday: PBS is now on 4.1.7, PVE on 9.1.8.

What do you think: wouldn't it be more useful to re-run a proper test with these latest patches applied? I can easily remove the tweaks I added and re-run a backup job that was failing before. Just tell me what you'd like me to do; I'd be glad to provide whatever you need.


Here's the current PBS patch level anyway (running now), in case that still matters:

Code:
proxmox-backup                      4.0.0         running kernel: 6.17.13-3-pve
proxmox-backup-server               4.1.7-2       running version: 4.1.7
proxmox-kernel-helper               9.0.4
proxmox-kernel-6.17                 6.17.13-3
proxmox-kernel-6.17.13-3-pve-signed 6.17.13-3
proxmox-kernel-6.17.13-2-pve-signed 6.17.13-2
proxmox-kernel-6.14                 6.14.11-6
proxmox-kernel-6.14.11-6-pve-signed 6.14.11-6
proxmox-kernel-6.14.8-2-pve-signed  6.14.8-2
ifupdown2                           3.3.0-1+pmx12
libjs-extjs                         7.0.0-5
proxmox-backup-docs                 4.1.7-2
proxmox-backup-client               4.1.7-2
proxmox-mail-forward                1.0.2
proxmox-mini-journalreader          1.6
proxmox-offline-mirror-helper       0.7.3
proxmox-widget-toolkit              5.1.9
pve-xtermjs                         5.5.0-3
smartmontools                       7.4-pve1
zfsutils-linux                      2.4.1-pve1

Here's a quick excerpt from the logs showing the backup job failing:

Code:
2026-04-18T03:40:28+02:00: Caching of chunk c614e5f1168f0f605...
2026-04-18T03:40:28+02:00: Caching of chunk 70e5a767e78771513f7...
2026-04-18T03:40:29+02:00: Caching of chunk c3465ff081627a380362a...
2026-04-18T03:40:29+02:00: Upload new chunk c7226ad499616be814...
2026-04-18T03:40:29+02:00: Upload new chunk adf04066d01eaeffd87...
2026-04-18T03:40:29+02:00: Upload new chunk 7c1f03cc5023a40d38a2...
2026-04-18T03:40:29+02:00: Upload new chunk badf4ff9dc3c6dda4d...
2026-04-18T03:40:29+02:00: Upload new chunk d35c6a1c30ac388c1c...
2026-04-18T03:40:31+02:00: Caching of chunk 2d1012cd52837e85bcbaa...
2026-04-18T03:42:35+02:00: <html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx</center>
</body>
</html>

2026-04-18T03:42:40+02:00: <html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx</center>
</body>
</html>

Cheers,

Denis
I agree. Give me a few days; I'll try to set up a new PBS over the weekend and retest properly. I'll post the results here.