Rclone serve S3 / GC fails - Deadline has elapsed

flo_m
Aug 20, 2025
Hi!

I'm serving my pCloud storage via rclone as an S3 store.
This seems to be working fine: the sync jobs upload without problems, and verification and prune are working as well.

But the Garbage Collection Job fails constantly in phase 2:
Code:
Proxmox Backup Server 4.0.14
2025-08-20T12:26:21+02:00: starting garbage collection on store pcloud
2025-08-20T12:26:22+02:00: Access time update check successful, proceeding with GC.
2025-08-20T12:26:22+02:00: Using access time cutoff 1d 5m, minimum access time is 2025-08-19T10:21:21Z
2025-08-20T12:26:22+02:00: Start GC phase1 (mark used chunks)
2025-08-20T12:26:22+02:00: marked 1% (2 of 132 index files)
2025-08-20T12:26:22+02:00: marked 2% (3 of 132 index files)
2025-08-20T12:26:22+02:00: marked 3% (4 of 132 index files)
2025-08-20T12:26:22+02:00: marked 4% (6 of 132 index files)
2025-08-20T12:26:22+02:00: marked 5% (7 of 132 index files)
2025-08-20T12:26:22+02:00: marked 6% (8 of 132 index files)
2025-08-20T12:26:22+02:00: marked 7% (10 of 132 index files)
2025-08-20T12:26:22+02:00: marked 8% (11 of 132 index files)
2025-08-20T12:26:22+02:00: marked 9% (12 of 132 index files)
2025-08-20T12:26:22+02:00: marked 10% (14 of 132 index files)
2025-08-20T12:26:22+02:00: marked 11% (15 of 132 index files)
2025-08-20T12:26:22+02:00: marked 12% (16 of 132 index files)
2025-08-20T12:26:22+02:00: marked 13% (18 of 132 index files)
2025-08-20T12:26:22+02:00: marked 14% (19 of 132 index files)
2025-08-20T12:26:22+02:00: marked 15% (20 of 132 index files)
2025-08-20T12:26:22+02:00: marked 16% (22 of 132 index files)
2025-08-20T12:26:22+02:00: marked 17% (23 of 132 index files)
2025-08-20T12:26:22+02:00: marked 18% (24 of 132 index files)
2025-08-20T12:26:22+02:00: marked 19% (26 of 132 index files)
2025-08-20T12:26:22+02:00: marked 20% (27 of 132 index files)
2025-08-20T12:26:22+02:00: marked 21% (28 of 132 index files)
2025-08-20T12:26:22+02:00: marked 22% (30 of 132 index files)
2025-08-20T12:26:22+02:00: marked 23% (31 of 132 index files)
2025-08-20T12:26:22+02:00: marked 24% (32 of 132 index files)
2025-08-20T12:26:22+02:00: marked 25% (33 of 132 index files)
2025-08-20T12:26:22+02:00: marked 26% (35 of 132 index files)
2025-08-20T12:26:22+02:00: marked 27% (36 of 132 index files)
2025-08-20T12:26:22+02:00: marked 28% (37 of 132 index files)
2025-08-20T12:26:22+02:00: marked 29% (39 of 132 index files)
2025-08-20T12:26:22+02:00: marked 30% (40 of 132 index files)
2025-08-20T12:26:22+02:00: marked 31% (41 of 132 index files)
2025-08-20T12:26:22+02:00: marked 32% (43 of 132 index files)
2025-08-20T12:26:22+02:00: marked 33% (44 of 132 index files)
2025-08-20T12:26:22+02:00: marked 34% (45 of 132 index files)
2025-08-20T12:26:22+02:00: marked 35% (47 of 132 index files)
2025-08-20T12:26:22+02:00: marked 36% (48 of 132 index files)
2025-08-20T12:26:22+02:00: marked 37% (49 of 132 index files)
2025-08-20T12:26:22+02:00: marked 38% (51 of 132 index files)
2025-08-20T12:26:22+02:00: marked 39% (52 of 132 index files)
2025-08-20T12:26:22+02:00: marked 40% (53 of 132 index files)
2025-08-20T12:26:22+02:00: marked 41% (55 of 132 index files)
2025-08-20T12:26:22+02:00: marked 42% (56 of 132 index files)
2025-08-20T12:26:22+02:00: marked 43% (57 of 132 index files)
2025-08-20T12:26:22+02:00: marked 44% (59 of 132 index files)
2025-08-20T12:26:22+02:00: marked 45% (60 of 132 index files)
2025-08-20T12:26:22+02:00: marked 46% (61 of 132 index files)
2025-08-20T12:26:22+02:00: marked 47% (63 of 132 index files)
2025-08-20T12:26:22+02:00: marked 48% (64 of 132 index files)
2025-08-20T12:26:22+02:00: marked 49% (65 of 132 index files)
2025-08-20T12:26:22+02:00: marked 50% (66 of 132 index files)
2025-08-20T12:26:22+02:00: marked 51% (68 of 132 index files)
2025-08-20T12:26:22+02:00: marked 52% (69 of 132 index files)
2025-08-20T12:26:22+02:00: marked 53% (70 of 132 index files)
2025-08-20T12:26:22+02:00: marked 54% (72 of 132 index files)
2025-08-20T12:26:22+02:00: marked 55% (73 of 132 index files)
2025-08-20T12:26:22+02:00: marked 56% (74 of 132 index files)
2025-08-20T12:26:22+02:00: marked 57% (76 of 132 index files)
2025-08-20T12:26:22+02:00: marked 58% (77 of 132 index files)
2025-08-20T12:26:22+02:00: marked 59% (78 of 132 index files)
2025-08-20T12:26:22+02:00: marked 60% (80 of 132 index files)
2025-08-20T12:26:22+02:00: marked 61% (81 of 132 index files)
2025-08-20T12:26:22+02:00: marked 62% (82 of 132 index files)
2025-08-20T12:26:22+02:00: marked 63% (84 of 132 index files)
2025-08-20T12:26:22+02:00: marked 64% (85 of 132 index files)
2025-08-20T12:26:22+02:00: marked 65% (86 of 132 index files)
2025-08-20T12:26:22+02:00: marked 66% (88 of 132 index files)
2025-08-20T12:26:22+02:00: marked 67% (89 of 132 index files)
2025-08-20T12:26:22+02:00: marked 68% (90 of 132 index files)
2025-08-20T12:26:22+02:00: marked 69% (92 of 132 index files)
2025-08-20T12:26:23+02:00: marked 70% (93 of 132 index files)
2025-08-20T12:26:23+02:00: marked 71% (94 of 132 index files)
2025-08-20T12:26:23+02:00: marked 72% (96 of 132 index files)
2025-08-20T12:26:23+02:00: marked 73% (97 of 132 index files)
2025-08-20T12:26:23+02:00: marked 74% (98 of 132 index files)
2025-08-20T12:26:23+02:00: marked 75% (99 of 132 index files)
2025-08-20T12:26:23+02:00: marked 76% (101 of 132 index files)
2025-08-20T12:26:23+02:00: marked 77% (102 of 132 index files)
2025-08-20T12:26:23+02:00: marked 78% (103 of 132 index files)
2025-08-20T12:26:23+02:00: marked 79% (105 of 132 index files)
2025-08-20T12:26:23+02:00: marked 80% (106 of 132 index files)
2025-08-20T12:26:23+02:00: marked 81% (107 of 132 index files)
2025-08-20T12:26:23+02:00: marked 82% (109 of 132 index files)
2025-08-20T12:26:23+02:00: marked 83% (110 of 132 index files)
2025-08-20T12:26:23+02:00: marked 84% (111 of 132 index files)
2025-08-20T12:26:23+02:00: marked 85% (113 of 132 index files)
2025-08-20T12:26:23+02:00: marked 86% (114 of 132 index files)
2025-08-20T12:26:23+02:00: marked 87% (115 of 132 index files)
2025-08-20T12:26:23+02:00: marked 88% (117 of 132 index files)
2025-08-20T12:26:23+02:00: marked 89% (118 of 132 index files)
2025-08-20T12:26:23+02:00: marked 90% (119 of 132 index files)
2025-08-20T12:26:23+02:00: marked 91% (121 of 132 index files)
2025-08-20T12:26:23+02:00: marked 92% (122 of 132 index files)
2025-08-20T12:26:23+02:00: marked 93% (123 of 132 index files)
2025-08-20T12:26:23+02:00: marked 94% (125 of 132 index files)
2025-08-20T12:26:23+02:00: marked 95% (126 of 132 index files)
2025-08-20T12:26:23+02:00: marked 96% (127 of 132 index files)
2025-08-20T12:26:23+02:00: marked 97% (129 of 132 index files)
2025-08-20T12:26:23+02:00: marked 98% (130 of 132 index files)
2025-08-20T12:26:23+02:00: marked 99% (131 of 132 index files)
2025-08-20T12:26:23+02:00: marked 100% (132 of 132 index files)
2025-08-20T12:26:23+02:00: Start GC phase2 (sweep unused chunks)
2025-08-20T12:27:23+02:00: queued notification (id=48bec03d-2dac-4ee0-a27f-6905535a5415)
2025-08-20T12:27:23+02:00: TASK ERROR: failed to list chunk in s3 object store: request timeout: deadline has elapsed

Is this because of the limitation specified here? https://rclone.org/commands/rclone_serve_s3/#limitations
When using ListObjects, rclone will use / when the delimiter is empty. This reduces backend requests with no effect on most operations, but if the delimiter is something other than / and non-empty, rclone will do a full recursive search of the backend, which can take some time.


Can we perhaps increase the timeout?
 
Hi, yes, the PBS S3 client does a ListObjectsV2 call to the API with a prefix of /<store-name>/.chunks in order to list chunk objects only. This list API call currently has a timeout of 60 seconds.
But I see that rclone serve s3 is still an experimental way of serving a remote storage via an S3 API. So this probably depends on a lot of factors; simply bumping the timeout might work for some cases but not for all.

Could you please open a dedicated issue at https://bugzilla.proxmox.com for this so it gets properly tracked, and link to this thread, so we can evaluate this further.

Further, if possible, please try to identify how many chunk objects you have in the datastore and how long such a listing takes (e.g. by listing the contents via some other S3 client), and provide more detailed information about the remote storage being served via rclone serve (disks, filesystem, protocol, etc.).
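For reference, a rough chunk-object count and listing time could be gathered with a generic S3 client along these lines (just a sketch; the endpoint URL, bucket name and prefix are placeholders for your rclone serve s3 setup):
Code:
# time a full recursive listing of the chunk prefix and count the objects
# (endpoint, bucket and prefix are placeholders)
time aws s3 ls s3://pbs/pcloud/.chunks/ --recursive \
    --endpoint-url http://127.0.0.1:8080 | wc -l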
 
Hey,

I'm actually having the same issue, but with MinIO. On my side, listing the contents with the mc CLI (the client used to query MinIO or other S3 stores) took well above 60 s, at around 10 minutes.

Is there any way to override that timeout?

Code:
mc ls -r minio-internal/pbs  1.90s user 0.73s system 0% cpu 9:49.95 total

This is similar to what is described here, just another occurrence of it.

https://forum.proxmox.com/threads/rclone-serve-s3-gc-fails-deadline-has-elapsed.170037/
 
Is there any way to override that timeout?
No, this is currently not configurable, but we will look into what makes most sense here.

Are you using a self-hosted MinIO instance? And if so, what disk and filesystem are you using?

But more generally: please consider using fast local disks as your primary backup storage; the S3 integration makes most sense for long-term and offsite backups. It does not make much sense to use a not-so-performant, locally self-hosted S3 object store when you can get much better performance by other means. A regular network share will most likely outperform your S3 object store.
 
Yes indeed, self-hosted.
It's backed by an HDD.

I'll try switching to a local SSD/NVMe datastore then. I was looking into a way of replicating that backup, but I guess I could use the MinIO datastore to sync with that local storage, in case of disk failure? Would I have the same timeout issues?

Thanks for your help
 
I'll try switching to a local SSD/NVMe datastore then. I was looking into a way of replicating that backup, but I guess I could use the MinIO datastore to sync with that local storage, in case of disk failure? Would I have the same timeout issues?
Not sure if I fully understood your question, but a sync job to a removable or another local (no S3 involved) datastore might already be enough to give you backup replication, and it would most likely be more performant as well. Or sync to a second, remote PBS instance?
 
I have an off-site local PBS backup and thus want to additionally sync that to my S3 storage.

I already created a post on the rclone forums (you can see it here: https://forum.rclone.org/t/rclone-serve-s3-listobjects-timeout/52304/3).

I tried the ListObjects call with other S3 clients and it is super fast with those. So I enabled debug logging and saw that the only difference is that the fast clients send a delimiter of "/", which PBS does not send.

Could you look into that, in addition to letting us change the timeout manually?
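For reference, the comparison can be reproduced with a generic S3 client, e.g. along these lines (a sketch; endpoint, bucket and prefix are placeholders):
Code:
# full listing without a delimiter (roughly what PBS requests) -- slow on rclone serve s3
time aws s3api list-objects-v2 --endpoint-url http://127.0.0.1:8080 \
    --bucket pbs --prefix pcloud/.chunks/

# same prefix with a "/" delimiter -- keys get grouped into common prefixes and the call is fast
time aws s3api list-objects-v2 --endpoint-url http://127.0.0.1:8080 \
    --bucket pbs --prefix pcloud/.chunks/ --delimiter /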
 
I have an off-site local PBS backup and thus want to additionally sync that to my S3 storage.

I already created a post on the rclone forums (you can see it here: https://forum.rclone.org/t/rclone-serve-s3-listobjects-timeout/52304/3).

I tried the ListObjects call with other S3 clients and it is super fast with those. So I enabled debug logging and saw that the only difference is that the fast clients send a delimiter of "/", which PBS does not send.
Thanks, this is an interesting finding! I will have a look at whether simply sending the delimiter as well is an option here; other S3-compatible providers should support this too.

Could you look into that, in addition to letting us change the timeout manually?
No, exposing the timeout as a configuration option is not the best approach here. After all, it was this low timeout that allowed us to identify the possible speedup from setting the delimiter.
 
Well, now that I checked the implications of setting the delimiter, I see that this is not feasible... The delimiter causes object keys that are identical after the prefix, up to the first occurrence of the delimiter, to be grouped into a common prefix in the result. So this does not work, as the PBS client needs the full chunk object list. That also explains the significant speed difference in listing. We might reconsider bumping the request timeout after all, but this requires some evaluation of how large it should actually be, considering a slow storage device backing the S3 store.
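To illustrate the grouping effect (a minimal sketch using boto3; the endpoint, bucket and the <store>/.chunks/<prefix>/<digest> key layout are assumptions used only for illustration):
Code:
import boto3

# endpoint, bucket and credentials are placeholders
s3 = boto3.client("s3", endpoint_url="http://127.0.0.1:8080")

# Without a delimiter every chunk object key is returned (page by page),
# which is what garbage collection needs, but this can be slow on some backends.
flat = s3.list_objects_v2(Bucket="pbs", Prefix="pcloud/.chunks/")
print(len(flat.get("Contents", [])))            # individual chunk objects (first page)

# With Delimiter="/" the keys collapse into common prefixes such as
# "pcloud/.chunks/0000/", so the individual chunk objects are not listed at all.
grouped = s3.list_objects_v2(Bucket="pbs", Prefix="pcloud/.chunks/", Delimiter="/")
print(len(grouped.get("CommonPrefixes", [])))   # only the grouped prefixes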

For the time being, using the VFS with caching should be the way to go when using rclone; that significantly speeds up the listing.
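Something along these lines, for example (a sketch; the remote name, listen address and key pair are placeholders, see the rclone serve s3 documentation for the VFS flags):
Code:
rclone serve s3 pcloud:pbs-bucket \
    --auth-key ACCESS_KEY,SECRET_KEY \
    --addr 127.0.0.1:8080 \
    --vfs-cache-mode full \
    --dir-cache-time 72h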
 
Sadly, caching does not help with rclone here, as it still does not return the list within the first 60 seconds...
 
What is the rclone remote you are serving via S3?

Independent of the request timeout issue:
I would suggest reconsidering your setup in general; if the listing is already that slow, other operations might be even slower... E.g., what about sync or restore speeds?
 
Strangely, sync is super fast, as are verify and all other operations. Only GC has issues...
 
Likely because the chunks are still cached locally; once the cached contents are no longer available, they will have to be refetched. Nonsense, the verify task does bypass the cache. So the issue is with the listing of the chunks during GC only.
 
What is the rclone remote you are serving via S3?
Can you provide more information here? Also, how long does a recursive listing of the chunk folder contents take when using your alternative S3 client? Not just the top-level folder listing...

Edit: It would be nice to have some hard numbers here for evaluation. Also, the number of actual chunk objects is of interest.
 
Not sure if I fully understood your question, but a sync job to a removable or another local (no S3 involved) datastore might already be enough to give you backup replication, and it would most likely be more performant as well. Or sync to a second, remote PBS instance?
I meant that my only remote backup option is S3, hence I wanted to sync my PBS there, even if only weekly.