PBS on S3 / Local chunks clarifications

jaysee2607

Member
Aug 1, 2023
Hello,

while validating a migration from local storage to S3 with PBS 4.2, I need some clarification about the local .chunks that stay on the PBS server.

After a 6 TB backup, the .chunks directory of the datastore holds 1.7 TB.

I understand there are some caching needs, but this is far too much... I thought only indexes would be cached, not chunks.

If this is really required, the ability to limit the size of the local chunk cache would be a great feature. My goal is to use a server with small disks to allow large backups on S3, which currently isn't possible.

regards
 
Hi,
the local datastore cache caches chunks as well, not just metadata files. It is greedy and consumes as much space as is available on the filesystem it resides on. If you want to limit its usage, the recommendation is to use a dedicated filesystem/partition/disk or to set quotas. See https://pbs.proxmox.com/docs/storage.html#datastores-with-s3-backend.
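A minimal sketch of the dedicated-filesystem approach, assuming a loop-mounted ext4 image (the paths and the 3 GB size here are hypothetical, not from this thread):

```shell
# Create a fixed-size image file; its size becomes a hard cap for the cache.
IMG=/tmp/pbs-cache.img
truncate -s 3G "$IMG"      # sparse file, allocates blocks only as data is written
mkfs.ext4 -Fq "$IMG"       # -F lets mkfs.ext4 format a regular file directly
# The remaining steps need root:
#   mount -o loop /tmp/pbs-cache.img /mnt/pbs-cache
#   chown backup:backup /mnt/pbs-cache
# Then create the S3-backed datastore with /mnt/pbs-cache as its base path,
# so the cache can never grow beyond the image size.
```

A filesystem quota on an existing mount achieves the same cap without a separate image.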

However, I do agree that it makes sense to add a tuning knob to make this configurable in a future version of PBS; please open a feature request for this at https://bugzilla.proxmox.com.
 
Thanks for pointing out the documentation on this.

I just tried a loop mount with a 3 GB disk image, but this fails:
Code:
INFO:  15% (1.1 GiB of 7.0 GiB) in 3s, read: 368.0 MiB/s, write: 362.7 MiB/s
INFO:  20% (1.4 GiB of 7.0 GiB) in 6s, read: 122.7 MiB/s, write: 122.7 MiB/s
INFO:  20% (1.4 GiB of 7.0 GiB) in 40s, read: 0 B/s, write: 0 B/s
ERROR: backup write data failed: command error: write_data upload error: pipelined request failed: inserting chunk on store 'TestS3' failed for f6d7f16eaf37671e2558f557a7f52c2dfee1240a1a55767d4aebc513fdca1d54 - write failed: No space left on device (os error 28)
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 101 failed - backup write data failed: command error: write_data upload error: pipelined request failed: inserting chunk on store 'TestS3' failed for f6d7f16eaf37671e2558f557a7f52c2dfee1240a1a55767d4aebc513fdca1d54 - write failed: No space left on device (os error 28)
INFO: Failed at 2026-04-30 14:35:37
INFO: Backup job finished with errors

3 GB is just for testing; I can accept that this would lead to poor performance, but instead it fails outright...
 
How did you set up the local cache? Did you create a new datastore on it? Please share the output of df -h. Available cache slots are calculated based on the available storage space during datastore instantiation, so you must at least put the datastore into maintenance mode offline first; otherwise the cache slots are not recalculated.
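For reference, a sketch of toggling maintenance mode from the CLI with proxmox-backup-manager, using the 'TestS3' store name from the log above (check the PBS documentation for your version):

```shell
# Put the datastore into offline maintenance mode, so the cache slots are
# recalculated on the next instantiation:
proxmox-backup-manager datastore update TestS3 --maintenance-mode offline
# Leave maintenance mode again by deleting the property:
proxmox-backup-manager datastore update TestS3 --delete maintenance-mode
```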
 
I just created a fresh datastore, put it in maintenance mode, copied the .chunks directory contents to a disk image using rsync, moved the original .chunks to .chunks.ori, mounted my disk image, ran chown backup:backup on the freshly mounted directory, took the datastore out of maintenance mode (by the way, how do I leave maintenance mode using the CLI?), and then tried to back up some CTs/VMs to it.

Code:
df -h
/dev/loop0                                                              2.9G  2.8G   28K 100% /home/backup/pbs/test-s3/.chunks
 
OK, now I understand a bit more :)

I just retried, this time putting the whole datastore on the disk image (not only the .chunks dir), and it seems to work: a 3 GB disk image handled a 24 GB data backup, resulting in 5 GB of S3 usage.

Thanks
 
Yes, the datastore base path is what counts, not the chunk store path. Note, however, that the local datastore cache is expected to be persistent.
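Concretely, with the paths from this thread (the image file name is hypothetical), the distinction sketched as mount commands is:

```shell
# Limiting only the chunk store path does not work: the free-space check
# during datastore instantiation does not see the small filesystem.
#   mount -o loop cache.img /home/backup/pbs/test-s3/.chunks
# Mount the size-limited filesystem at the datastore base path instead,
# so the cache slots are derived from the small filesystem:
#   mount -o loop cache.img /home/backup/pbs/test-s3
```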
 