Increase default chunk size in PBS for better rclone uploads

Peter123

We are in the process of testing PBS (great product: thanks for that!).

It is important for us to be able to copy the backup data from the datastore to a cloud storage provider via rclone afterwards.

It turned out that, due to the relatively small chunk size, the upload to cloud storage providers runs at only a fraction of the possible upload rate. The fastest is the upload to Wasabi at approx. 40 MBytes/s; the slowest is GDrive, where a maximum of 2 (or 3?) files can be uploaded per second, so the average upload rate is about 3 MBytes/s.

With files several GBytes in size, we consistently achieve between 80 and 120 MBytes/s upload in practice.

Is there a way to increase the default chunk size in PBS (e.g. to at least 10 MBytes), so that the subsequent rclone upload would also be many times faster?
 
This bothers me as well: a simple 60 GB backup already creates 65,000 directories with over 133,000 files.

Due to the high file count, bandwidth is limited by IOPS, which for HDDs results in terrible performance.

On the other hand, they won't increase the chunk size, as this would result in exponential growth due to how deduplication works.


Instead of syncing the PBS chunks directly, use ZFS as the backup storage and send incremental snapshots.

This way the sync happens at block level and takes seconds instead of hours.

There are also projects to store ZFS snapshots on GDrive, S3, etc.
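
A minimal sketch of what that could look like (the dataset names and the receive target are just examples, adjust to your own setup):

Code:
# initial full send of the dataset backing the PBS datastore (names are examples)
zfs snapshot tank/pbs-datastore@base
zfs send tank/pbs-datastore@base | ssh backup-host zfs receive tank/pbs-copy

# later: incremental sends, only changed blocks cross the wire
zfs snapshot tank/pbs-datastore@daily1
zfs send -i @base tank/pbs-datastore@daily1 | ssh backup-host zfs receive tank/pbs-copy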
 
This bothers me as well: a simple 60 GB backup already creates 65,000 directories with over 133,000 files.

Due to the high file count, bandwidth is limited by IOPS, which for HDDs results in terrible performance.

if you are using ZFS, a special vdev pair with fast SSDs/NVME devices will alleviate that. alternatively, using relatime instead of atime also helps a bit.
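
for example (pool and device names below are placeholders; the special vdev should be mirrored, since losing it means losing the pool):

Code:
# add a mirrored special vdev to hold metadata (and optionally small blocks)
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# or, without extra hardware: only update atime relative to mtime/ctime
zfs set atime=on tank/pbs-datastore
zfs set relatime=on tank/pbs-datastore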

On the other hand, they won't increase the chunk size, as this would result in exponential growth due to how deduplication works.

yes, the current settings were chosen as a kind of 'sweet spot' between speed/performance and efficiency/deduplication.
 
And what would be your suggestion for setting up a cloud backup of the PBS datastore with rclone?

We have found out that with almost all cloud storage providers, uploading these tens of thousands of small files leads to hitting the quota limits within a short time (and yes, we use our own API keys with Azure, GCS, etc.).
 
would it be better to reference fewer chunks per backup, but upload more chunks/data in total because you cannot deduplicate anymore? I doubt that ;) but you can try it yourself - nobody is stopping you from storing the whole datastore, or each of the .chunks subdirs, as individual objects instead of each chunk...
 
Unfortunately, I don't fully understand the suggestion: are there ways to save the backups in PBS other than as each individual chunk in the sub-folders?
 
if you want to move the .chunks folder off-site (which I assume is what you want to do with rclone?), you can do so as-is (which you don't like, because you end up having lots of files == objects), or you can transform it - e.g., make each subdir of the .chunks directory a tar or zip archive and put that into the cloud as one object each, or take groups of 5 chunks as one archive file/object, or even the full .chunks dir as a single object. you'll then see that having bigger objects just skews the balance in the other direction - you will now have hardly any deduplication anymore, and need to transfer a lot of data at each sync instead of just for the initial one...

there is currently no alternative datastore format built-in. there is an issue tracking S3 integration, but work on that has not started yet.
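
a rough sketch of the 'one archive per .chunks subdir' variant, just to illustrate (the datastore path and rclone remote are made-up examples - and as said, this trades away most deduplication between syncs):

Code:
DATASTORE=/mnt/datastore/store1     # example path
REMOTE=remote:pbs-offsite           # example rclone remote

for dir in "$DATASTORE"/.chunks/*/; do
    name=$(basename "$dir")
    # stream each prefix directory as one tar object instead of thousands of chunk files
    tar -cf - -C "$DATASTORE/.chunks" "$name" | rclone rcat "$REMOTE/chunks-$name.tar"
done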
 
I'd be curious what the performance would be of using restic instead of rclone to move the PBS repository to cloud storage.
restic would not find anything to deduplicate, but IMHO the "chunk" size is configurable in restic, meaning fewer but bigger "files" uploaded to the cloud storage...
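
Something along these lines might be worth a try (the repository location is just a placeholder; if I remember correctly, --pack-size needs restic 0.14 or newer):

Code:
# store a restic repository through rclone and ask for larger pack files (~64 MiB),
# so fewer but bigger objects end up at the cloud storage provider
restic -r rclone:remote:pbs-restic init
restic -r rclone:remote:pbs-restic backup /mnt/datastore/store1 --pack-size 64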
 
if you want to move the .chunks folder off-site (which I assume is what you want to do with rclone?), you can do so as-is (which you don't like, because you end up having lots of files == objects), or you can transform it - e.g., make each subdir of the .chunks directory a tar or zip archive and put that into the cloud as one object each, or take groups of 5 chunks as one archive file/object, or even the full .chunks dir as a single object. you'll then see that having bigger objects just skews the balance in the other direction - you will now have hardly any deduplication anymore, and need to transfer a lot of data at each sync instead of just for the initial one...

there is currently no alternative datastore format built-in. there is an issue tracking S3 integration, but work on that has not started yet.
I understand that the main focus of PBS is not on deduplicating over the WAN but in the LAN. This is ONE approach to backups, but it does not correspond to our current "philosophy".

For us, backups HAVE to be decentralized nowadays and transferred to one of the large cloud storage providers. They have enough skilled employees and planned procedures to cover the worst-case scenario.

Yes, and I don't think that small businesses can do this just as well when it counts, even if RAID and clusters are used locally.

So I find the hidden criticism of our approach (questioning whether that's really what we want?) to be, let's say, "presumptuous" and out of place. So be it: everyone has their own opinion.

To summarize: PBS is currently not suitable for creating backups that can easily be uploaded to a cloud storage provider using on-board tools.

Unfortunately.
 
sorry if my response came across as antagonistic - that was not the intent. we made a careful choice of chunk size to balance deduplication, index sizes, metadata access patterns and the number of chunks. most of that carries over to object storages on the WAN as well (e.g., AWS S3 starts throttling at 3.5k write requests per second per prefix, so with the same prefix scheme we use for the local datastores that would be 2^16 x 3500 x 4M = 875TB/s of logical writes! even if all the chunks ended up in a single prefix that's still 13.5GB/s, and we could use a longer prefix to ensure that we always hit multiple prefixes in a single operation). so the problem is not the chunk size, but either how rclone translates the chunk structure into object requests, or your object/cloud storage provider having limits that are too low (or both).
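
(for reference, the arithmetic behind the 875TB/s figure, with 2^16 = 65,536 prefixes and the ~4 MiB chunk size from above: 65,536 x 3,500 writes/s x 4 MiB = 917,504,000 MiB/s ≈ 875 TiB/s of logical writes before any per-prefix throttling would kick in)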

like I said, taking a closer look at S3 (the API/protocol, not the AWS product ;)) integration (at least as a sync target, possibly also as a regular datastore backend) is on our agenda. increasing the chunk size is not the way to tackle this issue (neither is decreasing it to improve deduplication efficiency, as the overhead of tracking orders of magnitude more chunks adds up fast!).
 
I would love to see you implement simulated-tape blobs which could be streamed/sent to these Glacier/CloudArchive solutions (like OVH's https://www.ovhcloud.com/en-gb/public-cloud/cloud-archive/#how-to ). That way it might be "simpler" to know which "tapes"/blobs have to be pulled (i.e. requested/defrosted/thawed and then downloaded) than dealing with all the .chunks?

If those simulated tape blobs were e.g. dumped into a directory/storage and a webhook/script were then called to start streaming them, that script/hook could for example remove the blob afterwards and signal PBS to generate the next one (or PBS could monitor that directory for available space before creating the next one). That way the community could set up its own mechanisms for wherever we want/need to send/copy those simulated-tape blobs.
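
Just to illustrate the idea (PBS does not produce such blobs today, so everything here - the export directory, the blob handling, the rclone remote - is hypothetical; the watcher uses inotifywait from inotify-tools):

Code:
# hypothetical workflow: ship each finished "tape" blob to cold storage, then free the space
EXPORT_DIR=/mnt/pbs-export      # hypothetical directory where PBS would drop blobs
REMOTE=remote:cold-archive      # hypothetical rclone remote pointing at the archive tier

inotifywait -m -e close_write --format '%f' "$EXPORT_DIR" | while read -r blob; do
    rclone move "$EXPORT_DIR/$blob" "$REMOTE/"    # upload the blob, then delete it locally
done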
 
Curious as to whether this has worked out. Would the recommendation be to rclone the PBS store as-is, keeping the folders intact since it's already split into chunks, or to use restic on top of rclone in order to maintain security?
I've been running restic, but it seems to run on forever, even after having completed once.
 
