We are in the process of testing PBS (great product, thanks for that!).
It is important for us that we can then copy the backup data from the datastore to cloud storage via rclone.
It turned out that, due to the relatively small chunk size, the upload to cloud storage providers only reaches a fraction of the possible upload rate. The fastest is Wasabi at approx. 40 MB/s; the slowest is Google Drive, where at most 2 (or 3?) files can be uploaded per second, so the average upload rate is only about 3 MB/s.
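For reference, the transfer is done roughly like the sketch below (the datastore path and the remote name "gdrive:" are placeholders for our setup). Raising rclone's parallelism helps somewhat with the many small chunk files, but the provider's per-second request limit still caps the effective rate:

    # sketch, assuming the datastore lives at /mnt/datastore/pbs
    # and an rclone remote named "gdrive:" is already configured
    rclone sync /mnt/datastore/pbs gdrive:pbs-backup \
        --transfers 32 \
        --checkers 16 \
        --progress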
With files several GB in size, we consistently achieve upload rates between 80 and 120 MB/s in practice.
Is there a way to increase the default chunk size in PBS (e.g. to at least 10 MB), so that the subsequent rclone upload would also be many times faster?