I have a case where cross-continent synchronization latency is being slowed down by the "small" requests from the pulling PBS. If I start multiple synchronization jobs for multiple backup groups, I do get the full (expected) bandwidth, but when it's "stuck" on a single group's synchronization, it slows down: the plethora of serialized HTTPS requests over the high-latency link takes time to ramp up and never reaches full speed.
Q1: Is there a setting somewhere in the configs etc. to make a single group issue multiple requests in parallel for its blocks to pull down?
Q2a: Is there demand for something like this from others before I log a feature request?
Q2b: Is this something that would be easy to implement in the back-end, or would it be infeasible?
The problem I am experiencing is not the daily synchronizations, but catching up after a network outage or other problem: without intervention the synchronization takes very long to catch up, if it does at all, and the only reason is the serialized HTTPS requests, not the available bandwidth nor the actual backup sizes.
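To illustrate what Q1/Q2b are asking for: instead of requesting one block at a time (paying a full round trip per request over the long-haul link), the puller could keep several requests in flight at once. A minimal sketch in Python, purely illustrative — PBS itself is implemented differently, and `fetch_chunk` here is a hypothetical stand-in for one HTTPS chunk download:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_chunk(digest):
    # Hypothetical stand-in for one HTTPS GET of a chunk by its digest.
    # Over a high-latency link, each call pays a full round trip.
    return f"data-{digest}"

def pull_serial(digests):
    # Behaviour as observed today: one request at a time, so
    # total time ~= len(digests) * round_trip_time.
    return [fetch_chunk(d) for d in digests]

def pull_parallel(digests, workers=8):
    # Requested behaviour: keep `workers` requests in flight, so
    # total time ~= len(digests) * round_trip_time / workers,
    # which hides most of the per-request latency.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch_chunk, digests))
```

The point is that the result is identical either way; only the wall-clock time changes, which is why this looks like a latency problem rather than a bandwidth one.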