It'd be nice to have something to highlight which backups are changing the most and consuming space in the datastore. I've seen a few forum posts that discuss this already and I understand the limitations of processing this on-demand.
Regardless of the deduplication, it'd be nice if there was somewhere in the UI where you could list or filter backups by how much data they transferred on their last run.
I understand that dedupe reuses chunks wherever possible, but it'd still be really useful to see which backups are adding the most data on each run. At the very least this gives you somewhere to start filtering down, see what's going on, and add exclusions if applicable.
Having to fetch the logs from each of my hypervisors every time a backup runs and then parse them is quite a chore. Ignoring a "first backup", does the backup server actually know how much data has been transferred from the client? Even if the transferred data ultimately gets deduped down, being able to see historically which backup jobs are sending the most "new" data (or what the client thinks is new, at least) to the server would be really useful for actually managing backups.
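For what it's worth, here's the kind of thing I'm doing by hand today: a rough sketch of parsing per-guest "transferred" figures out of collected task logs. The log line wording and the `SAMPLE_LOG` contents are assumptions on my part (based on what vzdump-style output roughly looks like), not the actual format, so treat this as illustrative only:

```python
import re

# Hypothetical vzdump-style task log; the exact wording of the
# "transferred" lines is an assumption and may differ by version.
SAMPLE_LOG = """\
INFO: Starting Backup of VM 101 (qemu)
INFO: transferred 12.50 GiB in 300 seconds (42.7 MiB/s)
INFO: Finished Backup of VM 101 (00:05:12)
INFO: Starting Backup of VM 102 (lxc)
INFO: transferred 0.75 GiB in 45 seconds (17.1 MiB/s)
INFO: Finished Backup of VM 102 (00:00:58)
"""

# Conversion factors to GiB so mixed units compare cleanly
UNIT = {"KiB": 1 / (1024 * 1024), "MiB": 1 / 1024, "GiB": 1.0, "TiB": 1024.0}

def transfers_by_guest(log_text):
    """Return {vmid: GiB transferred} parsed from one task log."""
    totals = {}
    current = None
    for line in log_text.splitlines():
        start = re.search(r"Starting Backup of VM (\d+)", line)
        if start:
            current = start.group(1)
            continue
        xfer = re.search(r"transferred ([\d.]+) (KiB|MiB|GiB|TiB)", line)
        if xfer and current:
            gib = float(xfer.group(1)) * UNIT[xfer.group(2)]
            totals[current] = totals.get(current, 0.0) + gib
    return totals

# Sort so the guests sending the most data surface first
for vmid, gib in sorted(transfers_by_guest(SAMPLE_LOG).items(),
                        key=lambda kv: -kv[1]):
    print(f"VM {vmid}: {gib:.2f} GiB transferred")
```

Doing this across several hypervisors and weeks of runs is exactly the chore I'd love the server UI to absorb, since it presumably already has the numbers.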
Even if this data ends up being deduped, a backup job that has historically, repeatedly sent larger-than-expected amounts of data suggests a lot of changes are being made between runs. While this may be expected for a busy server, it would still be useful information for making informed decisions about backup policies/schedules and excludes for specific containers.