Hi,
PBS works by uploading only changed blocks as fixed-size chunks (for VMs) or changed fragments of data as variable-sized chunks (for LXCs and host backups). De-duplication is therefore performed by re-referencing already existing chunks of the datastore in the backup's index file.
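To illustrate the idea, here is a minimal sketch of content-addressed, fixed-size chunking, not the actual PBS implementation (the chunk size and the in-memory `dict` chunk store are simplifications for demonstration):

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # illustrative fixed chunk size (4 MiB)

def backup(data: bytes, chunk_store: dict) -> list:
    """Split data into fixed-size chunks and store each chunk under its
    SHA-256 digest. A chunk that already exists in the store is simply
    re-referenced, so the index only records digests, not duplicate data."""
    index = []
    for off in range(0, len(data), CHUNK_SIZE):
        chunk = data[off:off + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)  # upload only if new
        index.append(digest)
    return index
```

A second backup of mostly unchanged data then adds only the chunks that actually changed, while its index still references every chunk it needs.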
Where do you see this? PBS doesn't know how big your backups are on disk, so all sizes shown in the PBS web UI are the size of the raw data before deduplication/compression, not how much capacity those backup snapshots actually consume in your datastore. If you want a rough idea of how much space a backup snapshot consumes, have a look at the backup logs.
There is also the "Deduplication Factor" on the summary page for each datastore that gives you a hint as to how much space is saved by dedup.
Note: You have to run a garbage collection on the store for this to update.
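Conceptually, that factor is just the total raw (logical) size of all snapshots divided by the space the unique chunks occupy on disk. A hedged one-liner (the function name and inputs are my own, not a PBS API):

```python
def dedup_factor(raw_snapshot_sizes, chunk_store_bytes):
    """Rough deduplication factor: total raw size of all snapshots
    divided by the disk space the unique chunks actually use."""
    return sum(raw_snapshot_sizes) / chunk_store_bytes
```

For example, three snapshots of 100 GiB raw data that share chunks occupying 120 GiB on disk give a factor of 2.5.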