Hi!
I have some specific questions.
I am testing out PBS for a specific workflow: backing up a Debian-hosted NFS server holding nearly 1 PB in total.
I want to get an idea of how much I will need to invest for that.
What I see so far is that I only achieve a deduplication factor of roughly 1.2–1.3 for around 1 TB of data (full backups only, no incremental).
1) Will my deduplication factor increase as the volume of backed-up data grows? I am not talking about incremental backups here, but about whether a chunk has a better probability of already existing somewhere else in the datastore as the data set gets larger (see the rough estimation sketch below).
2) Is there some kind of mathematical rule for the dedup factor as a function of the volume of data backed up, assuming no CPU or RAM constraints? (Again for distinct data sets, no incremental.)
3) Does the underlying filesystem have an impact on my deduplication factor? I would say no, since PBS does not use the ZFS deduplication feature, but you never know.
4) What deduplication factor can I reasonably hope for with 1 PB of files of similar format, mostly .EXR?
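For context, here is the rough back-of-the-envelope check I ran on a sample of the data to estimate a chunk-level dedup factor (total bytes read divided by bytes of unique chunks). It is only a sketch: it assumes fixed 4 MiB chunks hashed with SHA-256, whereas PBS uses content-defined (dynamic) chunking for file-level backups, so its numbers will not match what PBS reports exactly.

```python
import hashlib
import os
import sys

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB, roughly the PBS chunk size


def walk_files(root):
    """Yield every regular file under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                yield path


def estimate_dedup(root):
    """Estimate a chunk-level dedup factor: total bytes / unique chunk bytes."""
    seen = set()   # SHA-256 digests of chunks already encountered
    total = 0      # bytes read
    unique = 0     # bytes that would actually need to be stored
    for path in walk_files(root):
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                total += len(chunk)
                digest = hashlib.sha256(chunk).digest()
                if digest not in seen:
                    seen.add(digest)
                    unique += len(chunk)
    factor = total / unique if unique else 1.0
    return total, unique, factor


if __name__ == "__main__":
    total, unique, factor = estimate_dedup(sys.argv[1])
    print(f"read {total} bytes, {unique} unique -> dedup factor ~{factor:.2f}")
```

Point it at a sample directory (script name and path are just examples), e.g. `python3 dedup_estimate.py /mnt/nfs/sample`, and it prints the estimated factor for that sample.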