Hi everyone,
I'd like to know how you handle backup optimization and deduplication. Is there some golden rule for that?
Currently, I optimize my VMs once a month by cleaning temporary files off the hard disk and zeroing the free space in the filesystem to keep the disk footprint minimal. Some Linux VMs also use the virtio-scsi adapter so that fstrim/discard works, but since I run clustered LVM there is no real space-saving benefit from this yet. I plan to use GFS as the cluster filesystem, but I haven't had time to try it in my test cluster environment.
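For the zeroing step, this is a minimal Python sketch of what I mean: fill the free space with a zero-filled file and then delete it, so the unused blocks compress or deduplicate away in the backup image. The mountpoint and chunk size are just placeholders; inside the guest you would normally do the same thing with dd or rely on fstrim instead.

```python
#!/usr/bin/env python3
"""Sketch: zero out the free space of a guest filesystem so the
backed-up disk image shrinks better. Hypothetical helper, not a
polished tool."""

import os

def zero_free_space(mountpoint: str, chunk_mb: int = 64) -> None:
    zero_path = os.path.join(mountpoint, "zero.fill")
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    try:
        with open(zero_path, "wb") as f:
            while True:
                f.write(chunk)            # keep writing zeros until the disk is full
                f.flush()
                os.fsync(f.fileno())
    except OSError:
        pass                              # ENOSPC: all former free space is now zeroed
    finally:
        if os.path.exists(zero_path):
            os.remove(zero_path)          # remove the fill file to release the space again

if __name__ == "__main__":
    zero_free_space("/")                  # adjust the mountpoint to the filesystem you clean
```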
I plan to back up my machines without compression and write the backups over the network to a volume with built-in deduplication (ZFS or OpenDedup). Any suggestions on software (e.g. FreeBSD's ZFS versus ZFS on Linux)? I know that deduplication needs at least 4 GB of RAM per 1 TB of backup storage at a 4k block size, depending on the software used.
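For reference, here is the back-of-the-envelope arithmetic behind that figure. The per-block metadata overhead is the software-dependent part (the 16 bytes per entry below is simply what makes 4 GB per 1 TB at 4k blocks come out, not a published number for any particular product; ZFS's in-core dedup table entries are usually quoted considerably larger):

```python
#!/usr/bin/env python3
"""Rough RAM estimate for an inline-dedup backup target:
(number of blocks to track) x (metadata bytes per block)."""

def dedup_ram_gib(storage_tib: float, block_kib: float, bytes_per_entry: int) -> float:
    blocks = storage_tib * 2**40 / (block_kib * 2**10)   # blocks the dedup table has to track
    return blocks * bytes_per_entry / 2**30               # table size in GiB

if __name__ == "__main__":
    # 1 TiB of backups, 4 KiB blocks, ~16 bytes of metadata per block (assumed)
    print(f"{dedup_ram_gib(1, 4, 16):.1f} GiB")            # ~4 GiB, matching the rule of thumb
```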
Best,
LnxBil