[SOLVED] Sync to two PBS, different space used

Klug

Hello all.

We have a PVE cluster with about 35 TB used (Ceph), going well.

This is backed up to a datastore on a PBS (12 HD in RAID-Z2 + special devices) that shows 35.64 TB used (65 groups, 3906 snapshots, dedup factor 34.69).
Smoothly too.

This datastore was synced to another PBS (12 HD in RAID-Z3 + special devices) that needs to be decommissioned (2 of the HDs just died).
Prune was set to "keep last 15" daily, GC daily.
On this one it showed 13.74 TB used, 65 groups, 975 snapshots, dedup factor 20.01.

I've set up a new PBS (12 HD in RAID-Z2 + special devices) and started a sync of the initial PBS datastore (3 days ago, still ongoing).
Prune is set to "keep last 2" (for the moment) daily, GC daily.
I'm currently seeing 22.77 TB used, 37 groups, 93 snapshots, dedup factor 1.19 (only).
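For reference, here is a rough sketch of what those numbers imply, assuming the dedup factor shown by PBS is roughly "total indexed snapshot size / space actually used on disk" (that definition is my reading of it):

```python
# Rough numbers from the three datastores; the dedup factor is assumed to mean
# "total indexed snapshot size / space actually used on disk".
datastores = {
    "source PBS (RAID-Z2)":   {"used_tb": 35.64, "dedup": 34.69, "snapshots": 3906},
    "old sync PBS (RAID-Z3)": {"used_tb": 13.74, "dedup": 20.01, "snapshots": 975},
    "new sync PBS (RAID-Z2)": {"used_tb": 22.77, "dedup": 1.19,  "snapshots": 93},
}

for name, d in datastores.items():
    logical_tb = d["used_tb"] * d["dedup"]          # implied pre-dedup data
    print(f"{name}: ~{logical_tb:.0f} TB logical, "
          f"~{logical_tb / d['snapshots']:.2f} TB per snapshot")
```

The implied per-snapshot figure comes out around 0.3 TB in all three cases, so the raw numbers themselves look consistent with each other.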

redundant_metadata is "all" on both destination PBS (the old one and the new one).
recordsize is 128K on both too.
These are the default parameters when the ZFS pool is created through the PBS web interface.
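To double-check those properties on each box, something like this works (a minimal sketch; "tank/pbs-datastore" is just a placeholder for the actual dataset backing the datastore):

```python
# Minimal sketch: read the ZFS properties mentioned above for a given dataset.
# "tank/pbs-datastore" is a placeholder; replace it with the real dataset name.
import subprocess

def zfs_get(dataset: str, props: list[str]) -> dict[str, str]:
    """Return {property: value} using `zfs get` machine-readable output."""
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "property,value", ",".join(props), dataset],
        capture_output=True, text=True, check=True,
    ).stdout
    return dict(line.split("\t") for line in out.strip().splitlines())

print(zfs_get("tank/pbs-datastore", ["recordsize", "redundant_metadata"]))
```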

I'm definitely missing something 8-)
Why is there such a storage size (and/or dedup) difference between the "old" sync and the "new" one?
 
Prune is set to "keep last 2" (for the moment) daily, GC daily.
Pruning or syncing? There is a "Transfer last" option in the Advanced settings of a sync job, to send only the last n backups of each VM/CT. Otherwise the sync will transfer all backups first, and only afterwards will prune and GC take effect.
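To make the effect on the dedup factor concrete, here is a toy model (not how PBS computes anything internally; the chunk count and change rate are invented numbers) of how the factor grows with the number of snapshots kept per group when consecutive backups re-reference most of the previous backup's chunks:

```python
# Toy model (not PBS internals; chunk count and change rate are invented numbers)
# of how the dedup factor grows with the number of snapshots kept per group,
# when each new backup rewrites only a small fraction of its chunks.

def dedup_factor(snapshots: int, chunks: int = 10_000, change_rate: float = 0.02) -> float:
    """Chunks referenced by all snapshots divided by unique chunks actually stored."""
    referenced = snapshots * chunks
    unique = chunks + (snapshots - 1) * int(chunks * change_rate)
    return referenced / unique

for kept in (2, 15, 60):
    print(f"keep last {kept:>2} per group -> dedup factor ~ {dedup_factor(kept):.1f}")
```

Fewer snapshots per group means fewer chunks get re-referenced, so a datastore that only keeps (or has only received so far) a couple of backups per guest will always show a much lower factor than the full source.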
 
Pruning...
Good catch: I had not set "transfer last"; just changed it.
Sync stopped and restarted (it was queued).
 
It finished during the night.
Prune/GC ran too, and space used went from 25 to 13 TB (dedup factor: 4.43).
 