Hello.
It seems the prune & GC job does not account for data generated by concurrent backups.
In my case GC takes 17 days (on 24 x 16TB SAS disks configured as a single pool of 3 RAIDZ3 vdevs - it's simply economically unfeasible to have the same amount of storage on SSD just for backups!), and every backup run during that interval ends up corrupted (IIUC, GC removes chunks that those backups still see as available on the server).
Even worse: backups run AFTER the GC completes are often corrupted too, unless I first delete the last corrupted backup - presumably because the client reuses the previous snapshot's chunk list and skips re-uploading chunks that GC has already deleted, so the new snapshot inherits the holes.
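To make the suspected race concrete, here is a toy model of an atime-based mark-and-sweep (purely illustrative - the names, the grace window, and the timeline are mine, not actual PBS code):

```python
DAY = 24 * 3600
GRACE = DAY + 5 * 60          # assumed ~24h05m safety window

def gc(atime, referenced_at_start, start):
    """Toy mark-and-sweep: refresh the atime of every chunk referenced
    by the indexes that exist when GC *starts*, then delete anything
    whose atime predates start - GRACE."""
    for digest in referenced_at_start:
        atime[digest] = start                           # phase 1: mark
    cutoff = start - GRACE
    return {d for d, t in atime.items() if t < cutoff}  # phase 2: sweep list

# A chunk that is still on disk but whose only snapshot was pruned just
# before GC started, so no index references it at mark time:
atime = {"abc123": 0 * DAY}

# GC starts on day 30; the sweep then crawls along for 17 days.
doomed = gc(atime, referenced_at_start=set(), start=30 * DAY)

# Meanwhile (say, day 35) a new backup asks "do you have abc123?", the
# server says yes, the client skips the upload - but the sweep still
# deletes it, so every snapshot written during that window has holes.
print("swept:", doomed)       # -> {'abc123'}
```

Any fixed grace period tuned for an hours-long GC is obviously useless when the sweep itself takes 17 days.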
Please consider adding an option to skip chunking entirely and leave deduplication to ZFS at the block level (iff the user keeps it enabled... I really don't like the idea of deduplicating the very data that exists to provide redundancy!). Call it "pseudo-tape mode"?
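For anyone wondering whether ZFS block-level dedup would pull its weight on un-chunked backups, here is a quick sketch to estimate it on sample data (assumes the default recordsize of 128K; sha256 stands in for whatever checksum the pool uses):

```python
import hashlib
import sys

RECORD = 128 * 1024   # assume ZFS default recordsize=128K; dedup works per record

def unique_record_ratio(paths):
    """Hash every recordsize-aligned block across the given files and
    count how many are unique - a rough upper bound on what ZFS
    block-level dedup could reclaim on this data."""
    seen, total = set(), 0
    for path in paths:
        with open(path, "rb") as f:
            while block := f.read(RECORD):
                seen.add(hashlib.sha256(block).digest())
                total += 1
    return len(seen), total

if __name__ == "__main__":
    uniq, total = unique_record_ratio(sys.argv[1:])
    if total:
        print(f"{uniq}/{total} records unique ({uniq / total:.1%})")
```

Fixed, aligned records dedupe worse than content-defined chunks when data shifts around inside files, but for VM images that stay block-aligned between snapshots the results should be close.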