Verify jobs on zfs backed datastore

Sep 17, 2024
Just crossed my mind: how useful are verify jobs on PBS when using zfs datastore?

I mean, ZFS itself checks the integrity of data on every operation, and an automatic scrub is either enabled by default or very easy to set up, so the verify job concept seems rather unnecessary to me. I could be missing some information here, though, so please correct me if I'm wrong.
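For reference, the "easy to set up" part can be as simple as the following from the shell; the pool name rpool and the cron file path are placeholders, and on Debian-based installs zfsutils-linux usually ships a similar monthly cron job already:

```
# Run a scrub by hand and check the result (pool name "rpool" is a placeholder)
zpool scrub rpool
zpool status rpool

# Schedule a monthly scrub yourself if your distribution does not already do so
echo '0 3 1 * * root /usr/sbin/zpool scrub rpool' > /etc/cron.d/zfs-scrub-rpool
```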
 
Running "verify" is not mandatory. It is an option to make sure the integrity of the backups is not damaged. Data is actually read and found to be intact by the PBS software on the higher level.

Yes, ZFS will not deliver bad data. But if the underlying storage is damaged, it delivers... no data at all, only an error. Maybe a "scrub" would provide the same quality of test at the lower level.

If you run PBS on ZFS with redundancy, it is probably fine to run verify just once in a blue moon. I do that only once or twice a year for my secondary/tertiary PBS.
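A one-off verification can also be started from the CLI rather than as a scheduled job; a minimal sketch, assuming the datastore is named store1:

```
# Verify all backups on the datastore "store1" (name is a placeholder)
proxmox-backup-manager verify store1

# Check how the resulting task went
proxmox-backup-manager task list
```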
 
While redundancy on your ZFS pool will protect you from data corruption at the block level, it will not detect chunks that went missing because of, e.g., external interaction with the datastore, nor will it expose unrecoverable data errors from ZFS to PBS (so that those chunks can be marked as bad).
But in general yes, ZFS will reduce the need for such jobs.
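To illustrate the kind of damage a scrub cannot catch: if a chunk file is deleted or truncated outside of PBS, the pool still looks perfectly healthy to ZFS. A rough sketch of spotting truncated chunks by hand, assuming a datastore path of /mnt/datastore/store1; a verify job does this properly and additionally re-checks the chunk digests:

```
# Chunks live under <datastore>/.chunks/; a zero-length file there is damage
# that "zpool status" will never report (the path is a placeholder)
find /mnt/datastore/store1/.chunks -type f -size 0
```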
 