I'm not an expert on PBS disk performance profiling, and I have not seen any official recommendations on what FIO tests to run to model real-world PBS performance, so I can't really comment on that.
I do have some experience with bulk storage and this seems slightly absurd:
"Drive layout is raid-5 managed by a hardware controller. It has 23 drives in the configuration."

Considering how PBS divides data up into chunks as part of deduplication, this seems like a problematic configuration if you want performance: a chunk store means a lot of random I/O across huge numbers of smallish files, and a single 23-drive hardware RAID-5 array gives you roughly the random IOPS of one drive, plus a parity penalty on writes.
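If you do want a rough fio approximation of that chunk-store pattern, here is one improvised starting point. This is my own guess at a representative workload, not an official PBS benchmark, and the directory path and sizing are placeholders: mixed random reads and writes in multi-megabyte blocks, spread across many ~4 MiB files (PBS chunks top out around 4 MiB).

```bash
# Improvised chunk-store-like workload: many ~4 MiB files, mixed random I/O.
# Path and sizes are placeholders; this is NOT an official PBS benchmark.
fio --name=chunkstore-sim \
    --directory=/mnt/datastore/fio-test \
    --rw=randrw --rwmixread=75 \
    --bs=4M --size=8G --nrfiles=2048 \
    --ioengine=libaio --direct=1 \
    --numjobs=4 --iodepth=8 \
    --group_reporting
```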
If you want to test the maximum possible single-VM operation speeds with your current hardware (CPU/memory/network), I would suggest using a pair of fast enterprise NVMe drives in a mirrored configuration, either using mdadm or ZFS, then testing backup/restore using that as your datastore.
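A minimal sketch of both variants, assuming the two NVMe drives show up as /dev/nvme0n1 and /dev/nvme1n1 (placeholder device names; the mount point is illustrative too):

```bash
# ZFS variant: a simple two-way NVMe mirror.
zpool create -o ashift=12 fastpool mirror /dev/nvme0n1 /dev/nvme1n1
zfs create fastpool/pbs-test

# mdadm variant: RAID1 with a regular filesystem on top.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/pbs-test
```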
For bulk storage at that scale, I would also look at running one or more ZFS pools built from multiple RAIDZ2 vdevs of hard drives, and (this is the important bit) adding metadata AND SLOG devices. In a configuration like that the metadata devices must be mirrored, because losing the special vdev means losing the whole pool, and they must be high-quality, high-endurance enterprise SSDs. The SLOG has less stringent requirements.
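To make that layout concrete, here is a minimal zpool sketch assuming twelve hard drives plus three SSDs; the device names and vdev widths are purely illustrative, not a sizing recommendation:

```bash
# Two 6-disk RAIDZ2 vdevs of hard drives, a mirrored special (metadata)
# vdev on enterprise SSDs, and a separate SLOG device. Names are placeholders.
zpool create tank \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl \
  special mirror /dev/nvme0n1 /dev/nvme1n1 \
  log /dev/nvme2n1

# Optionally steer small blocks onto the special vdev as well:
zfs set special_small_blocks=64K tank
```

With special_small_blocks set, records at or below that size land on the SSDs instead of the RAIDZ2 vdevs, which is exactly where small random I/O hurts the most.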