Hi everyone.
I’m building a dedicated Proxmox Backup Server and would appreciate your feedback on the best ZFS layout for my hardware. My primary goals are high random-I/O performance (for garbage collection and small writes), robust data integrity, and reasonable capacity.
Hardware
- 22 × 16 TB HDDs (2 reserved as hot spares)
- 2 × 3.84 TB mixed-use (MU) NVMe (to be used as a mirrored special-metadata vdev)
- 2 × 480 GB read-intensive (RI) NVMe (mirrored OS boot; remaining partitions for a mirrored SLOG)
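Purely for illustration, this is roughly how I'm thinking of carving up each 480 GB boot NVMe (the device name and partition sizes below are assumptions, not a settled layout):

```
# Illustrative only: /dev/nvme0n1 stands in for one of the 480 GB RI NVMes,
# and the sizes are guesses rather than a final layout.
sgdisk -n1:0:+1G   -t1:EF00 /dev/nvme0n1   # EFI system partition
sgdisk -n2:0:+400G -t2:BF01 /dev/nvme0n1   # OS partition (mirrored boot pool)
sgdisk -n3:0:0     -t3:BF01 /dev/nvme0n1   # remaining space for the SLOG mirror
```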
Option A: Traditional RAIDZ2
- Data pool: 2 × 10-drive RAIDZ2 vdevs
- Hot spares: 2 HDDs as global spares
- Performance vdevs:
  - Special metadata: mirror of the two 3.84 TB NVMes
  - SLOG: mirror of the leftover 480 GB NVMe partitions
Cons: Slow resilver on 16 TB drives
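For concreteness, here's a rough sketch of how I'd create Option A. The device names are placeholders rather than my real by-id paths, and -part3 assumes the SLOG sits on the third partition of each boot NVMe:

```
# Sketch only: placeholder device names, ashift=12 assumed for 4K-sector drives.
HDD=(/dev/disk/by-id/ata-HDD{01..22})        # the 22 × 16 TB HDDs

zpool create -o ashift=12 backup \
  raidz2 "${HDD[@]:0:10}" \
  raidz2 "${HDD[@]:10:10}" \
  spare  "${HDD[@]:20:2}" \
  special mirror /dev/disk/by-id/nvme-MU-1 /dev/disk/by-id/nvme-MU-2 \
  log     mirror /dev/disk/by-id/nvme-RI-1-part3 /dev/disk/by-id/nvme-RI-2-part3
```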
Option B: dRAID2
- Data pool: single draid2:10d:22c:2s vdev (10 data + 2 parity per stripe, 2 distributed spares)
- Performance vdevs: same NVMe mirrors as in Option A
Cons: Fixed stripe width may hurt small-file space efficiency; only 2 simultaneous failures tolerated across the whole pool (vs. 2 per vdev in Option A)
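The equivalent sketch for Option B, reusing the same placeholder device names and the draid2:10d:22c:2s spec from above (the distributed spares replace the standalone spare vdev):

```
# Sketch only: one dRAID2 vdev across all 22 HDDs; the spares are distributed.
HDD=(/dev/disk/by-id/ata-HDD{01..22})

zpool create -o ashift=12 backup \
  draid2:10d:22c:2s "${HDD[@]}" \
  special mirror /dev/disk/by-id/nvme-MU-1 /dev/disk/by-id/nvme-MU-2 \
  log     mirror /dev/disk/by-id/nvme-RI-1-part3 /dev/disk/by-id/nvme-RI-2-part3
```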
My Questions:
- Random I/O & Throughput: Which option gives better IOPS/throughput for PBS workloads?
- Resilver Time & Risk: Real-world resilver times on 16 TB drives—does dRAID’s speed justify its lower failure tolerance?
- Capacity Efficiency: Post-parity usable space difference between configurations? (My rough math is below.)
- NVMe Metadata/SLOG: Does using a special vdev and SLOG make the HDD layout choice less critical?
- Complexity vs. Expansion: For a fixed, large pool, is dRAID worth the added complexity over RAIDZ2?
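For the capacity question, my own back-of-the-envelope numbers (raw TB, ignoring ZFS overhead, slop, and TB vs. TiB): Option A gives 2 vdevs × 8 data drives × 16 TB = 256 TB, while Option B, if I have the dRAID math right, gives (22 − 2 spares) × 10/12 × 16 TB ≈ 267 TB. Happy to be corrected on either.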
Thanks in advance.