Min/Maxing EBS Volumes on an AWS backup server

Dec 8, 2021
In a previous post, I discussed how an ST1 volume backs our remote backups on EC2. We have rapidly reached the maximum size of an ST1 EBS volume, but ZFS has entered to save the day: v2.3 adds the ability to quickly and easily add disks to existing RAIDZ vdevs.
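For anyone curious what that expansion looks like in practice, here's a minimal sketch of the workflow: create a new ST1 volume, attach it to the instance, then attach it to the existing RAIDZ vdev. The pool name, vdev name, instance ID, device paths, and the 8 TiB size are all placeholders, not our actual config.

```python
# Sketch: grow the backup pool with another ST1 volume (OpenZFS 2.3+ RAIDZ expansion).
# All names, sizes, and device paths below are placeholders.
import subprocess
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# 1. Create an 8 TiB ST1 volume in the instance's AZ (Size is in GiB).
vol = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=8192,
    VolumeType="st1",
)
vol_id = vol["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])

# 2. Attach it to the backup instance.
ec2.attach_volume(
    VolumeId=vol_id,
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
ec2.get_waiter("volume_in_use").wait(VolumeIds=[vol_id])

# 3. Add the new disk to the existing RAIDZ vdev
#    (zpool attach <pool> <raidz vdev> <new device>, new in OpenZFS 2.3).
#    On Nitro instances the attached volume shows up as an NVMe device,
#    so the path won't literally be /dev/sdf.
subprocess.run(
    ["zpool", "attach", "backup", "raidz1-0", "/dev/nvme1n1"],
    check=True,
)
```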

On paper, an array of four 8 TB ST1 volumes should vastly outperform our existing 16 TB ST1 volume. But that raises the question: what about seven SC1 volumes? Has anyone tried this configuration? What are your experiences? It seems like populating the list of backups depends on storage speed. Has that been an issue for anyone?

It is likely we will move to S3 when that leaves preview. If you would like to share your experience with that, feel free, but I'm mostly looking for input from you penny pinchers. Is the performance hit worth $1,000 a month?
 
For some extra data, I had AI throw this together for me:

Clearly this is the best-case scenario (sequential reads and writes) and does not take into account overhead from ZFS.

Note: maximum EBS throughput for the instance is 1,250 MB/s (≈1,192 MiB/s)
| Option | Per-vol base / burst (MiB/s) | Array baseline (MiB/s) | Array burst (MiB/s) | Theoretical burst duration | Eff. baseline (MiB/s) | Eff. burst (MiB/s) | Effective burst duration |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 4 × 8 TiB ST1 | 320 / 500 (each) | 1,280 | 2,000 | 9.29 h (≈9:17) | 1,192.09 (instance cap) | 1,192.09 (instance cap) | 0:00 (no headroom; instance is bottleneck) |
| 7 × 4 TiB ST1 | 160 / 500 (each) | 1,120 | 3,500 | 3.43 h (≈3:26) | 1,120 | 1,192.09 | 113.1 h (≈113:06) |
| 4 × 8 TiB SC1 | 96 / 250 (each) | 384 | 1,000 | 15.13 h (≈15:08) | 384 | 1,000 | 15.13 h (≈15:08) |
| 7 × 4 TiB SC1 | 48 / 250 (each) | 336 | 1,750 | 23.79 h (≈23:47) | 336 | 1,192.09 | 33.67 h (≈33:40) |
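If anyone wants to sanity-check the baseline/burst columns, here's a quick sketch of the math using the published EBS per-TiB rates (ST1: 40 MiB/s per TiB, SC1: 12 MiB/s per TiB), the per-volume burst caps shown in the table, and the instance's 1,250 MB/s (≈1,192 MiB/s) EBS cap. It doesn't attempt the burst-duration columns, since those depend on whatever credit-bucket assumptions the AI used.

```python
# Sanity check of the array baseline/burst numbers in the table above.
# Burst durations are not modeled here.

INSTANCE_CAP_MIB_S = 1250 * 10**6 / 2**20  # 1,250 MB/s EBS limit ≈ 1,192.09 MiB/s

# (baseline MiB/s per TiB, per-volume burst cap in MiB/s)
# The burst cap doubles as a baseline ceiling; that simplification is fine
# at the 4-8 TiB sizes considered here.
VOLUME_TYPES = {
    "st1": (40, 500),
    "sc1": (12, 250),
}

def array_throughput(vol_type: str, size_tib: int, count: int) -> dict:
    per_tib, burst_cap = VOLUME_TYPES[vol_type]
    per_vol_base = min(per_tib * size_tib, burst_cap)
    array_base = per_vol_base * count
    array_burst = burst_cap * count
    return {
        "per_vol_base": per_vol_base,
        "array_base": array_base,
        "array_burst": array_burst,
        "eff_base": round(min(array_base, INSTANCE_CAP_MIB_S), 2),
        "eff_burst": round(min(array_burst, INSTANCE_CAP_MIB_S), 2),
    }

options = {
    "4 x 8 TiB ST1": ("st1", 8, 4),
    "7 x 4 TiB ST1": ("st1", 4, 7),
    "4 x 8 TiB SC1": ("sc1", 8, 4),
    "7 x 4 TiB SC1": ("sc1", 4, 7),
}
for label, (vt, size, n) in options.items():
    print(label, array_throughput(vt, size, n))
```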
 