hi there,
I've read quite a few posts arguing that PBS should be flash-based because of the small chunk size and its effect on throughput. I'd agree, but storing backups on flash is still quite an expensive proposition at the moment, so purely as a cost exercise we currently deploy our backup server with spinning disks.
We have added a mirrored special vdev for all the metadata (of which there seems to be a lot with small chunk sizes).
With that in mind, and understanding the limitations, we are not expecting multi-gigabyte throughput from spinning disks, which is fine. However, we are currently seeing only 20-30 MB/s when backing up to tape, which seems low given our zpool layout:
  pool: pBackups02
 state: ONLINE
  scan: resilvered 2.92M in 00:00:01 with 0 errors on Wed Dec 13 14:56:17 2023
remove: Removal of vdev 5 copied 52.3G in 0h5m, completed on Sat Dec 23 10:28:28 2023
        4.80M memory used for removed device mappings
config:

        NAME                                            STATE     READ WRITE CKSUM
        pBackups02                                      ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            scsi-35000c5005910e1ff                      ONLINE       0     0     0
            scsi-35000c5005910e1f7                      ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            scsi-35000c50063840397                      ONLINE       0     0     0
            scsi-35000c5006376ce37                      ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7FA2S9R    ONLINE       0     0     0
            ata-ST4000DM004-2CV104_WFN3QGLT             ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            ata-ST4000DM004-2CV104_WFN3QGKJ             ONLINE       0     0     0
            ata-ST4000DM004-2CV104_WFN3QGC2             ONLINE       0     0     0
          mirror-7                                      ONLINE       0     0     0
            ata-WDC_WD5003ABYX-01WERA0_WD-WMAYP1064952  ONLINE       0     0     0
            ata-WDC_WD5003ABYX-01WERA0_WD-WMAYP1019006  ONLINE       0     0     0
          mirror-8                                      ONLINE       0     0     0
            ata-WDC_WD5003ABYX-01WERA0_WD-WMAYP1067865  ONLINE       0     0     0
            ata-WDC_WD5003ABYX-01WERA0_WD-WMAYP0432075  ONLINE       0     0     0
        special
          mirror-6                                      ONLINE       0     0     0
            ata-M4-CT128M4SSD2_000000001245091BD8CA     ONLINE       0     0     0
            ata-TS120GSSD220S_F790422999                ONLINE       0     0     0
We seem to observe a few interesting things:
When doing Proxmox backups locally we can achieve about 60-90 MB/s of throughput. Not high, but good enough.
When we do host-based backups to tape we only see about 20 MB/s. Part of me suspects this could be down to the dynamic chunk size?
Part of me also thinks that a RAID10-style pool with 6 mirror vdevs really should be able to achieve more than 20 MB/s of throughput, or what am I not seeing here?
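For what it's worth, here is the back-of-envelope model I've been using to reason about it: a tape backup reads chunks by digest, so from the datastore's point of view that is close to random reads, and each chunk costs roughly one seek plus the transfer time. The numbers below (12 ms average seek, 150 MB/s sequential rate, the chunk sizes) are illustrative assumptions I picked, not measurements from our pool:

```python
# Back-of-envelope: sustained read throughput when every chunk costs one seek.
# Assumed figures (not measured): ~12 ms average seek for a 7200rpm-class disk,
# ~150 MB/s sequential transfer rate. Chunk sizes are in MiB.

def random_read_throughput_mbs(chunk_mib, seek_s=0.012, seq_mbs=150.0):
    """MB/s a single spindle sustains if each chunk read incurs one full seek."""
    chunk_mb = chunk_mib * 1.048576           # MiB -> MB
    per_chunk_s = seek_s + chunk_mb / seq_mbs  # seek + transfer per chunk
    return chunk_mb / per_chunk_s

for chunk in (0.25, 1.0, 4.0):
    print(f"{chunk:4} MiB chunks -> "
          f"{random_read_throughput_mbs(chunk):6.1f} MB/s per spindle")
```

With those assumed numbers, sub-MiB chunks land right around the 20 MB/s we observe, while 4 MiB chunks would be closer to 100 MB/s. Notably, the vdev count barely matters if the reader issues one chunk at a time: with no queue depth there is nothing to spread across the 6 mirrors. Is that a plausible explanation, or is the tape writer more parallel than I'm assuming?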