Hi,
With PBS 3.4, I'm now testing restore speed from LTO-9.
I first benchmarked write speed by running several
cat /dev/urandom > /mnt/datastore/xxx/yyy
processes against our three datastores (sdc, sdd, sde) in parallel. According to
iostat -xmt 10 sdc sdd sde
the combined write speed was about 2 GByte/s (> 7 TB/hour):
Code:
Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
sdc 0.00 0.00 0.00 0.00 0.00 0.00 592.20 669.58 15.30 2.52 41.76 1157.80 0.00 0.00 0.00 0.00 0.00 0.00 0.40 47.25 24.75 76.52
sdd 0.00 0.00 0.00 0.00 0.00 0.00 540.10 610.81 11.80 2.14 48.51 1158.06 0.00 0.00 0.00 0.00 0.00 0.00 0.40 36.75 26.22 70.61
sde 0.00 0.00 0.00 0.00 0.00 0.00 683.60 716.32 16.40 2.34 42.44 1073.01 0.00 0.00 0.00 0.00 0.00 0.00 0.40 39.00 29.03 51.50
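For reference, the write benchmark was essentially of this shape (minimal sketch; the mount points and the number of writers per datastore are placeholders, not the exact values I used):
Code:
# start a few urandom writers per datastore in parallel
# (store1/store2/store3 and the count of 4 writers are placeholders)
for ds in /mnt/datastore/store1 /mnt/datastore/store2 /mnt/datastore/store3; do
    for i in 1 2 3 4; do
        cat /dev/urandom > "$ds/urandom-bench.$i" &
    done
done

# in a second shell, watch per-device write throughput while the writers run
iostat -xmt 10 sdc sdd sde

# when done: stop the writers and remove the test files
kill $(jobs -p)
rm -f /mnt/datastore/store*/urandom-bench.*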
Both writing to and reading from our LTO-9 tape library (HPE MSL3040) run at about 300 MByte/s per drive; we have three drives.
Code:
2025-04-22T09:44:38+02:00: wrote 1440 chunks (4298.38 MB at 305.10 MB/s)
2025-04-22T09:44:52+02:00: wrote 1343 chunks (4296.80 MB at 305.20 MB/s)
2025-04-22T09:45:06+02:00: wrote 1170 chunks (4296.54 MB at 304.68 MB/s)
2025-04-22T09:45:20+02:00: wrote 1256 chunks (4298.38 MB at 304.09 MB/s)
Code:
2025-04-20T14:02:58+02:00: register 1083 chunks
2025-04-20T14:02:58+02:00: File 201: chunk archive for datastore 'datastore1'
2025-04-20T14:03:12+02:00: restored 4.299 GB (304.04 MB/s)
When I launched restores of three LTO-9 tapes in parallel - one per drive - all to a single datastore (sdc), I got a total throughput of about 1.6 TB/hour.
When I launched restores of the same 3 tapes in parallel, but this time with the first tape going to the first datastore (sdc), the second tape to the second datastore (sdd) and the third tape to the third datastore (sde), I got a total throughput of about 2.5 TB/hour, i.e. 56% higher.
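For illustration, the second test was of this shape (sketch only; the media-set UUIDs, datastore names and drive names are placeholders, and the exact CLI options should be checked against proxmox-tape restore --help):
Code:
# one restore per drive, each tape going to its own datastore
# (all names/UUIDs below are placeholders)
proxmox-tape restore --drive drive0 <media-set-uuid-1> datastore1 &
proxmox-tape restore --drive drive1 <media-set-uuid-2> datastore2 &
proxmox-tape restore --drive drive2 <media-set-uuid-3> datastore3 &
wait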
Tape backup jobs (tape-job) now have a "worker-threads 8" option, but AFAIK there is no equivalent option for tape restores; maybe adding one would be useful and gain some LTO-9 to datastore write performance?
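For comparison, this is roughly where the existing option sits for a tape backup job in /etc/proxmox-backup/tape-job.cfg (the job id and values below are made up):
Code:
backup: job1
    store datastore1
    pool daily
    drive drive0
    schedule daily
    worker-threads 8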
Thanks!
PS: for the record, sdc, sdd and sde are ext4-formatted, and the underlying storage for each is an erasure-coded (ec42) Ceph pool (56 SSDs over 7 hosts, 4x10 Gbit/s networking).