really slow restore from HDD pool

My reason for leaving out 'small files' is that I thought every PBS backup chunk was a fixed-size file - all fixed at 4 MiB, none smaller, none larger. Isn't that the case?
Yes, that's correct - as far as I understand.

And then compression is added. The result looks like this:
Code:
.chunks# ls -Al ffff/
total 49094
-rw-r--r-- 1 backup backup 2394832 Oct 20 10:50 ffff0208e856de0ee15d1e048578ec0db60a5ac0ccd3b636639bc32efad04176
-rw-r--r-- 1 backup backup 1667526 May 19 03:36 ffff0601afee6c539f451601015e3a405a4d0d36812e2f9360aee0314c851b13
-rw-r--r-- 1 backup backup 1866092 Sep 28 03:49 ffff08132bb9d0002f264d785f6ee424949c10bf09451679ea3cd2e2aa5e5dc6
-rw-r--r-- 1 backup backup  140717 Jul 27 11:11 ffff0e66180d74fba79d5095e2227c614fae066f5a31f9d4a792f7b6ed599d20
-rw-r--r-- 1 backup backup   30868 Oct 21  2022 ffff1ea2c12f074a049e0f5efce42decf3972ee7db3128176c42d6114f41ccc7
-rw-r--r-- 1 backup backup 1047460 Aug 31 12:31 ffff23d692a02da529d5adb6315d960bf5f625afb7536ed44378a4314348e9e0
-rw-r--r-- 1 backup backup  398832 Sep 22 07:44 ffff2ca487f2276e1fcbdcdf6a85fa83f097bedd7ab4c77af4ff500cc98004c3
-rw-r--r-- 1 backup backup 1732220 Jul  5 16:40 ffff39779683173cf8a704cc525537c600b7bd38f19d30e765bd5953d532091b
-rw-r--r-- 1 backup backup   85894 Mar 29  2024 ffff3b52db5ed809a4ea01c380aed0a3b5e2584786aff84da2ad0e4618bad506
-rw-r--r-- 1 backup backup  972600 Dec 17  2023 ffff3ccab94835481d04fff0a1599ad40da7ec9a6c4a16daa1a769d22b95e2ff
-rw-r--r-- 1 backup backup 4194316 Aug 24  2023 ffff415dbae4e2735abdf779f9bf8affa2be6a43ac1776ef51f16576c17b29a5
-rw-r--r-- 1 backup backup  388308 Sep 28 03:43 ffff467b1b05c2d3c8c8babfb21dfb4acf8f8c33b6f82bac506e23d71150513d
-rw-r--r-- 1 backup backup 1122287 Nov 14  2022 ffff5f05b47a84f2655b0fb748a369a2e1736b34d94a853b88f82da1cffbfd20
-rw-r--r-- 1 backup backup  743526 May 11 10:20 ffff636fd2a434c90fafdc1a64b6ba00c8fdb8df2cdb9c6086f18a63796bd978
-rw-r--r-- 1 backup backup 1015297 Sep 28 03:40 ffff6441036f16565721152a1427119cab0b1180dbee5cc53c0cd26e0b716a9f
-rw-r--r-- 1 backup backup  492373 May 19 03:46 ffff6e657033ac04ed1c2044358f12586c27329a20e506904080f6998de75f53
-rw-r--r-- 1 backup backup 1762408 Dec 31  2022 ffff6f42383717ec57929dd18186515e23f35aeb4bc5b6318eefa06ed52dc340
-rw-r--r-- 1 backup backup 2074187 Dec 19  2021 ffff70cbe6bbd01520fd561cb6a0f66df398ffe5f088cad00fbac1d18db79c4e
-rw-r--r-- 1 backup backup  284209 Jul 20 09:26 ffff776b4fbe3d8faa24721b217e3862568f3e4e20c07ee08bf62303ce251fb7
-rw-r--r-- 1 backup backup  602787 Nov  2 14:36 ffff885dc925430bffca6bb200c7a30a291519eb18209351a828c5743b3201dc
-rw-r--r-- 1 backup backup 4194316 Mar 29  2024 ffff8c9281aea5094c9323584b162b64dffc6f31aa56122dbbc0a635309d87fa
-rw-r--r-- 1 backup backup 4194316 Dec  8  2021 ffff8f124690ba74567199eb8d3f6cba0536a0721567fff5ae2780f7976696db
-rw-r--r-- 1 backup backup  565061 Feb 18  2024 ffff937531d48ed50d44bb58f56260bd19eea6f33d531441814f1687edeaddb3
-rw-r--r-- 1 backup backup  840529 Mar 29  2024 ffff98136d46b336a9b7a7cddcbdfc147ef1cbe98fc1d7022ae54e46c25fc4d9
-rw-r--r-- 1 backup backup  298051 Aug 31 11:19 ffff9b32856ee935fcb5af38e2cf49f71312debd90d62e6aa5f1a3e3976f0d5c
-rw-r--r-- 1 backup backup  829905 Oct 20 10:40 ffffa67d49effc0d385f3bd43fe0bfad96483a26ed025de7c8af18d123bd729b
-rw-r--r-- 1 backup backup  976994 Feb 24  2024 ffffa6f3d39c0ff96ca0fe797cd6565a2859991edf63248598c46811e9cfff7b
-rw-r--r-- 1 backup backup 1331774 Oct 20 11:55 ffffcef3d3e6e0db2ec26b67ab995c02787a50b0a4d39c8e02246a3730eef629
-rw-r--r-- 1 backup backup 1802265 Nov  2 15:42 ffffd6dbdb514dad04c0576fb7c61572f3b0be28f1f9aba7f6111c07808aa23a
-rw-r--r-- 1 backup backup 4194316 Sep 28 04:02 ffffdfccce79aad219647c5363ae277d8b79783dbccdadf097126ab0ef7fba1b
-rw-r--r-- 1 backup backup 4194316 Dec 21  2022 ffffe9a1ee4dc913aea117ec4e27ea9cb405a2f4b8b2508871a8f750bfa55e75
-rw-r--r-- 1 backup backup  870853 Oct 26 03:29 ffffeead733cb1e5203bedf3165db8e6236adb4ee687285ff92ea67a357effd5
-rw-r--r-- 1 backup backup  229058 Oct 12 15:15 fffff67966935e99a3b47d6d4f6fa74de612f6efea8ea7ee7f948ecc993fb7b0


At first glance I see five files with the full 4 MiB in size. Everything else needs less space...

Note that this happens at the application layer, not in ZFS. PBS is filesystem-agnostic :)
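If you want to see how chunk sizes are actually distributed across a whole datastore (not just one ffff/ directory), something like this works - a quick sketch, the datastore path is an assumption:
Code:
# sketch: chunk size distribution across the whole datastore (path is an assumption)
cd /mnt/datastore/RAIDZ/.chunks
find . -type f -printf '%s\n' | awk '
  { if ($1 < 512*1024) small++; else if ($1 < 4*1024*1024) mid++; else big++ }
  END { printf "<512K: %d   512K-4M: %d   >=4M: %d\n", small, mid, big }'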
 
Ah, yes, of course.
I have just ordered 3x 960 GB SSDs to use for the special device. I think I will put them in a RAIDZ and steal some space for the boot/OS.
Maybe I will try special_small_blocks too, perhaps everything under 512 KB - a rough command sketch follows after the listing below.
Here is what I see:
Code:
root@pbs:/mnt/datastore/RAIDZ/.chunks/aaaa# ls -lh
total 94M
-rw-r--r-- 1 backup backup  2.9M Aug  2 21:16 aaaa028d5af272447599c3fd3363406bc25147b4d9dda3947c5ea49e46340a4f
-rw-r--r-- 1 backup backup  2.5M Aug  2 21:09 aaaa06449c780baa060e6cf1579bf7971df2f0e45fecd525277c7840ddfad3b6
-rw-r--r-- 1 backup backup  4.1M Jul 30 01:49 aaaa06c387d4a6d818a21818b19d0ab14a252e2df876dab8b12dcd17684bbd09
-rw-r--r-- 1 backup backup  467K Aug  5 21:01 aaaa086fea34a5f0f7336c6974da3ab2ea356da45526fb25d2753f0457eebb6a
-rw-r--r-- 1 backup backup  1.8M Sep  9 21:04 aaaa14ffdf1ded6bec17cd325fa60cf6175c54a4857f50330a631444f73d9406
-rw-r--r-- 1 backup backup  303K Jul  5 21:12 aaaa15708de72d1185d5f7553612c1c65c047242d98dfbea8fc245a4cca59e0f
-rw-r--r-- 1 backup backup  949K Aug  2 21:02 aaaa185ae2f86c349aef8757ead02d527ec2a4a9892c0a25500553d7add6a4ba
-rw-r--r-- 1 backup backup  307K Oct  1 21:01 aaaa187d130aeae5aae2d133c4c2613e83f3638587f3044d4fdbd7b60bdc2ccc
-rw-r--r-- 1 backup backup  713K Jul  1 21:01 aaaa1ebdc6033375f8b00b976b56b91bd2ccfe1432b14cf489d52d0a00cdc998
-rw-r--r-- 1 backup backup  319K Oct 11 21:02 aaaa31b36254ccba59988d2492f2d909c5b58dd22e23bf5a1ce3788f220a8adf
-rw-r--r-- 1 backup backup  2.9M Sep  2 21:04 aaaa338bf0c695df497bade71d3d39410adca6668e594e92e0433fd44965466c
-rw-r--r-- 1 backup backup  1.5M Aug 12 21:04 aaaa3636ef09b134f74645dfbedbef56095f92f2f4271d071682cb3919bfd3c6
-rw-r--r-- 1 backup backup  307K Sep 23 21:01 aaaa3a57c92babeccfcbda28f8e0cb139ec1e93609fe441c5ba020ab54459004
-rw-r--r-- 1 backup backup  1.8M Jul  4 21:02 aaaa3afebad2a7a30b453112387b66679b13f85ebae9028b53f5eae31e4956bb
-rw-r--r-- 1 backup backup  362K Jul 16 21:01 aaaa46397c0cf1ee297d0eceeb8624f90ffcbb31d7cd189f46f43ac25f428702
-rw-r--r-- 1 backup backup  3.1M Jul 29 23:58 aaaa48077d23feac796cce0dd8a451925edf54f88979c2b1323de6bf061b9f77
-rw-r--r-- 1 backup backup  399K Jul 29 23:57 aaaa566cfd100c21b3cc860fb1b800028f7abda84f46d5a9ae13cb64955798c0
-rw-r--r-- 1 backup backup  625K Jul  5 21:10 aaaa5b9f34725f8249f9cdda9f17ee63dc654a1ed5a2c526e9c44b05e7a9c1ad
-rw-r--r-- 1 backup backup  287K Sep  4 21:01 aaaa608cfd0906569947a73384aa6949627654293148eab0e41dfce5d521cb7c
-rw-r--r-- 1 backup backup  326K Oct 10 21:02 aaaa646379eab3160d5c25b23f1a1fef553ba2d074c152219a5c83cb432ebbd8
-rw-r--r-- 1 backup backup  1.7M Sep  8 21:03 aaaa6561459f7f2f9b33231f32587bfb9e5d572fe99afec7812865cb446c82ee
-rw-r--r-- 1 backup backup  2.2M Sep 14 21:02 aaaa65a7ecc60c67af53a6c9d8791a46c918097574f7f697e76f94e0aad7745e
-rw-r--r-- 1 backup backup  1.6M Aug 13 21:03 aaaa6b0957f2819d5a740dc47bb2105c2d210d229549f368d69fae9df190cc6c
-rw-r--r-- 1 backup backup  829K Jul  7 21:02 aaaa6b640760908c63b6a0fa5bf979b316dfbaca113778786c4db1e9a886bee8
-rw-r--r-- 1 backup backup  708K Sep 13 21:05 aaaa6e50b2e7ec5ba0c9c1b05d41b9fbe7ce7faec2479b89b26daff12c7e3652
-rw-r--r-- 1 backup backup  2.6M Jul 30 00:09 aaaa71da67c7fbee1c743657e43a9d41dcd4c3df23d1b9b5d8d903699c5d7a49
-rw-r--r-- 1 backup backup  393K Jul 30 21:06 aaaa77b10125b5915f22fd18c504850708ce68c6767066019a7cfdff194818f3
-rw-r--r-- 1 backup backup  278K Oct  9 21:01 aaaa7a16a310501d53b8a93d1d5630bfd5bbd905ac31f468ab0b7d200f790e68
-rw-r--r-- 1 backup backup  282K Jul  9 21:01 aaaa8063c0ba7a8851b65a40637e5576f371dc36bcf405192dea6170651bed16
-rw-r--r-- 1 backup backup  1.5M Jul 29 23:52 aaaa889830643ae9d68a08a3462e581ed1075a82620b16c44d0a64cd49ce45c2
-rw-r--r-- 1 backup backup  1.5M Sep 30 21:06 aaaa8a70e1057ef3202d12581773548edd8b77c7bdcab1f0a77cc1003a674e70
-rw-r--r-- 1 backup backup  2.3M Aug  7 21:02 aaaa8b4d4bc2828c493ad98d05d71fe50b7060e8402faa5eb15b86c957701f20
-rw-r--r-- 1 backup backup  531K Aug 12 21:02 aaaa900f84857c2bd66deba2ae493c660c2f27bfaee44f4a1fbf93b442e7216d
-rw-r--r-- 1 backup backup  1.4M Jul 31 21:02 aaaa92965db5b76a589d046ba057694c59365222ab1b9b6e75d6202f95f5e2a5
-rw-r--r-- 1 backup backup  797K Sep 12 21:03 aaaa987cebd1aaff87605492df441aba9fd64bfbf423277ed76ef6f8f2758373
-rw-r--r-- 1 backup backup  2.7M Jul 29 23:47 aaaa9883245be8deafad243ff962dbdaaa118cf5d82ad5334e8408af945c12ec
-rw-r--r-- 1 backup backup  1.7M Aug 26 21:02 aaaa997f92cc4e6caa6da32a0690f3aa16b27927a6ad860a5c8a6bea333ce9af
-rw-r--r-- 1 backup backup  2.6M Oct  9 21:08 aaaa9a7733d82ef29cf03fd62d6d4d338d532f63f9613785e64211363cb8535f
-rw-r--r-- 1 backup backup  976K Jul 30 02:26 aaaa9b211064b638b48c610f8906f2a689a7149cb33627781655df2a8933561c
-rw-r--r-- 1 backup backup  392K Aug 21 21:04 aaaaa1e0cd58aed8232a0334eb8efade482de10db6427fd8ed6e31bb749ec1ed
-rw-r--r-- 1 backup backup  408K Sep  6 21:01 aaaaa2a1cd4ad9d4e5c164720168ed4bdcb73d5a5389204dfff379c9c8147fef
-rw-r--r-- 1 backup backup  448K Aug  2 21:03 aaaaa5e835965951c8641b5e974516e06061f142db3c844301068204263fdfba
-rw-r--r-- 1 backup backup  2.9M Sep  3 21:04 aaaab5df39e4966081b517aff2f12a75b352b7da280daccb340642aec1f3dc9c
-rw-r--r-- 1 backup backup  4.1M Jul 30 02:05 aaaabd6e4ee348b8a0531af9e1cf243ce1c36adca4d20bde1a04b76980d39cbb
-rw-r--r-- 1 backup backup  446K Oct 20 21:01 aaaabf913fdb242887fe1188d3175e5355f585325e4a23e7aed3d76b9486cd1c
-rw-r--r-- 1 backup backup  354K Jul  8 21:01 aaaabfa17f989533d64ea96b8bd17a1c2d5d4cf16b89421f20ad133bd1a0bf0f
-rw-r--r-- 1 backup backup  2.4M Oct  7 21:03 aaaabfbf70ca6951ed07cf881604c635493237ddc62d0b796b0597126ca0f864
-rw-r--r-- 1 backup backup  257K Jul 21 21:01 aaaac0a0f51ceec77a5eac7c27afd97cea2217486774384bc1bd86851c2c452c
-rw-r--r-- 1 backup backup  1.3M Oct 16 21:01 aaaac3a74a51e109df5bbe748d1fbe91a270456520c278425c2418acb1d9a329
-rw-r--r-- 1 backup backup  270K Aug 20 21:01 aaaac4aed257e60d688fe8233612a5540dede5e77c05fbae0c798bb3613e47b3
-rw-r--r-- 1 backup backup  1.9M Aug 30 21:05 aaaac50afa9f9472688f1a5b15549c460c8e1afb1583b0e7e72b898a46d6ea40
-rw-r--r-- 1 backup backup  3.5M Jul  1 21:02 aaaac56929d8ad9ab09e51c52e9983fa15807f1485bba86568e9623eeeedcd2c
-rw-r--r-- 1 backup backup  3.0M Aug  2 21:18 aaaac5b32cbfe01942d81ce923cb09a0f6032389c7ceb0f5ab186913f49a004a
-rw-r--r-- 1 backup backup  1.6M Aug  9 21:03 aaaac68ce23f1a7dd2929ccbb55cc0421e716bbf66cbcd540291a022174d304c
-rw-r--r-- 1 backup backup  288K Jul 19 21:01 aaaaca09863943a538a50b2d0b624d85b44d2607112794389176bc1cf1e29847
-rw-r--r-- 1 backup backup  3.1M Jul 30 01:58 aaaacf9b98be84de414ad19cda8e04a8210f160e0c7c4cdb500c2d5b9b02e70d
-rw-r--r-- 1 backup backup 1014K Aug 30 21:03 aaaad5953a202559c6b941863187707efacc1fff77158db35d10420116d951ff
-rw-r--r-- 1 backup backup  3.8M Sep 15 21:02 aaaadaea5f60aafeadd3952c6bc49dfabfc379891e64498760e4178a7c6ad990
-rw-r--r-- 1 backup backup  776K Oct  8 21:04 aaaadb686395045635effcd66de2dbdcc7a76bfec454069c0662db4d206b0634
-rw-r--r-- 1 backup backup  1.4M Jul 30 01:03 aaaadbd6392b6c9205fd59d3528e1f58c56299dfb02180ff59569b5fa0e9d439
-rw-r--r-- 1 backup backup  322K Oct 18 21:01 aaaae3c15ba84cd5ab917fa974195d41855aef0ced5a1192bf3f38af3455e869
-rw-r--r-- 1 backup backup  3.0M Oct  8 21:07 aaaae474ef9e31b11b607a26c759373e51330193f387de13cb9fc5c29adbf348
-rw-r--r-- 1 backup backup  334K Nov  1 21:02 aaaae6db2bc83db642ba4b84530e4dda4b5d57c3029cda6a0c9cc72002761f53
-rw-r--r-- 1 backup backup  1.9M Oct  7 21:06 aaaae78a95855b027051650958d2775589cd054e6ae8c6b36ca110606e809ea5
-rw-r--r-- 1 backup backup  269K Oct 17 21:01 aaaaed43ad034978551f1fbf23047c0796feaff78e9fcd4c5767c138cf10ac18
-rw-r--r-- 1 backup backup  294K Sep 11 21:01 aaaaf15d00b211be862a133ab2ce0611ac93f76719b1294298a710798201adec
-rw-r--r-- 1 backup backup  1.9M Jul 31 21:05 aaaaf559d9364d53cc533cf8504e6fc670806ec232cf021cfb082f0a4cf29656
-rw-r--r-- 1 backup backup  299K Aug  1 21:13 aaaaff357739854eb2dfbd87027f9fa7514db7ca05dbd1145a857c626c0aa5d3
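For reference, adding a mirrored special vdev and routing small blocks to it would look roughly like this - a sketch only, pool and device names are assumptions, and keep in mind that only newly written blocks land on the special vdev; existing chunks stay on the HDDs:
Code:
# add a mirrored special vdev to an existing pool (pool/device names are assumptions)
# -f may be needed because the data vdevs are raidz (replication level mismatch warning)
zpool add RAIDZ special mirror /dev/disk/by-id/ssd-A-part3 /dev/disk/by-id/ssd-B-part3
# blocks smaller than this go to the special vdev; metadata goes there regardless
zfs set special_small_blocks=512K RAIDZ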
 
Just did a small benchmark to simulate PBS storage restore performance:
Set up data in the test filesystem:
Code:
./elbencho -w -d -t 2 -n 128 -N 256 -s 4m -b 1m .
OPERATION RESULT TYPE         FIRST DONE  LAST DONE
========= =================== ==========  =========
MKDIRS    Elapsed time      :        2ms        2ms
          Dirs/s            :      96567      96240
          Dirs total        :        256        256
---
WRITE     Elapsed time      :  1m45.698s  1m46.216s
          Files/s           :        618        617
          IOPS              :       2472       2468
          Throughput MiB/s  :       2472       2468
          Total MiB         :     261373     262144
          Files total       :      65343      65536
---
====================================================
* 22x 1.8 TB 10k HDD, PERC H730p RAID6, xfs, 2x Xeon 2683 v4 @ 2.1 GHz:
Code:
./elbencho -r -t 2 -n 128 -N 256 -s 4m -b 1m .
OPERATION RESULT TYPE         FIRST DONE  LAST DONE
========= =================== ==========  =========
READ      Elapsed time      :    25.372s    30.048s
          Files/s           :       2380       2181
          IOPS              :       9523       8724
          Throughput MiB/s  :       9523       8724
          Total MiB         :     241641     262144
          Files total       :      60410      65536
---
echo 3 >/proc/sys/vm/drop_caches
./elbencho -r -t 2 -n 128 -N 256 -s 4m -b 1m .
OPERATION RESULT TYPE         FIRST DONE  LAST DONE
========= =================== ==========  =========
READ      Elapsed time      :  2m33.722s  2m45.220s
          Files/s           :        404        396
          IOPS              :       1619       1586
          Throughput MiB/s  :       1619       1586
          Total MiB         :     248901     262144
          Files total       :      62225      65536
---

* 24x 16 TB 7.2k HDD, raidz2 with 4 vdevs, 2x Xeon 6226R @ 2.9 GHz:
Code:
elbencho -r -t 2 -n 128 -N 256 -s 4m -b 1m .   # run right after the write, so theoretically a lot of it is in the ARC
OPERATION RESULT TYPE         FIRST DONE  LAST DONE
========= =================== ==========  =========
READ      Elapsed time      : 15m20.809s 15m22.292s
          Files/s           :         71         71
          IOPS              :        284        284
          Throughput MiB/s  :        284        284
          Total MiB         :     261933     262144
          Files total       :      65483      65536
---
A ZFS special device would not help throughput here; mirrors would help, but I don't know by how much they would catch up ...
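As a quick sanity check on that last run (plain arithmetic on the numbers above):
Code:
# 262144 MiB read in ~15m22s (922 s) ≈ 284 MiB/s, matching the reported throughput
echo $((262144 / 922))   # -> 284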
 
PS: elbencho - a distributed storage benchmark for file systems, object stores & block devices with support for GPUs. Mostly used for checking out distributed filesystems like BeeGFS, Lustre, Ceph, ... https://github.com/breuner/elbencho - the download is a static binary tar.gz that runs without any further dependencies.
Code:
./elbencho --help
elbencho - A distributed benchmark for files, objects and blocks

Version: 2.3-1

Tests include throughput, IOPS and access latency. Live statistics show how the
system behaves under load and whether it is worth waiting for the end result.

Get started by selecting what you want to test...

Large shared files or block devices (e.g. streaming or random IOPS):
$ elbencho --help-large

Multiple dirs and files per thread (e.g. lots of small files):
$ elbencho --help-multi

S3 object storage:
$ elbencho --help-s3

Multiple clients (e.g. shared file systems):
$ elbencho --help-dist

See all available options (e.g. csv file output):
$ elbencho --help-all

Happy benchmarking!
 
I see. So you are saying that there won't be an improvement when I add the SSD special devices? Hmm.
Oh well. I have 2 of the 3 SAS SSDs now. The third went missing, so I have not made the changes yet anyway.
 
That's a misunderstanding! A ZFS special device (as a mirror) would definitely help PBS a lot with backup, garbage collection and prune jobs,
but for the throughput benchmark shown, which simulates a 260 GB restore and does not exercise the file stat operations a special device accelerates, it would not change much. Sorry for the confusion.
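To see the part a special device actually speeds up, one can time a pure metadata walk over the chunk store (roughly what garbage collection does) separately from bulk reads - a sketch, the path is an assumption:
Code:
# drop the page cache, then time a metadata-only traversal of the chunk store
echo 3 > /proc/sys/vm/drop_caches
time find /mnt/datastore/RAIDZ/.chunks -type f | wc -l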
 
Yes, I am referring to restores (that's what the thread is about: really poor restore performance).
 
Use NVMe drives when using ZFS for PBS, or use a filesystem that stays fast even when the amount of data is more than 2x the ZFS ARC - as you can see, ZFS is like a floppy drive when accessing data outside the ARC.
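A quick way to compare the ARC limit against the amount of data in the pool (standard OpenZFS stats on Linux):
Code:
# current ARC size and configured maximum, in bytes
awk '$1 == "size" || $1 == "c_max" { print $1, $3 }' /proc/spl/kstat/zfs/arcstats
# pool usage for comparison
zfs list -o name,used,avail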
 
I have installed PBS on another server (Poweredge T330).

It has 3x 960 GB SAS 12G enterprise SSDs (100 GB used for the OS in a RAIDZ, the rest free)
and 4x 4 TB HDDs.

I have partitioned the HDDs in half.

I will create a zpool using half of each HDD in a 4-disk RAIDZ, with a special device built from the 3x SSDs for metadata and small blocks <512 KiB.

I will create another zpool on the other half of each disk, without a special device.

I will do a PVE backup of the SAP & SQL server to each pool and see what the difference is on restore.
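A rough sketch of that two-pool test layout (device names, pool names and partition layout are assumptions; since ZFS only accepts a mirror or a single disk as a special vdev - see below - the sketch uses a mirror):
Code:
# split each HDD in half (repeat for sdb, sdc, sdd)
parted -s /dev/sda mklabel gpt mkpart half1 0% 50% mkpart half2 50% 100%
# pool 1: 4-disk RAIDZ with a mirrored SSD special vdev, small blocks <512K on SSD
# (-f may be needed due to the raidz/mirror replication level mismatch)
zpool create -f hdd-special raidz sda1 sdb1 sdc1 sdd1 \
  special mirror /dev/disk/by-id/ssdA-part2 /dev/disk/by-id/ssdB-part2
zfs set special_small_blocks=512K hdd-special
# pool 2: plain 4-disk RAIDZ without a special vdev
zpool create hdd-plain raidz sda2 sdb2 sdc2 sdd2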
 
OK, so the special device can only be a mirror or a single disk. That's fine.

First thing: the actual backup is slow too. Most of the 1.1 TiB VM backup (of which I think only perhaps 200 G is data) is moving at 60 MiB/s. Occasionally it jumps to 500 MiB/s, but most of the time it hovers at 60.
For the backup it is the same whether I use a special device with special_small_blocks=512K or no special device.
 
PS: A single disk should be avoided, because if it fails all pool data is lost. A mirror can consist of 2 disks (or even more for safety) and, when full, can be expanded with additional mirrors (of 2 or 3 ...).
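Expanding a full special class with another mirror would look roughly like this (pool and device names are assumptions):
Code:
# add a second mirrored pair to the pool's special allocation class
zpool add POOL special mirror /dev/disk/by-id/ssdC /dev/disk/by-id/ssdD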
 
Yes, well, since I had bought 3 disks I have set up a 3-way mirror for the test.
 
If possible for you, also do some tests on the time it takes to create and delete a million files.
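A plain shell sketch for that (the test path is an assumption; elbencho's multi-file mode, see --help-multi above, also has create/delete phases for this):
Code:
# create 1,000,000 empty files spread over 1000 directories, then delete them, timing both
T=/mnt/datastore/RAIDZ/filetest
mkdir -p $T && cd $T
time for d in $(seq -w 0 999); do mkdir $d; (cd $d && seq 1 1000 | xargs touch); done
cd / && time rm -rf $T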
 
Backup of the SAP & SQL server to 4x 4 TB RAIDZ HDD with SSD special device, special_small_blocks=512K: 15 min.
Restore: 19m20s

Backup of the SAP & SQL server to 4x 4 TB RAIDZ HDD without special device: 15 min.
Restore: 26m15s

Next I will create a RAIDZ of just the SAS 12G SSDs and see how that looks. Or perhaps I will try restoring to a different set of disks on the PVE host. It has 5x 1.92 TB Intel/Solidigm in a RAID5 on an H965, or 3x 960 GB on a second H965, also in RAID5.
 
I have created a RAIDZ from the 3x SAS 12G SSDs (800 GB partitions on each) on PBS. Currently running a backup. Most of the time it hovers around 55-70 MiB/s, occasionally spiking to 500 MiB/s for a short while.
 
Thanks for all the test experience you are sharing with this community :) What amount of data was backed up / restored in the measured time?
 
Restore from the enterprise SAS 12G SSDs in RAIDZ took 18 minutes and 45 seconds. That is... about 35 seconds quicker than the HDDs + special device. However, I have no idea how much data went to the special device and how much to the HDDs.

Next test, after I have changed the pet bedding, is ext4 instead of ZFS on a single SAS SSD.
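A minimal sketch of that ext4 single-SSD datastore setup (device and datastore names are assumptions):
Code:
mkfs.ext4 /dev/sdX
mkdir -p /mnt/datastore/ext4ssd
mount /dev/sdX /mnt/datastore/ext4ssd
proxmox-backup-manager datastore create ext4ssd /mnt/datastore/ext4ssd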
 
Thanks for all the test experience you are sharing with this community :) What amount of data was backed up / restored in the measured time?
On the PBS side it has taken 176 GiB. The source virtual machine is 1.1 TiB in disk size, but not fully allocated.

All the above tests are with ZFS compression disabled, but PVE/PBS already does zstd compression, I think.
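Rough effective restore throughput from the numbers in this thread, assuming roughly the 176 GiB stored on PBS is what gets read back each time:
Code:
# 176 GiB = 180224 MiB
echo "HDD + special: $((180224 / (19*60+20))) MiB/s"   # ~155 MiB/s
echo "HDD only:      $((180224 / (26*60+15))) MiB/s"   # ~114 MiB/s
echo "SSD RAIDZ:     $((180224 / (18*60+45))) MiB/s"   # ~160 MiB/s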
 
