Benchmark ideas for a ghetto build

IsThisThingOn

Active Member
Nov 26, 2021
I want to run some benchmarks for a ghetto build and need some ideas from you guys.
I want to use some old hardware I have lying around.
The build will be installed in an offsite location with a 10 Gbit connection. My goal is to reach at least 1 Gbit.

Unfortunately I haven't found many results of this benchmark
(https://pbs.proxmox.com/docs/backup-client.html#benchmarking) in the forum.
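For reference, that is the proxmox-backup-client benchmark from the docs; it can be run standalone (local CPU tests only) or against a datastore to also get a TLS number. The repository below is just a placeholder:

proxmox-backup-client benchmark
proxmox-backup-client benchmark --repository backup@pbs@192.168.1.2:store1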

Here are the different options I am thinking of benchmarking.

1. This is an old Intel NUC NUC7i3BNK with 8GB RAM. It only fits one fast NVMe. Of course that would not protect me from a single drive failure, but hey, SSDs are mostly pretty reliable. I would install it with ext4.

2. This is the same as above but with ZFS. Not sure if ZFS brings any advantage here, since PBS comes with checksums and stuff. Maybe some advantages because of the ARC? But slower writes because of ZIL?

3. An old AMD Ryzen 1700 build with 16GB RAM. Two very old 8TB HDD drives in a ZFS mirror.

4. Same build but with a special vdev consisting of an old 500GB Samsung Evo 870 and an Intel 535.

Any ideas? Inputs?
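For comparison, for option 1 I would roughly set it up like this (disk, mount point and datastore name are just placeholders):

mkfs.ext4 /dev/nvme0n1
mkdir -p /mnt/datastore/store1
mount /dev/nvme0n1 /mnt/datastore/store1
proxmox-backup-manager datastore create store1 /mnt/datastore/store1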
 
PBS requires fast disks, as it stores all data in thousands of subfolders inside the .chunks folder, so access is no longer sequential.
Option 1 is the fastest.
Forget ZFS if the drives are not datacenter drives with PLP.
PBS isn't comparable to VZDUMP, since PBS always does differential and deduplicated backups; for daily backups it's the solution.
 
I would only need a weekly backup.
I don't doubt that an NVMe would be the fastest option, but I wonder how much worse the other options are.

PLP for ZFS is a little bit overkill in my opinion. Sure sync writes are faster, but I normally don't have many sync writes.
 
ZFS writes extra data; even with PBS, which only writes new data, ZFS wears out disks more than ext4.
I don't see the advantage of ZFS on a single disk.
PBS has its own checksum verification and its Remote Sync.
 
Hmmm... not sure if ZFS really is slower or wears out drives more than ext4.
This depends on PBS, right? If PBS does not write data with sync, the wearout should be no different?
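If I really want to compare wearout, I guess I could read the drive's wear counters before and after a test period, something like this (device name is just an example):

smartctl -a /dev/nvme0 | grep -i -e 'percentage used' -e 'data units written'

For SATA SSDs the interesting attributes would be things like Wear_Leveling_Count or Total_LBAs_Written instead.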
 
Ok, time for some benchmarks. I will use this thread as my notebook ;)

The ghetto PC is an old AMD Ryzen 1700 8-core CPU.
16GB RAM
The SSD is a Samsung 960 EVO 500GB.
In theory it writes at 1,900 MB/s and 360k IOPS.
The HDD is a Toshiba MG10 20TB.
 
Welp, opening anything with nano crashes the system. I will have to find some other hardware.

Anyway, I just realized that the Proxmox benchmark tool does not really benchmark the disk itself but other parameters.
Since my NUC does not support SATA or 3.5" drives, I guess I will have to use a single NVMe drive.
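To benchmark the disk itself, I will probably just use fio with a small random-write pattern as a rough worst case (real PBS chunk files are larger, around 4 MiB, so take it with a grain of salt); the parameters here are only a starting point:

fio --name=pbs-sim --filename=/mnt/test/fio.dat --size=4G --bs=4k --rw=randwrite --ioengine=libaio --iodepth=16 --direct=1 --runtime=60 --time_based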
 
Thanks, that was an interesting read, although I am not sure I understood everything.
Let's see if I can better describe what I am looking for.

I am looking for an offsite PBS. Currently I am doing pretty fast backups (200MB/s) to a single ext4 HDD.
My main problem is that in case of fire or theft, I have no offsite backup. And of course 3-2-1 and ransomware protection.
So the idea is that I install a PBS in our second office and back up to it over TLS. Both offices have 1Gbit fiber.
So my main thoughts and questions are these:

  1. I am not really sure what performance I can expect from a single HDD vs HDD + special vdev vs SSD.
  2. As far as I understand it, the Proxmox benchmark TLS result on PVE will show me one bottleneck, the TLS performance on PVE. It will not do any actual storage benchmarks.
  3. Your GitHub project tries to emulate a real workload by writing random chunks (I put a crude sketch of that idea right after this list).
  4. I could run your "Run a simple test on your local disk / the disk you already mounted" on the PBS machine (which will become my remote machine) to get an idea of how the combinations from question #1 perform.
  5. Real-world performance will probably be worse since there is some added latency. Do you think it will be comparable to your SMB/NFS results?
  6. I will probably use ZFS instead of ext4, just to be safe, like Dunuin described here.
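Something like this crude loop is what I mean by emulating the chunk workload (paths and sizes are just examples, the real tool is obviously smarter):

# write 1000 random 4 MiB "chunks" into nested directories, similar to a .chunks layout
for i in $(seq -w 0 999); do
  d=/mnt/datastore/chunktest/${i:0:2}
  mkdir -p "$d"
  dd if=/dev/urandom of="$d/chunk_$i" bs=4M count=1 status=none
done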
 
It's not good to use only one HDD. Set up a ZFS pool with Mirror-0 of 2x HDD, Mirror-1 of 2x HDD, and a ZFS special device of 2x SSD, sized at approximately 10% of the raw HDD space.
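Roughly, creating such a pool would look like this (pool and device names are only placeholders; by default only metadata lands on the special vdev):

zpool create -o ashift=12 backup mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd special mirror /dev/sde /dev/sdf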

Remember to set a ZFS quota of approximately 80% of the resulting space of the ZFS pool with the 3 vdevs.
zfs set quota=<pool-space> <pool-name>
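For example, with roughly 16T of usable space in the pool, 80% would be something like (the pool name is just an example):
zfs set quota=13T tank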

Keep in mind that HDDs have only around 120 IOPS, while SSDs have more than 10k IOPS.
All ZFS metadata will then be stored in that special device vdev with 2x SSD (or more).
And you now have Mirror-0 with 2x HDD and Mirror-1 with 2x HDD for the raw data.
So your ZFS pool has:
  • Read IOPS: approx. 4x the read IOPS of a single HDD,
  • Write IOPS: approx. 2x the write IOPS of a single HDD, and
  • the speed of the SSDs mostly does not matter in this setup.
  • You can also set up the ZFS special device with more than two SSDs, which speeds up a directory listing (ls -lA) even more.
  • ZFS and the Proxmox Backup Server do millions of 4k random reads/writes when accessing the ZFS pool and the dataset.
Use only enterprise SSDs (SATA III) with PLP, for example the Kingston DC600M.

# https://www.kingston.com/en/ssd/dc600m-data-center-solid-state-drive

In Germany I use Alternate.de to order and pick up the parts.
 
I have PVE at location A.
I have PBS at location B.

I want to back up from A to B over TLS. That should be possible, right?
 
It's not recommended to back up to a PBS over WAN if the guest is running.
The guest slows down during a live backup, up to the point of crashing if the connection breaks or is too slow; there is a new fleecing option to mitigate this, but failures can still occur.
The more robust and recommended way is a PBS local to the PVE.
(PBS can even be installed as a package on top of PVE on the same host.)
Then the offsite PBS pulls/syncs the data from the first site. This is what I do.
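On the offsite PBS, that pull setup is basically a remote plus a sync job, roughly like this (names, host, fingerprint and schedule are only examples):

proxmox-backup-manager remote create site-a --host pbs-a.example.com --auth-id sync@pbs --fingerprint <fingerprint-of-site-a> --password 'SECRET'
proxmox-backup-manager sync-job create pull-site-a --remote site-a --remote-store store1 --store store1 --schedule daily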
 
Thank you guys for your input.
Since I stop my machines before a backup and don't need more than 100 MB/s from my Gigabit WAN, I think I will ditch the local PBS.
 
I finally had the time to run some benchmarks :)


On PVE
SHA256 speed: 614.09 MB/s
Compression speed: 592.06 MB/s
Decompress speed: 846.54 MB/s
AES256/GCM speed: 4700.58 MB/s
Verify speed: 354.22 MB/s


From PVE to PBS
proxmox-backup-client benchmark --repository 10.0.51.50:test
TLS speed: 117.54 MB/s
SHA256 speed: 615.98 MB/s
Compression speed: 590.80 MB/s
Decompress speed: 839.22 MB/s
AES256/GCM speed: 4712.22 MB/s
Verify speed: 352.30 MB/s

TLS speed seems to be line speed. Some of my sites have 10Gbit, others only have 60Mbit upload.
So my mileage will probably vary.
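To separate raw line speed from TLS overhead, a quick iperf3 run between the two machines might be worth it (assuming iperf3 is installed on both ends; the IP is the PBS from above):

# on the PBS
iperf3 -s
# on the PVE host
iperf3 -c 10.0.51.50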

On PBS
SHA256 speed: 324.15 MB/s
Compression speed: 332.37 MB/s
Decompress speed: 409.43 MB/s
AES256/GCM speed: 1945.79 MB/s
Verify speed: 179.43 MB/s


From PVE to PBS ext4 external USB 3.0 2.5" HDD
First run:
INFO: backup is sparse: 18.66 GiB (58%) total zero data
INFO: backup was done incrementally, reused 18.66 GiB (58%)
INFO: transferred 32.00 GiB in 190 seconds (172.5 MiB/s)
Second run with no changes:
INFO: backup is sparse: 18.66 GiB (58%) total zero data
INFO: backup was done incrementally, reused 32.00 GiB (100%)
INFO: transferred 32.00 GiB in 33 seconds (993.0 MiB/s)

This is probably the worst possible option, but it still performs reasonably well, at least for the second run. The second run is still faster than the NFS run we will see later.

From PVE to PBS ext4 INTEL SSDPEKKW128G7
First run:
INFO: backup is sparse: 18.66 GiB (58%) total zero data
INFO: backup was done incrementally, reused 18.66 GiB (58%)
INFO: transferred 32.00 GiB in 54 seconds (606.8 MiB/s)
Second run with no changes:
INFO: backup is sparse: 18.66 GiB (58%) total zero data
INFO: backup was done incrementally, reused 32.00 GiB (100%)
INFO: transferred 32.00 GiB in 32 seconds (1.0 GiB/s)

The Intel run was faster thanks to the drive being an NVMe SSD. The second run, on the other hand, was not faster than on the HDD.
Do you guys have an explanation for that? Some other bottleneck?


NFS share RAIDZ2, special vdev
transferred 32.00 GiB in 40 seconds (819.2 MiB/s)

Pretty fast, but not incremental.
 
Any idea what the bottleneck on the second run is?
Is it the host itself that is too slow, and is that why the results for the HDD and SSD are almost the same?
 
You are asking interesting questions here that deserve attention.
This is a long-ish thread.
How about you re-frame that question?

To help us understand, briefly summarize the config for the run and how it should relate to the other configs.
Then ask us why it's different from your expectations.
 
