PBS expected speeds and performance

Taledo

Hi all,

Following https://forum.proxmox.com/threads/hard-drive-strategy.137806/, we've managed to convince the bean counters to free up a bit of money for backup servers.
The configuration we ended up with is a bit different from the configs suggested in that thread.

For now, we've settled on two 740XDs with:
  • 2x Intel Xeon Gold 6154 CPUs
  • 128 GB of RAM
  • 16x 4 TB enterprise NVMe drives (sadly, the 16 TB drives were out of our budget)
  • 2x 500 GB drives for the OS
  • 2x 10G network ports

Yes, I know this isn't the best, but consider that our current production PBSs are still rocking an AMD Opteron 4334 processor (bet it's been a while since you've seen an Opteron)...

So far, I've set them up in my lab, installed PVE on one and PBS on the other.
They're connected locally by a 20G LACP LAG with a layer 3+4 hash policy.
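
For reference, the bond is the stock ifupdown2-style config Proxmox uses; a minimal sketch (the interface names and address are placeholders):

Code:
auto bond0
iface bond0 inet static
        address 10.10.10.2/24
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        # layer3+4 hashes on IP+port, so parallel streams can use both links
        bond-xmit-hash-policy layer3+4

Worth keeping in mind that a single TCP stream still tops out at 10G; the 20G aggregate only helps with multiple connections in parallel.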

For now, I've set up the datastores as RAIDZ2 pools on both the PVE and the PBS. I haven't touched the ZFS settings much beyond setting compression to LZ4 on the PBS pool.
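
The pool setup amounts to something like this (a sketch; device names are placeholders and the datastore name is made up):

Code:
# 16-wide RAIDZ2 pool on the NVMe drives
zpool create -o ashift=12 datastore raidz2 \
    /dev/disk/by-id/nvme-disk01 /dev/disk/by-id/nvme-disk02 \
    /dev/disk/by-id/nvme-disk03 /dev/disk/by-id/nvme-disk04
    # ... and so on for all 16 drives
zfs set compression=lz4 datastore
# register it with PBS
proxmox-backup-manager datastore create store1 /datastore

Many threads also suggest bumping recordsize to 1M on PBS datastores since the chunk files are large, but I haven't benchmarked that yet.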

Here are the benchmark values:

Code:
PBS:

root@pbs-test-ng-1:~# proxmox-backup-client benchmark 

SHA256 speed: 470.62 MB/s   
Compression speed: 446.17 MB/s   
Decompress speed: 624.96 MB/s   
AES256/GCM speed: 3028.92 MB/s   
Verify speed: 268.31 MB/s   


PVE (run against the PBS repository, hence the TLS line):

Uploaded 754 chunks in 5 seconds.

Time per request: 6669 microseconds.
TLS speed: 628.84 MB/s   
SHA256 speed: 478.59 MB/s   
Compression speed: 444.29 MB/s   
Decompress speed: 600.26 MB/s   
AES256/GCM speed: 3335.12 MB/s   
Verify speed: 267.75 MB/s

Real-life wise:

I've set up a VM on the PVE host with a 100G drive and filled a 70G file with data from /dev/urandom.
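(The file was generated inside the guest with a plain dd; the path and name here are illustrative.)

Code:
# ~70 GiB of incompressible, non-dedupable data
dd if=/dev/urandom of=/root/testfile bs=1M count=71680 status=progress

Random data is pretty much the worst case for PBS, since it defeats both compression and deduplication.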
Here's the backup result:

Code:
INFO: backup is sparse: 19.18 GiB (19%) total zero data

INFO: backup was done incrementally, reused 19.21 GiB (19%)

INFO: transferred 100.00 GiB in 284 seconds (360.6 MiB/s)

Restore-wise:
Code:
restore image complete (bytes=107374182400, duration=519.92s, speed=196.95MB/s)


Now, I've yet to properly test resilvering and disk replacement; maybe a 16x 4 TB RAIDZ2 will take forever to rebuild.
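
The procedure itself should just be the standard ZFS one; a sketch with placeholder names:

Code:
# swap the failed drive, then point ZFS at the replacement
zpool replace datastore /dev/disk/by-id/nvme-old /dev/disk/by-id/nvme-new
# watch the resilver progress
zpool status datastore

One consolation: ZFS resilvers only copy allocated blocks rather than whole disks, so on NVMe it should be far less painful than the classic HDD horror stories.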

In any case, I'm interested in y'all's "real life" backup values. The ~200 MB/s restore seems a bit underwhelming for an all-flash PBS, but there might be something else at play here.

Best Regards,

Taledo
 
Nice solution, and quite good PBS results.
We have a rather unusual setup, entirely without vzdump or PBS, on a bunch of even older hardware:
PVE on a Dell T620 (2x 6-core Xeon E5-2620 @ 2 GHz + HT, 32 GB RAM), with a DR backup store of 8x 4 TB HDDs as a PERC H710 RAID6 with XFS, compressed with pbzip2 and deduplicated with duperemove; 1x 10 Gbit link between the machines.
Fileserver running Rocky 9.5 on a Lenovo x3650 M4 (1x 4-core Xeon E5-2609 v2 @ 2.50 GHz + HT, 128 GB RAM), with 10x 4 TB HDDs as a ServeRAID M5110 RAID6 with XFS.
Demo with a 32 GB VM disk, vm-158-disk-1.qcow2:
Backup as reflink (32 GB): 0.002 s on the fileserver, 32774 MB / 0.002 s = 16 GB/s
Backup to PVE storage via NFS copy: 59.4 s, 32774 MB / 59.4 s = 550 MB/s
Restore as reflink (32 GB): 0.002 s on the fileserver, 32774 MB / 0.002 s = 16 GB/s (with ~11,000 disk versions available for restore across the current ~170 VM/LXC disks)
Restore from PVE storage via pbunzip2 to NFS: 71.1 s, 32774 MB / 71.1 s = 460 MB/s
PS: The fileserver was serving ~60 running VMs/LXCs (out of 80, with 20 powered off, across 4 of our 5 PVE nodes) and the PVE host was running a few LXCs/VMs plus duperemove on its RAID6 during these tests, so they were not without other load, just like yours.
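
For those who haven't seen the trick: on XFS with reflinks enabled, a "backup" of this kind is just a copy-on-write clone, so only metadata gets written, hence the 16 GB/s. Roughly (paths are illustrative):

Code:
# instant CoW copy of the disk image on the XFS fileserver
cp --reflink=always vm-158-disk-1.qcow2 \
    versions/vm-158-disk-1_$(date +%F).qcow2

The shared blocks only diverge (and consume extra space) as the live image changes afterwards.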
 
I thought the recommendation for PBS on ZFS was to use mirrors?
We run an AMD EPYC 9124 16-core with 384 GB RAM and 15 NVMe drives (7.68 TB, read-intensive), organized as 7 mirrors plus one spare.
No ZFS tuning, and in particular no compression or dedup (dedup kills it, really).
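
Laid out as pool geometry, that is roughly the following (a sketch; device names are placeholders):

Code:
zpool create -o ashift=12 pbs-store \
    mirror nvme0 nvme1 \
    mirror nvme2 nvme3 \
    mirror nvme4 nvme5 \
    mirror nvme6 nvme7 \
    mirror nvme8 nvme9 \
    mirror nvme10 nvme11 \
    mirror nvme12 nvme13 \
    spare nvme14

Mirrors give up capacity compared to RAIDZ2 but buy much better random-read IOPS and faster resilvers, which is why they're usually recommended for PBS verify and GC workloads.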

Running proxmox-backup-client on the PBS itself says:
Code:
$ proxmox-backup-client benchmark
SHA256 speed: 1821.54 MB/s
Compression speed: 547.58 MB/s
Decompress speed: 720.06 MB/s
AES256/GCM speed: 4063.94 MB/s
Verify speed: 515.17 MB/s
┌───────────────────────────────────┬─────────────────────┐
│ Name                              │ Value               │
╞═══════════════════════════════════╪═════════════════════╡
│ TLS (maximal backup upload speed) │ not tested          │
├───────────────────────────────────┼─────────────────────┤
│ SHA256 checksum computation speed │ 1821.54 MB/s (90%)  │
├───────────────────────────────────┼─────────────────────┤
│ ZStd level 1 compression speed    │ 547.58 MB/s (73%)   │
├───────────────────────────────────┼─────────────────────┤
│ ZStd level 1 decompression speed  │ 720.06 MB/s (60%)   │
├───────────────────────────────────┼─────────────────────┤
│ Chunk verification speed          │ 515.17 MB/s (68%)   │
├───────────────────────────────────┼─────────────────────┤
│ AES256 GCM encryption speed       │ 4063.94 MB/s (111%) │
└───────────────────────────────────┴─────────────────────┘
We're backing up our PVE VMs over 10 GbE, across 3 kilometers of single-mode fiber.
The server retrieves over HTTPS, so there is some overhead. We achieve a maximum of around 500 megabytes per second from 9 PVE nodes that have their storage on a Proxmox Ceph cluster.

Regards, Urs
 
Hopefully our backup and restore approach makes sense: as long as there is no problem on the primary fileserver, we back up and restore any VM disk at 16 GB/s (which just creates new metadata for the image file).
And if the primary fileserver ever has a big hardware problem, we wouldn't restore to it anyway, since it would be unavailable at that point; we'd switch the storage in the datacenter over to the other server. So the 550/460 MB/s values are academic: measured, but never actually used.
 