Hello Proxmox Community!
We are testing PBS as (maybe) our next backup solution for PVE. Currently, PBS runs virtualized on the PVE cluster, and the datastore is an external CIFS share (for now).
The whole cluster is connected at 20 Gbit/s, both to the CIFS share and between the nodes.
When benchmarking PBS with the CIFS share as the repository, we get the following results (note that this benchmark ran while backup jobs were active, e.g. the one posted further below):
Code:
Uploaded 334 chunks in 5 seconds.
Time per request: 15118 microseconds.
TLS speed: 277.43 MB/s
SHA256 speed: 314.79 MB/s
Compression speed: 578.94 MB/s
Decompress speed: 897.54 MB/s
AES256/GCM speed: 413.03 MB/s
Verify speed: 228.35 MB/s
┌───────────────────────────────────┬───────────────────┐
│ Name │ Value │
╞═══════════════════════════════════╪═══════════════════╡
│ TLS (maximal backup upload speed) │ 277.43 MB/s (22%) │
├───────────────────────────────────┼───────────────────┤
│ SHA256 checksum computation speed │ 314.79 MB/s (16%) │
├───────────────────────────────────┼───────────────────┤
│ ZStd level 1 compression speed │ 578.94 MB/s (77%) │
├───────────────────────────────────┼───────────────────┤
│ ZStd level 1 decompression speed │ 897.54 MB/s (75%) │
├───────────────────────────────────┼───────────────────┤
│ Chunk verification speed │ 228.35 MB/s (30%) │
├───────────────────────────────────┼───────────────────┤
│ AES256 GCM encryption speed │ 413.03 MB/s (11%) │
└───────────────────────────────────┴───────────────────┘
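For reference, these numbers come from the standard client benchmark, invoked roughly like this (the repository string is a placeholder for our actual user/host/datastore):
Code:
proxmox-backup-client benchmark --repository root@pam@pbs.example.local:cifs-datastore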
Given the TLS upload speed (~277 MB/s) and the chunk verification speed (~228 MB/s), it should theoretically be possible to back up at up to ~270 MB/s, or at least ~220 MB/s.
In practice, we sometimes see acceptable performance of about ~50 MB/s on average.
But most of the time, the performance looks like this:
Code:
INFO: starting new backup job: vzdump 224 --storage PBS --node [nodename] --mode snapshot --remove 0
INFO: Starting Backup of VM 224 (qemu)
INFO: Backup started at 2020-11-06 08:13:42
INFO: status = running
INFO: VM Name: [vm-name]
INFO: include disk 'virtio0' 'SSD-Storage2:vm-224-disk-1' 128G
INFO: include disk 'virtio1' 'SSD-Storage2:vm-224-disk-0' 1T
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/224/2020-11-06T07:13:42Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: enabling encryption
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'f98aa810-cc19-4946-bb45-bdb452af1282'
INFO: resuming VM again
INFO: virtio0: dirty-bitmap status: created new
INFO: virtio1: dirty-bitmap status: created new
INFO: 0% (404.0 MiB of 1.1 TiB) in 3s, read: 134.7 MiB/s, write: 97.3 MiB/s
INFO: 1% (11.5 GiB of 1.1 TiB) in 21m 14s, read: 9.0 MiB/s, write: 9.0 MiB/s
INFO: 2% (23.0 GiB of 1.1 TiB) in 42m 30s, read: 9.2 MiB/s, write: 6.6 MiB/s
INFO: 3% (34.6 GiB of 1.1 TiB) in 1h 18m 56s, read: 5.4 MiB/s, write: 5.4 MiB/s
INFO: 4% (46.1 GiB of 1.1 TiB) in 1h 40m 5s, read: 9.3 MiB/s, write: 8.0 MiB/s
INFO: 5% (57.6 GiB of 1.1 TiB) in 2h 21m 7s, read: 4.8 MiB/s, write: 4.8 MiB/s
INFO: 6% (69.1 GiB of 1.1 TiB) in 2h 49m 57s, read: 6.8 MiB/s, write: 6.8 MiB/s
INFO: 7% (80.6 GiB of 1.1 TiB) in 3h 22m 54s, read: 6.0 MiB/s, write: 5.9 MiB/s
INFO: 8% (92.3 GiB of 1.1 TiB) in 3h 54m 28s, read: 6.3 MiB/s, write: 5.6 MiB/s
INFO: 9% (103.7 GiB of 1.1 TiB) in 4h 12m 57s, read: 10.5 MiB/s, write: 4.8 MiB/s
The source storage (where the VM disks are) is fast SSD storage (about 1 GB/s read and write).
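To sanity-check that 1 GB/s figure, a minimal sequential read test on the source storage could look like this (assuming fio is installed; the volume path is purely a placeholder for one of the disks on SSD-Storage2):
Code:
# read-only sequential read test against the raw VM volume (path hypothetical)
fio --name=seqread --rw=read --bs=4M --size=8G --ioengine=psync --direct=1 \
    --readonly --filename=/dev/zvol/SSD-Storage2/vm-224-disk-1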
The PBS VM's hardware is:
* memory: 64 GB
* CPU: 31 cores
* disk: 64 GB, SSD performance
Is there a way to figure out why the backup read and write speeds are sometimes this low?
Is there a way to figure out where the bottleneck is (PVE --> PBS --> CIFS share)?
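For what it's worth, here is a minimal sketch of how each leg could be measured separately (assuming iperf3 and sysstat are installed; the hostname and mount path are placeholders for our real ones; if the CIFS mount rejects O_DIRECT, dropping oflag=direct and appending conv=fsync still gives a rough number):
Code:
# leg 1: raw network throughput from a PVE node to the PBS VM
iperf3 -s                          # on the PBS VM
iperf3 -c pbs.example.local -t 30  # on the PVE node

# leg 2: sequential write speed of the CIFS share, from inside the PBS VM
dd if=/dev/zero of=/mnt/pbs-datastore/ddtest.img bs=4M count=1024 oflag=direct status=progress
rm /mnt/pbs-datastore/ddtest.img

# leg 3: per-device I/O stats while a slow backup is running
iostat -x 5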
Best regards,
Flo