PBS and CPU performance

phs

Renowned Member
Dec 3, 2015
Hi,

I'm running PBS on dual E5-2660 v3 CPUs, and each backup task from PVE 6.4 seems to reach about 130-180 MB/s.
Multiple tasks running at the same time add up, so I don't think storage is the bottleneck.
PBS seems to make poor use of SMP for a single task, which is why I've been thinking about swapping the CPUs for E5-2643 v4:
https://www.cpubenchmark.net/compare/Intel-Xeon-E5-2660-v3-vs-Intel-Xeon-E5-2643-v4/2359vs2811
which in theory should boost single-thread performance by up to 25%.

What do you think? Is QAT an option?

┌───────────────────────────────────┬────────────────────┐
│ Name                              │ Value              │
╞═══════════════════════════════════╪════════════════════╡
│ TLS (maximal backup upload speed) │ 780.14 MB/s (63%)  │
├───────────────────────────────────┼────────────────────┤
│ SHA256 checksum computation speed │ 422.06 MB/s (21%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 compression speed    │ 438.96 MB/s (58%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 decompression speed  │ 690.73 MB/s (58%)  │
├───────────────────────────────────┼────────────────────┤
│ Chunk verification speed          │ 261.08 MB/s (34%)  │
├───────────────────────────────────┼────────────────────┤
│ AES256 GCM encryption speed       │ 1361.06 MB/s (37%) │
└───────────────────────────────────┴────────────────────┘

proxmox-backup: 2.2-1 (running kernel: 5.15.39-2-pve)
proxmox-backup-server: 2.2.5-1 (running version: 2.2.5)
pve-kernel-5.15: 7.2-7
pve-kernel-helper: 7.2-7
pve-kernel-5.15.39-2-pve: 5.15.39-2
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-1-pve: 5.15.35-3
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
proxmox-backup-docs: 2.2.5-1
proxmox-backup-client: 2.2.5-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.5.1
pve-xtermjs: 4.16.0-1
smartmontools: 7.2-pve3
zfsutils-linux: 2.1.5-pve1

cheers
phil
 
I'm running PBS on dual E5-2660 v3 CPUs, and each backup task from PVE 6.4 seems to reach about 130-180 MB/s.
Multiple tasks running at the same time add up, so I don't think storage is the bottleneck.
Note that during a backup, most of the CPU load happens on the client (so on the PVE node in your case), not on the PBS server.

Can you also run a proxmox-backup-client benchmark from the PVE client?
 
# proxmox-backup-client benchmark --repository SNIP
SNIP
Uploaded 799 chunks in 5 seconds.
Time per request: 6281 microseconds.
TLS speed: 667.72 MB/s
SHA256 speed: 486.35 MB/s
Compression speed: 604.83 MB/s
Decompress speed: 912.70 MB/s
AES256/GCM speed: 2291.27 MB/s
Verify speed: 331.60 MB/s
┌───────────────────────────────────┬────────────────────┐
│ Name                              │ Value              │
╞═══════════════════════════════════╪════════════════════╡
│ TLS (maximal backup upload speed) │ 667.72 MB/s (54%)  │
├───────────────────────────────────┼────────────────────┤
│ SHA256 checksum computation speed │ 486.35 MB/s (24%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 compression speed    │ 604.83 MB/s (80%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 decompression speed  │ 912.70 MB/s (76%)  │
├───────────────────────────────────┼────────────────────┤
│ Chunk verification speed          │ 331.60 MB/s (44%)  │
├───────────────────────────────────┼────────────────────┤
│ AES256 GCM encryption speed       │ 2291.27 MB/s (63%) │
└───────────────────────────────────┴────────────────────┘

# proxmox-backup-client version
client version: 1.1.10

Running on a Xeon 6146.



Another system, running on a Xeon E5-2680 v4:
# proxmox-backup-client benchmark --repository SNIP
SNIP
Uploaded 812 chunks in 5 seconds.
Time per request: 6178 microseconds.
TLS speed: 678.87 MB/s
SHA256 speed: 373.46 MB/s
Compression speed: 372.68 MB/s
Decompress speed: 536.06 MB/s
AES256/GCM speed: 1190.41 MB/s
Verify speed: 218.19 MB/s
┌───────────────────────────────────┬────────────────────┐
│ Name                              │ Value              │
╞═══════════════════════════════════╪════════════════════╡
│ TLS (maximal backup upload speed) │ 678.87 MB/s (55%)  │
├───────────────────────────────────┼────────────────────┤
│ SHA256 checksum computation speed │ 373.46 MB/s (18%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 compression speed    │ 372.68 MB/s (50%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 decompression speed  │ 536.06 MB/s (45%)  │
├───────────────────────────────────┼────────────────────┤
│ Chunk verification speed          │ 218.19 MB/s (29%)  │
├───────────────────────────────────┼────────────────────┤
│ AES256 GCM encryption speed       │ 1190.41 MB/s (33%) │
└───────────────────────────────────┴────────────────────┘

# proxmox-backup-client version
client version: 2.2.1


I'm not exactly convinced the client is the issue here...
 
I'm not exactly convinced the client is the issue here...

In general there are a few choke points that can occur during a backup:
* source storage
* client CPU (seems not to be the bottleneck here)
* network (according to the benchmark, also OK)
* PBS CPU (according to the benchmark, also not the bottleneck)
* PBS target storage

The fact that the overall bandwidth increases with multiple backups points to a CPU or network bottleneck (a single backup does not use all cores of either the client or the target machine, nor does it use multiple TCP connections), so I'd check those first (e.g. with iperf, fio, etc.).
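For the network part, a quick check could look something like the following (a minimal sketch only; the hostname is a placeholder for the PBS server):

Code:
# on the PBS server
iperf3 -s

# on the PVE node: raw TCP throughput towards PBS, single stream for 30s
iperf3 -c pbs.example.com -t 30
# and with several parallel streams, to compare against multiple concurrent backups
iperf3 -c pbs.example.com -t 30 -P 4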
 
[screenshot attachment]

Not sure how to test ZFS in such a configuration; it's HDDs with a special device, and it easily handles multiple backup tasks at once.

[screenshot attachment]
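(For reference, one rough way to test a datastore like this with fio is a sequential write of 4 MiB blocks, which roughly matches PBS fixed-size chunks, straight into the datastore directory. The path and size here are placeholders, and ZFS caching means the number is only an approximation:)

Code:
fio --name=chunkwrite --directory=/path/to/datastore \
    --rw=write --bs=4M --size=4G --ioengine=psync \
    --end_fsync=1 --numjobs=4 --group_reporting
# remove the test files afterwards
rm /path/to/datastore/chunkwrite.*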
 
Mhmm... can you post a task log from a single backup (ideally both the client (PVE) and the server (PBS) task)?
 
Client log below. Yes, the backup is limited to about 512 MB/s (ZFS mirror on 2x Intel DC 4510); this is the Xeon 6146 host with the older PVE.
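(The "bandwidth limit: 524288 KB/s" in the log below corresponds to that 512 MB/s cap. Assuming it comes from the node-wide vzdump configuration rather than a per-job or per-storage setting, it would look roughly like this:)

Code:
# /etc/vzdump.conf -- bwlimit is in KiB/s; 524288 KiB/s = 512 MiB/s
bwlimit: 524288

# one-off override for a manual backup (0 = unlimited)
vzdump 3457 --bwlimit 0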

Code:
INFO: Starting Backup of VM 3457 (qemu)
INFO: Backup started at 2022-08-09 04:23:19
INFO: status = running
INFO: VM Name: SNIP
INFO: include disk 'scsi0' 'zfs-local:vm-3457-disk-0' 1200G
INFO: backup mode: snapshot
INFO: bandwidth limit: 524288 KB/s
INFO: ionice priority: 7
INFO: skip unused drive 'zfs-local:vm-3457-disk-1' (not included into backup)
INFO: creating Proxmox Backup Server archive 'vm/3457/2022-08-09T02:23:19Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '9103a902-55a0-40c2-ab2e-ffb5a4d182d9'
INFO: resuming VM again
INFO: scsi0: dirty-bitmap status: OK (130.9 GiB of 1.2 TiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 130.9 GiB dirty of 1.2 TiB total
INFO:   0% (464.0 MiB of 130.9 GiB) in 3s, read: 154.7 MiB/s, write: 154.7 MiB/s
INFO:   1% (1.4 GiB of 130.9 GiB) in 10s, read: 140.6 MiB/s, write: 140.6 MiB/s
INFO:   2% (2.8 GiB of 130.9 GiB) in 19s, read: 152.0 MiB/s, write: 152.0 MiB/s
INFO:   3% (4.0 GiB of 130.9 GiB) in 27s, read: 162.5 MiB/s, write: 162.0 MiB/s
INFO:   4% (5.3 GiB of 130.9 GiB) in 36s, read: 142.2 MiB/s, write: 142.2 MiB/s
INFO:   5% (6.6 GiB of 130.9 GiB) in 46s, read: 137.2 MiB/s, write: 137.2 MiB/s
INFO:   6% (7.9 GiB of 130.9 GiB) in 56s, read: 132.0 MiB/s, write: 132.0 MiB/s
INFO:   7% (9.3 GiB of 130.9 GiB) in 1m 5s, read: 162.7 MiB/s, write: 162.7 MiB/s
INFO:   8% (10.5 GiB of 130.9 GiB) in 1m 13s, read: 151.0 MiB/s, write: 151.0 MiB/s
INFO:   9% (11.8 GiB of 130.9 GiB) in 1m 22s, read: 146.2 MiB/s, write: 146.2 MiB/s
INFO:  10% (13.2 GiB of 130.9 GiB) in 1m 32s, read: 141.6 MiB/s, write: 141.6 MiB/s
INFO:  11% (14.4 GiB of 130.9 GiB) in 1m 40s, read: 157.5 MiB/s, write: 157.5 MiB/s
INFO:  12% (15.7 GiB of 130.9 GiB) in 1m 49s, read: 149.8 MiB/s, write: 149.8 MiB/s
INFO:  13% (17.2 GiB of 130.9 GiB) in 1m 58s, read: 164.4 MiB/s, write: 164.4 MiB/s
INFO:  14% (18.3 GiB of 130.9 GiB) in 2m 5s, read: 172.0 MiB/s, write: 172.0 MiB/s
INFO:  15% (19.7 GiB of 130.9 GiB) in 2m 15s, read: 139.6 MiB/s, write: 139.6 MiB/s
INFO:  16% (21.0 GiB of 130.9 GiB) in 2m 24s, read: 146.7 MiB/s, write: 146.7 MiB/s
INFO:  17% (22.4 GiB of 130.9 GiB) in 2m 32s, read: 175.0 MiB/s, write: 175.0 MiB/s
INFO:  18% (23.7 GiB of 130.9 GiB) in 2m 40s, read: 167.0 MiB/s, write: 167.0 MiB/s
INFO:  19% (25.0 GiB of 130.9 GiB) in 2m 48s, read: 169.5 MiB/s, write: 169.5 MiB/s
INFO:  20% (26.2 GiB of 130.9 GiB) in 2m 56s, read: 154.5 MiB/s, write: 154.5 MiB/s
INFO:  21% (27.6 GiB of 130.9 GiB) in 3m 5s, read: 159.6 MiB/s, write: 159.6 MiB/s
INFO:  22% (28.8 GiB of 130.9 GiB) in 3m 14s, read: 138.2 MiB/s, write: 138.2 MiB/s
INFO:  23% (30.1 GiB of 130.9 GiB) in 3m 24s, read: 134.4 MiB/s, write: 134.4 MiB/s
INFO:  24% (31.4 GiB of 130.9 GiB) in 3m 33s, read: 148.9 MiB/s, write: 148.9 MiB/s
INFO:  25% (32.8 GiB of 130.9 GiB) in 3m 42s, read: 156.0 MiB/s, write: 156.0 MiB/s
INFO:  26% (34.1 GiB of 130.9 GiB) in 3m 50s, read: 159.5 MiB/s, write: 159.5 MiB/s
INFO:  27% (35.4 GiB of 130.9 GiB) in 4m, read: 136.4 MiB/s, write: 136.4 MiB/s
INFO:  28% (36.7 GiB of 130.9 GiB) in 4m 10s, read: 134.8 MiB/s, write: 134.8 MiB/s
INFO:  29% (38.1 GiB of 130.9 GiB) in 4m 20s, read: 138.8 MiB/s, write: 138.8 MiB/s
INFO:  30% (39.3 GiB of 130.9 GiB) in 4m 29s, read: 139.6 MiB/s, write: 139.6 MiB/s
INFO:  31% (40.7 GiB of 130.9 GiB) in 4m 39s, read: 142.8 MiB/s, write: 142.8 MiB/s
INFO:  32% (41.9 GiB of 130.9 GiB) in 4m 48s, read: 141.3 MiB/s, write: 141.3 MiB/s
INFO:  33% (43.3 GiB of 130.9 GiB) in 4m 58s, read: 141.6 MiB/s, write: 141.6 MiB/s
INFO:  34% (44.5 GiB of 130.9 GiB) in 5m 7s, read: 137.8 MiB/s, write: 137.8 MiB/s
INFO:  35% (45.9 GiB of 130.9 GiB) in 5m 18s, read: 131.6 MiB/s, write: 131.6 MiB/s
INFO:  36% (47.1 GiB of 130.9 GiB) in 5m 27s, read: 136.9 MiB/s, write: 136.9 MiB/s
INFO:  37% (48.6 GiB of 130.9 GiB) in 5m 36s, read: 164.0 MiB/s, write: 164.0 MiB/s
INFO:  38% (49.8 GiB of 130.9 GiB) in 5m 44s, read: 152.0 MiB/s, write: 152.0 MiB/s
INFO:  39% (51.1 GiB of 130.9 GiB) in 5m 54s, read: 132.4 MiB/s, write: 132.4 MiB/s
INFO:  40% (52.5 GiB of 130.9 GiB) in 6m 4s, read: 147.6 MiB/s, write: 147.6 MiB/s
INFO:  41% (53.7 GiB of 130.9 GiB) in 6m 13s, read: 137.3 MiB/s, write: 137.3 MiB/s
INFO:  42% (55.0 GiB of 130.9 GiB) in 6m 22s, read: 146.2 MiB/s, write: 146.2 MiB/s
INFO:  43% (56.3 GiB of 130.9 GiB) in 6m 32s, read: 138.4 MiB/s, write: 138.4 MiB/s
INFO:  44% (57.7 GiB of 130.9 GiB) in 6m 42s, read: 140.8 MiB/s, write: 140.8 MiB/s
INFO:  45% (58.9 GiB of 130.9 GiB) in 6m 51s, read: 137.3 MiB/s, write: 137.3 MiB/s
INFO:  46% (60.2 GiB of 130.9 GiB) in 7m, read: 150.7 MiB/s, write: 150.7 MiB/s
INFO:  47% (61.6 GiB of 130.9 GiB) in 7m 10s, read: 134.8 MiB/s, write: 134.8 MiB/s
INFO:  48% (62.9 GiB of 130.9 GiB) in 7m 19s, read: 150.7 MiB/s, write: 150.7 MiB/s
INFO:  49% (64.2 GiB of 130.9 GiB) in 7m 28s, read: 149.3 MiB/s, write: 149.3 MiB/s
INFO:  50% (65.5 GiB of 130.9 GiB) in 7m 36s, read: 161.5 MiB/s, write: 161.5 MiB/s
INFO:  51% (66.8 GiB of 130.9 GiB) in 7m 44s, read: 171.0 MiB/s, write: 171.0 MiB/s
INFO:  52% (68.1 GiB of 130.9 GiB) in 7m 52s, read: 167.5 MiB/s, write: 167.5 MiB/s
INFO:  53% (69.4 GiB of 130.9 GiB) in 8m 1s, read: 146.7 MiB/s, write: 146.7 MiB/s
INFO:  54% (70.8 GiB of 130.9 GiB) in 8m 9s, read: 178.0 MiB/s, write: 178.0 MiB/s
INFO:  55% (72.0 GiB of 130.9 GiB) in 8m 17s, read: 160.5 MiB/s, write: 160.5 MiB/s
INFO:  56% (73.3 GiB of 130.9 GiB) in 8m 25s, read: 164.5 MiB/s, write: 164.5 MiB/s
INFO:  57% (74.7 GiB of 130.9 GiB) in 8m 34s, read: 157.3 MiB/s, write: 157.3 MiB/s
INFO:  58% (75.9 GiB of 130.9 GiB) in 8m 42s, read: 156.5 MiB/s, write: 156.5 MiB/s
INFO:  59% (77.3 GiB of 130.9 GiB) in 8m 52s, read: 142.8 MiB/s, write: 142.8 MiB/s
INFO:  60% (78.6 GiB of 130.9 GiB) in 9m 2s, read: 132.0 MiB/s, write: 132.0 MiB/s
INFO:  61% (79.9 GiB of 130.9 GiB) in 9m 12s, read: 129.6 MiB/s, write: 129.6 MiB/s
INFO:  62% (81.2 GiB of 130.9 GiB) in 9m 21s, read: 152.4 MiB/s, write: 152.4 MiB/s
INFO:  63% (82.6 GiB of 130.9 GiB) in 9m 30s, read: 152.4 MiB/s, write: 152.4 MiB/s
INFO:  64% (83.8 GiB of 130.9 GiB) in 9m 39s, read: 142.7 MiB/s, write: 142.7 MiB/s
INFO:  65% (85.2 GiB of 130.9 GiB) in 9m 48s, read: 155.6 MiB/s, write: 155.6 MiB/s
INFO:  66% (86.4 GiB of 130.9 GiB) in 9m 56s, read: 162.0 MiB/s, write: 162.0 MiB/s
INFO:  67% (87.9 GiB of 130.9 GiB) in 10m 5s, read: 161.3 MiB/s, write: 161.3 MiB/s
INFO:  68% (89.1 GiB of 130.9 GiB) in 10m 13s, read: 162.5 MiB/s, write: 162.5 MiB/s
INFO:  69% (90.4 GiB of 130.9 GiB) in 10m 21s, read: 166.5 MiB/s, write: 166.5 MiB/s
INFO:  70% (91.8 GiB of 130.9 GiB) in 10m 29s, read: 169.5 MiB/s, write: 169.5 MiB/s
INFO:  71% (93.0 GiB of 130.9 GiB) in 10m 37s, read: 153.0 MiB/s, write: 153.0 MiB/s
INFO:  72% (94.3 GiB of 130.9 GiB) in 10m 47s, read: 141.6 MiB/s, write: 141.6 MiB/s
INFO:  73% (95.6 GiB of 130.9 GiB) in 10m 55s, read: 166.0 MiB/s, write: 166.0 MiB/s
INFO:  74% (96.9 GiB of 130.9 GiB) in 11m 3s, read: 158.5 MiB/s, write: 158.5 MiB/s
INFO:  75% (98.3 GiB of 130.9 GiB) in 11m 13s, read: 144.0 MiB/s, write: 144.0 MiB/s
INFO:  76% (99.6 GiB of 130.9 GiB) in 11m 21s, read: 164.0 MiB/s, write: 164.0 MiB/s
INFO:  77% (100.9 GiB of 130.9 GiB) in 11m 29s, read: 175.0 MiB/s, write: 175.0 MiB/s
INFO:  78% (102.2 GiB of 130.9 GiB) in 11m 37s, read: 159.0 MiB/s, write: 159.0 MiB/s
INFO:  79% (103.5 GiB of 130.9 GiB) in 11m 47s, read: 132.8 MiB/s, write: 132.8 MiB/s
INFO:  80% (104.8 GiB of 130.9 GiB) in 11m 56s, read: 152.9 MiB/s, write: 152.9 MiB/s
INFO:  81% (106.1 GiB of 130.9 GiB) in 12m 5s, read: 147.6 MiB/s, write: 147.6 MiB/s
INFO:  82% (107.4 GiB of 130.9 GiB) in 12m 14s, read: 146.7 MiB/s, write: 146.7 MiB/s
INFO:  83% (108.8 GiB of 130.9 GiB) in 12m 23s, read: 159.1 MiB/s, write: 159.1 MiB/s
INFO:  84% (110.0 GiB of 130.9 GiB) in 12m 31s, read: 151.0 MiB/s, write: 151.0 MiB/s
INFO:  85% (111.3 GiB of 130.9 GiB) in 12m 40s, read: 148.0 MiB/s, write: 148.0 MiB/s
INFO:  86% (112.7 GiB of 130.9 GiB) in 12m 50s, read: 141.6 MiB/s, write: 141.6 MiB/s
INFO:  87% (114.0 GiB of 130.9 GiB) in 12m 59s, read: 149.3 MiB/s, write: 149.3 MiB/s
INFO:  88% (115.3 GiB of 130.9 GiB) in 13m 8s, read: 148.4 MiB/s, write: 148.4 MiB/s
INFO:  89% (116.5 GiB of 130.9 GiB) in 13m 17s, read: 144.0 MiB/s, write: 144.0 MiB/s
INFO:  90% (117.8 GiB of 130.9 GiB) in 13m 26s, read: 146.2 MiB/s, write: 146.2 MiB/s
INFO:  91% (119.2 GiB of 130.9 GiB) in 13m 36s, read: 144.8 MiB/s, write: 144.8 MiB/s
INFO:  92% (120.5 GiB of 130.9 GiB) in 13m 45s, read: 148.4 MiB/s, write: 148.4 MiB/s
INFO:  93% (121.8 GiB of 130.9 GiB) in 13m 53s, read: 154.5 MiB/s, write: 154.5 MiB/s
INFO:  94% (123.1 GiB of 130.9 GiB) in 14m 2s, read: 150.2 MiB/s, write: 150.2 MiB/s
INFO:  95% (124.5 GiB of 130.9 GiB) in 14m 11s, read: 160.4 MiB/s, write: 160.4 MiB/s
INFO:  96% (125.7 GiB of 130.9 GiB) in 14m 19s, read: 156.5 MiB/s, write: 156.5 MiB/s
INFO:  97% (127.1 GiB of 130.9 GiB) in 14m 28s, read: 158.2 MiB/s, write: 158.2 MiB/s
INFO:  98% (128.3 GiB of 130.9 GiB) in 14m 36s, read: 159.0 MiB/s, write: 159.0 MiB/s
INFO:  99% (129.6 GiB of 130.9 GiB) in 14m 44s, read: 166.0 MiB/s, write: 166.0 MiB/s
INFO: 100% (130.9 GiB of 130.9 GiB) in 14m 54s, read: 130.4 MiB/s, write: 130.4 MiB/s
INFO: backup was done incrementally, reused 1.04 TiB (89%)
INFO: transferred 130.91 GiB in 894 seconds (149.9 MiB/s)
INFO: Finished Backup of VM 3457 (00:14:55)
INFO: Backup finished at 2022-08-09 04:38:14
Result: {
  "data": null
}

PBS:
Code:
2022-08-09T04:23:20+02:00: starting new backup on datastore 'SNIP': "vm/3457/2022-08-09T02:23:19Z"
2022-08-09T04:23:20+02:00: download 'index.json.blob' from previous backup.
2022-08-09T04:23:20+02:00: register chunks in 'drive-scsi0.img.fidx' from previous backup.
2022-08-09T04:23:20+02:00: download 'drive-scsi0.img.fidx' from previous backup.
2022-08-09T04:23:20+02:00: created new fixed index 1 ("vm/3457/2022-08-09T02:23:19Z/drive-scsi0.img.fidx")
2022-08-09T04:23:20+02:00: add blob "/data01/SNIP/vm/3457/2022-08-09T02:23:19Z/qemu-server.conf.blob" (308 bytes, comp: 308)
2022-08-09T04:38:14+02:00: Upload statistics for 'drive-scsi0.img.fidx'
2022-08-09T04:38:14+02:00: UUID: bf685439c8d14302bc97966de97efdad
2022-08-09T04:38:14+02:00: Checksum: 9eaad79ea4d200a68ce40be3dc64894aa0ca94bf1f41b42edd5ebc39e7da431d
2022-08-09T04:38:14+02:00: Size: 140559515648
2022-08-09T04:38:14+02:00: Chunk count: 33512
2022-08-09T04:38:14+02:00: Upload size: 140559515648 (100%)
2022-08-09T04:38:14+02:00: Duplicates: 0+2 (0%)
2022-08-09T04:38:14+02:00: Compression: 40%
2022-08-09T04:38:14+02:00: successfully closed fixed index 1
2022-08-09T04:38:14+02:00: add blob "/data01/SNIP/vm/3457/2022-08-09T02:23:19Z/index.json.blob" (325 bytes, comp: 325)
2022-08-09T04:38:14+02:00: successfully finished backup
2022-08-09T04:38:14+02:00: backup finished successfully
2022-08-09T04:38:14+02:00: TASK OK

PBS:
[screenshot attachment]
 
Something else to look at would be system I/O. iperf behaves differently from most real network traffic, where the packet rate puts load on the Linux bridge and networking stack. You can increase the MTU by using jumbo frames, and also switch from a Linux bridge to OVS with DPDK if your hardware NIC supports it. This bypasses the in-kernel network stack and in most instances speeds up networking and reduces iowait on the CPU.
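As a minimal sketch of the jumbo-frame part (assuming ifupdown2 on the PVE node; the interface names and address are placeholders, and every hop, including the switch and the PBS side, needs the same MTU):

Code:
# /etc/network/interfaces (fragment)
auto eno1
iface eno1 inet manual
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 9000

# verify end-to-end without fragmentation (8972 = 9000 - 20 IP - 8 ICMP)
ping -M do -s 8972 <pbs-address>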
 
