PBS with NVME storage way too slow? How to optimize?

Mar 14, 2026
Hello,
I've installed the current PBS on a dedicated server in a datacenter (OVH).

AMD EPYC 9135 16-Core Processor (1 Socket)
128 GB RAM
2x 1 TB NVMe SSD (mirror) for OS/PBS
6x 8 TB NVMe SSD (Samsung MZWL67T6HBLC-00AW7)
LAN is 25GBit
Installed native from ISO

I created a ZFS pool from the 6 SSDs with striping and RAIDZ.
I need about 40 TB of capacity, so I can't use mirrors.
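For what it's worth, the capacity math works out if this is one RAIDZ1 vdev across all six drives (an assumption on my part, since "striping and RAIDZ" can be read several ways):

```shell
# Assumption: a single RAIDZ1 vdev spanning all six 8 TB drives.
drives=6
size_tb=8
raw=$((drives * size_tb))            # total raw capacity
usable=$(((drives - 1) * size_tb))   # RAIDZ1 loses one drive's worth to parity
echo "raw: ${raw} TB, usable: ~${usable} TB"
```

That lands right at the ~40 TB requirement, before ZFS overhead.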

https://semiconductor.samsung.com/ssd/datacenter-ssd/pm9d3a/mzwl67t6hblc-00aw7-bw7/
PCIe 5.0x4, 6800 MB/s write, 12000 MB/s read, 2000K IOPS random read

I create backups from 3 Proxmox VE servers.
There is a Windows VM with MailStore; one of its disks is 5 TB.
There is no way to use a more parallel approach.

When I verify or restore it locally, I can't get past 800 MB/s.
CPU usage is about 2%.
The disks should easily reach 3 GB/s.

2026-03-14T17:17:10+00:00: verify data0:vm/100707102/2026-03-12T01:03:51Z
2026-03-14T17:17:10+00:00: check qemu-server.conf.blob
2026-03-14T17:17:10+00:00: check drive-tpmstate0-backup.img.fidx
2026-03-14T17:17:10+00:00: verified 0.01/4.00 MiB in 0.00 seconds, speed 9.53/5766.13 MiB/s (0 errors)
2026-03-14T17:17:10+00:00: check drive-scsi2.img.fidx
2026-03-14T17:18:25+00:00: verified 58744.30/59388.00 MiB in 74.76 seconds, speed 785.72/794.33 MiB/s (0 errors)
2026-03-14T17:18:25+00:00: check drive-scsi0.img.fidx

For comparison, I get 400 MB/s with 24 TB HDDs in RAID5.

I know PBS works with many small chunks for deduplication, and these need to be put together again.
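A rough model of why a single large disk restore ends up latency-bound (a sketch; 4 MiB is the default fixed chunk size PBS uses for VM disk images):

```shell
# A 5 TiB disk image split into fixed 4 MiB chunks (PBS default for VM images)
image_mib=$((5 * 1024 * 1024))     # 5 TiB expressed in MiB
chunk_mib=4
chunks=$((image_mib / chunk_mib))
echo "chunks: ${chunks}"
# At ~763 MiB/s (the observed ~800 MB/s) a restore consumes roughly
# 190 chunks per second, i.e. only ~5 ms per chunk for lookup, read,
# decompression and write-out, end to end.
rate_mib=763
per_sec=$((rate_mib / chunk_mib))
echo "chunks/s: ${per_sec}"
```

With over a million chunks per disk, small per-chunk latencies add up long before raw disk bandwidth is exhausted.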

What am I doing wrong?
Or what do I need to do to increase restore speed?

I've described it in German here: https://administrator.de/forum/proxmox-pbs-storage-performance-677340.html

Thanks
Stefan
 
Update

If I tune verify, I get 5.1 GBit/s and more:
proxmox-backup-manager datastore update data0 \
  --tuning 'default-verification-readers=16,default-verification-workers=32'

So PBS, the hardware, and the configuration can deliver this speed.
But restore can't?

When I see this from 2023, I ask myself why restore has not been updated as well.

Stefan
 
What PVE version? Restore happens on the PVE side. PVE 8.4 required updates to allow this: "Restoring from Proxmox Backup Server now fetches multiple chunks concurrently in each worker thread.
This can improve the restore performance in setups with fast network connections and disks.
The number of concurrently fetched chunks and worker threads can be customized using environment variables."
 
Hello, Proxmox VE is 9.1.5.
I tried md striping with ext4.
Currently I can restore at about 1.4 GB/s.
Better than the 800 MB/s with ZFS striping or RAIDZ1, but slower than what the 2x 25 GBit links allow.

The disks and LAN should deliver 3 GB/s.
CPU on the PBS is at 5%, RAM usage is 5 of 128 GB, swap is 84 KB.

During verify I get 6 GB/s.
Backup runs at up to 3 GB/s with 2 PVE hosts sending 5 TB of new data.

iperf3 shows about 22 GBit of usable LAN bandwidth.

Are there any other options besides these?
default-verification-readers=16,default-verification-workers=32


Why is there no built-in autotune or benchmark?
 
Feature request

This is the information shown during backup:
INFO: 64% (3.3 TiB of 5.1 TiB) in 46m 23s, read: 1.2 GiB/s, write: 1.2 GiB/s

How about an indication of what the current bottleneck is: storage, network, or PBS?
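Until something like that exists, a manual version is simple enough: compare the observed restore rate against each ceiling already measured in this thread (the numbers below are taken from the posts above; the computation itself is just arithmetic):

```shell
# Rough bottleneck check using figures from this thread.
observed=1400   # MB/s, restore onto md striping + ext4
disk=3000       # MB/s, what the NVMe pool should sustain
net=2750        # MB/s, ~22 GBit/s measured with iperf3
tls=2139        # MB/s, single-connection TLS from the benchmark
for pair in "disk:$disk" "network:$net" "tls:$tls"; do
  name=${pair%%:*}
  limit=${pair##*:}
  echo "${name}: $((observed * 100 / limit))% of ceiling"
done
```

Whichever resource sits closest to 100% of its ceiling is the first suspect; here none of them do, which points at per-chunk latency rather than raw bandwidth.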
 
For the restore process, you can tune settings:
 
What does "proxmox-backup-client benchmark --repository <...>" say if you run it on one of the PVE hosts? At what speed does a single backup task peak?

It might be that the actual transfer over HTTP/2 with TLS over a single TCP connection is your bottleneck.
 
Hello Fabian,

Specs for the server are above.
The system is capable of reaching 2.5-3 GB/s for backup and restore.

Code:
root@ns31872233:~# proxmox-backup-client benchmark --repository 'eu02@pbs@10.2.0.103:data0'
Password for "eu02@pbs": ********************************
Uploaded 2555 chunks in 5 seconds.
Time per request: 1960 microseconds.
TLS speed: 2139.65 MB/s
SHA256 speed: 2664.45 MB/s
Compression speed: 1213.59 MB/s
Decompress speed: 1856.75 MB/s
AES256/GCM speed: 20907.72 MB/s
Verify speed: 1068.22 MB/s
┌───────────────────────────────────┬──────────────────────┐
│ Name                              │ Value                │
╞═══════════════════════════════════╪══════════════════════╡
│ TLS (maximal backup upload speed) │ 2139.65 MB/s (173%)  │
├───────────────────────────────────┼──────────────────────┤
│ SHA256 checksum computation speed │ 2664.45 MB/s (132%)  │
├───────────────────────────────────┼──────────────────────┤
│ ZStd level 1 compression speed    │ 1213.59 MB/s (161%)  │
├───────────────────────────────────┼──────────────────────┤
│ ZStd level 1 decompression speed  │ 1856.75 MB/s (155%)  │
├───────────────────────────────────┼──────────────────────┤
│ Chunk verification speed          │ 1068.22 MB/s (141%)  │
├───────────────────────────────────┼──────────────────────┤
│ AES256 GCM encryption speed       │ 20907.72 MB/s (574%) │
└───────────────────────────────────┴──────────────────────┘
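One thing worth noting from that table: the single-connection TLS figure, converted to GBit/s, sits well below what iperf3 measured on the LAN, so one HTTP/2 connection alone cannot fill the link (a simple unit conversion of the benchmark number above):

```shell
tls_mb=2139                      # MB/s, from the benchmark table
gbit=$((tls_mb * 8 / 1000))      # MB/s -> GBit/s (decimal)
echo "TLS ceiling: ~${gbit} GBit/s vs ~22 GBit/s measured LAN"
```

That would fit Fabian's suggestion that the single TLS/TCP connection is part of the bottleneck.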

/data0 is the ZFS storage for the VMs.
Local speed:

root@ns31872233:/data0# time dd if=/dev/zero of=/data0/testfile.bin bs=1G count=8 oflag=direct status=progress
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 1.36662 s, 6.3 GB/s

root@ns31872233:/data0# time dd if=/data0/testfile.bin of=/dev/null bs=1G iflag=direct status=progress
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.903823 s, 9.5 GB/s


This is the boot drive (2x 1 TB NVMe SSD).
It shouldn't matter?

root@ns31872233:/data0# time dd if=/dev/zero of=/root/testfile.bin bs=1G count=8 oflag=direct status=progress
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 6.26043 s, 1.4 GB/s

root@ns31872233:/data0# time dd if=/root/testfile.bin of=/dev/null bs=1G iflag=direct status=progress
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.727295 s, 11.8 GB/s