PBS with NVMe storage way too slow? How to optimize?

Hello,

Here is a restore run locally on the PBS itself; the results over the network are the same.

I also tried fio with a sequential 4 MiB read workload (output further below)...

root@ns3265905:~# time proxmox-backup-client restore 'vm/100606102/2026-03-16T01:00:01Z' 'drive-scsi0.img.fidx' - --repository "$PBS_REPOSITORY" --ns eu0202 --keyfile /root/x1 > /dev/null
Using encryption key from '/root/x1'..
Fingerprint: 82:2e:d7:8f:be:b4:d9:17
restore image complete (bytes=107374182400, duration=118.48s, speed=864.28MB/s)

real 1m58.506s
user 1m1.365s
sys 0m40.634s

iostat taken during the restore:

avg-cpu: %user %nice %system %iowait %steal %idle
1.89 0.00 1.67 0.05 0.00 96.38

Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md10 2293.50 456.91 0.00 0.00 0.14 204.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.31 2.85
md2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md3 0.00 0.00 0.00 0.00 0.00 0.00 2.50 0.01 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme0n1 365.50 78.12 30.50 7.70 0.16 218.88 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.06 2.25
nvme1n1 355.50 76.67 33.00 8.49 0.16 220.83 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.06 2.25
nvme2n1 346.50 75.95 28.00 7.48 0.16 224.44 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.06 2.30
nvme3n1 349.00 75.66 27.00 7.18 0.16 222.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.06 2.40
nvme4n1 357.00 76.96 30.00 7.75 0.16 220.73 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.06 2.30
nvme5n1 0.00 0.00 0.00 0.00 0.00 0.00 3.50 0.02 1.50 30.00 0.00 4.71 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme6n1 0.00 0.00 0.00 0.00 0.00 0.00 3.50 0.02 1.50 30.00 0.00 4.71 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme7n1 341.50 73.55 30.00 8.08 0.16 220.54 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.06 2.30
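A quick sanity check on the numbers above (a sketch, with the values copied from the restore log and time(1) output) suggests the restore is CPU-bound rather than disk-bound: the NVMe devices sit at only ~2-3% utilization while user+sys account for most of the wall time. Note also that the logged "MB/s" appears to actually be MiB/s (bytes / duration / 2^20):

```python
# Back-of-the-envelope check using the figures from the restore above.
bytes_restored = 107374182400   # "bytes=..." from the restore log
duration = 118.48               # "duration=..." from the restore log
wall, user, sys_t = 118.506, 61.365, 40.634  # real/user/sys from time(1)

mib_per_s = bytes_restored / duration / 2**20   # matches the logged 864.28
cpu_share = (user + sys_t) / wall               # fraction of wall time on CPU

print(f"{mib_per_s:.2f} MiB/s, CPU busy for {cpu_share:.0%} of wall time")
```

If that reading is right, faster storage will not move the needle much: the single restore stream is limited by decryption/decompression CPU, not by the md10 array.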

root@ns3265905:~# fio --name=seq-read --directory=/data0 --rw=read --bs=4M --size=100G --numjobs=1 --direct=1 --runtime=60 --group_reporting
seq-read: (g=0): rw=read, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=1
fio-3.39
Starting 1 process
seq-read: Laying out IO file (1 file / 102400MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=17.2GiB/s][r=4394 IOPS][eta 00m:00s]
seq-read: (groupid=0, jobs=1): err= 0: pid=48317: Mon Mar 16 20:58:10 2026
read: IOPS=4400, BW=17.2GiB/s (18.5GB/s)(100GiB/5818msec)
clat (usec): min=194, max=1423, avg=227.06, stdev=36.23
lat (usec): min=194, max=1423, avg=227.08, stdev=36.23
clat percentiles (usec):
| 1.00th=[ 198], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 204],
| 30.00th=[ 206], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 221],
| 70.00th=[ 231], 80.00th=[ 245], 90.00th=[ 281], 95.00th=[ 289],
| 99.00th=[ 343], 99.50th=[ 367], 99.90th=[ 416], 99.95th=[ 445],
| 99.99th=[ 1385]
bw ( MiB/s): min=16608, max=18304, per=100.00%, avg=17611.64, stdev=498.68, samples=11
iops : min= 4152, max= 4576, avg=4402.91, stdev=124.67, samples=11
lat (usec) : 250=81.22%, 500=18.75%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.02%
cpu : usr=0.17%, sys=31.36%, ctx=25602, majf=0, minf=1034
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=25600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=17.2GiB/s (18.5GB/s), 17.2GiB/s-17.2GiB/s (18.5GB/s-18.5GB/s), io=100GiB (107GB), run=5818-5818msec

Disk stats (read/write):
md10: ios=482961/0, sectors=208232448/0, merge=0/0, ticks=64960/0, in_queue=64960, util=91.86%, aggrios=81066/0, aggsectors=34952533/0, aggrmerge=0/0, aggrticks=12818/0, aggrin_queue=12818, aggrutil=76.37%
nvme0n1: ios=93870/0, sectors=34954240/0, merge=0/0, ticks=14755/0, in_queue=14756, util=75.78%
nvme3n1: ios=93890/0, sectors=34964480/0, merge=0/0, ticks=14563/0, in_queue=14563, util=75.86%
nvme2n1: ios=93840/0, sectors=34938880/0, merge=0/0, ticks=14664/0, in_queue=14664, util=76.37%
nvme1n1: ios=68240/0, sectors=34938880/0, merge=0/0, ticks=10872/0, in_queue=10872, util=70.75%
nvme4n1: ios=68270/0, sectors=34954240/0, merge=0/0, ticks=10985/0, in_queue=10985, util=71.06%
nvme7n1: ios=68290/0, sectors=34964480/0, merge=0/0, ticks=11072/0, in_queue=11071, util=71.73%



iostat taken during the fio run:

avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.92 1.90 0.00 97.18

Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md10 82856.00 17443.47 0.00 0.00 0.14 215.58 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11.36 92.70
md2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme0n1 15982.50 2905.38 0.00 0.00 0.16 186.15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.52 79.20
nvme1n1 11630.00 2907.38 0.00 0.00 0.16 255.99 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.87 72.60
nvme2n1 15990.50 2907.48 0.00 0.00 0.16 186.19 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.49 77.70
nvme3n1 15996.50 2908.99 0.00 0.00 0.16 186.22 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.49 78.65
nvme4n1 11621.50 2905.38 0.00 0.00 0.16 256.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.85 72.55
nvme5n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme6n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme7n1 11635.00 2908.75 0.00 0.00 0.16 256.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.88 73.20
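One caveat about the benchmark: the fio job above is a sequential read, but during a restore PBS fetches ~4 MiB chunks from the datastore's chunk directory in effectively random on-disk order. A closer (still rough) approximation would be a random-read job at queue depth 1, for example this jobfile sketch (adjust directory and size to your datastore):

```ini
; approx-restore.fio - rough approximation of PBS restore I/O:
; 4 MiB chunks, read in effectively random order, single stream
[pbs-restore-like]
directory=/data0
rw=randread
bs=4M
size=100G
numjobs=1
iodepth=1
direct=1
runtime=60
group_reporting
```

The gap between this randread result and the 17.2 GiB/s sequential figure would show how much of the difference is access pattern versus the restore pipeline itself.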
 
One last thing to try, although it seems unlikely based on your benchmark numbers, would be to retry the test with an unencrypted backup (you can of course use test data for this instead of real production data).