Proxmox disk performance

Jul 2, 2025
Good afternoon,

I am testing the following setup:
  • Server: Dell R450
  • CPU: 32 x Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz (1 Socket)
  • Controller: HBA355i
  • Proxmox OS disk: 1x Synology SAT5210 920G
  • ZFS pool: 4x Synology SAT5210 920G (single disk, RAIDZ, and mirror all give the same results; rough creation commands below)
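
For reference, the three pool layouts I tested correspond to commands along these lines (the device paths are placeholders, not my actual disk IDs; ashift=12 and lz4 match the pool/dataset properties further down):

# single disk
zpool create -o ashift=12 -O compression=lz4 VM-POOL-ZFS /dev/disk/by-id/DISK1
# mirrored vdevs (e.g. 2x2)
zpool create -o ashift=12 -O compression=lz4 VM-POOL-ZFS mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4
# RAIDZ1 over all four disks
zpool create -o ashift=12 -O compression=lz4 VM-POOL-ZFS raidz /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4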

When I run this fio test on the OS disk:
cd /tmp/benchmark
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4k --size=16G --readwrite=readwrite --ramp_time=4

root@pve0:/tmp/benchmark# fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4k --size=16G --readwrite=readwrite --ramp_time=4
test: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.39
Starting 1 process
test: Laying out IO file (1 file / 16384MiB)
Jobs: 1 (f=1): [M(1)][18.5%][r=80.4MiB/s,w=80.8MiB/s][r=20.6k,w=20.7k IOPS][eta 00m:22sJobs: 1 (f=1): [M(1)][46.2%][r=1408MiB/s,w=1405MiB/s][r=360k,w=360k IOPS][eta 00m:07s]
test: (groupid=0, jobs=1): err= 0: pid=22035: Fri Nov 28 16:21:17 2025
read: IOPS=359k, BW=1402MiB/s (1470MB/s)(2255MiB/1609msec)
slat (nsec): min=726, max=22619, avg=887.42, stdev=149.47
clat (nsec): min=299, max=23408, avg=312.00, stdev=64.47
lat (nsec): min=1035, max=24389, avg=1199.41, stdev=166.07
clat percentiles (nsec):
| 1.00th=[ 306], 5.00th=[ 306], 10.00th=[ 306], 20.00th=[ 310],
| 30.00th=[ 310], 40.00th=[ 310], 50.00th=[ 310], 60.00th=[ 310],
| 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 314], 95.00th=[ 318],
| 99.00th=[ 350], 99.50th=[ 370], 99.90th=[ 486], 99.95th=[ 556],
| 99.99th=[ 3248]
bw ( MiB/s): min= 1394, max= 1410, per=100.00%, avg=1403.87, stdev= 8.62, samples=3
iops : min=356884, max=361028, avg=359391.33, stdev=2204.93, samples=3
write: IOPS=359k, BW=1401MiB/s (1469MB/s)(2254MiB/1609msec); 0 zone resets
slat (nsec): min=700, max=18375, avg=863.21, stdev=143.09
clat (nsec): min=313, max=14695, avg=326.64, stdev=57.46
lat (nsec): min=1024, max=19020, avg=1189.84, stdev=157.53
clat percentiles (nsec):
| 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 322], 20.00th=[ 322],
| 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 326], 60.00th=[ 326],
| 70.00th=[ 326], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 334],
| 99.00th=[ 362], 99.50th=[ 382], 99.90th=[ 494], 99.95th=[ 556],
| 99.99th=[ 3152]
bw ( MiB/s): min= 1389, max= 1414, per=100.00%, avg=1402.06, stdev=12.66, samples=3
iops : min=355586, max=362056, avg=358927.33, stdev=3240.24, samples=3
lat (nsec) : 500=99.91%, 750=0.05%, 1000=0.01%
lat (usec) : 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
cpu : usr=29.48%, sys=70.46%, ctx=7, majf=0, minf=37
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=577372,577053,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=1402MiB/s (1470MB/s), 1402MiB/s-1402MiB/s (1470MB/s-1470MB/s), io=2255MiB (2365MB), run=1609-1609msec
WRITE: bw=1401MiB/s (1469MB/s), 1401MiB/s-1401MiB/s (1469MB/s-1469MB/s), io=2254MiB (2364MB), run=1609-1609msec

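This run uses iodepth=1 and a single job, so it is mostly a latency test. If a deeper queue is more meaningful for comparison, I can also run a variant like this on both the OS disk and the pool (the iodepth/numjobs/runtime values are just my pick, not a recommendation):

fio --name=test-qd32 --ioengine=libaio --direct=1 --randrepeat=1 --readwrite=readwrite --bs=4k \
    --size=16G --iodepth=32 --numjobs=4 --group_reporting --time_based --runtime=60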


When I run the same test on the ZFS pool (only with --size=1G instead of 16G), I get this result regardless of which RAID configuration I use:
root@pve0:/VM-POOL-ZFS# fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4k --size=1G --readwrite=readwrite --ramp_time=4
test: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.39
Starting 1 process
test: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [M(1)][26.1%][r=50.8MiB/s,w=50.7MiB/s][r=13.0k,w=13.0k IOPS][eta 00m:17sJobs: 1 (f=1): [M(1)][25.0%][r=9272KiB/s,w=8996KiB/s][r=2318,w=2249 IOPS][eta 00m:21s] Jobs: 1 (f=1): [M(1)][78.9%][r=8840KiB/s,w=9137KiB/s][r=2210,w=2284 IOPS][eta 00m:12s]
test: (groupid=0, jobs=1): err= 0: pid=23303: Fri Nov 28 16:26:00 2025
read: IOPS=2600, BW=10.2MiB/s (10.6MB/s)(413MiB/40631msec)
slat (nsec): min=1793, max=2901.1k, avg=368456.37, stdev=166038.54
clat (nsec): min=366, max=27940, avg=1718.50, stdev=1231.23
lat (usec): min=2, max=2903, avg=370.17, stdev=166.73
clat percentiles (nsec):
| 1.00th=[ 374], 5.00th=[ 378], 10.00th=[ 394], 20.00th=[ 660],
| 30.00th=[ 692], 40.00th=[ 732], 50.00th=[ 972], 60.00th=[ 2672],
| 70.00th=[ 2768], 80.00th=[ 2864], 90.00th=[ 3024], 95.00th=[ 3280],
| 99.00th=[ 3696], 99.50th=[ 4016], 99.90th=[11712], 99.95th=[13376],
| 99.99th=[15680]
bw ( KiB/s): min= 8424, max=90576, per=95.63%, avg=9946.25, stdev=9091.42, samples=81
iops : min= 2106, max=22644, avg=2486.52, stdev=2272.85, samples=81
write: IOPS=2602, BW=10.2MiB/s (10.7MB/s)(413MiB/40631msec); 0 zone resets
slat (usec): min=2, max=754, avg=11.74, stdev=10.23
clat (nsec): min=373, max=19659, avg=1058.24, stdev=750.67
lat (usec): min=3, max=764, avg=12.80, stdev=10.75
clat percentiles (nsec):
| 1.00th=[ 394], 5.00th=[ 398], 10.00th=[ 406], 20.00th=[ 418],
| 30.00th=[ 430], 40.00th=[ 442], 50.00th=[ 482], 60.00th=[ 1688],
| 70.00th=[ 1704], 80.00th=[ 1736], 90.00th=[ 1784], 95.00th=[ 1816],
| 99.00th=[ 2040], 99.50th=[ 2160], 99.90th=[ 4320], 99.95th=[11584],
| 99.99th=[15680]
bw ( KiB/s): min= 7616, max=90440, per=95.54%, avg=9946.86, stdev=9084.36, samples=81
iops : min= 1904, max=22610, avg=2486.67, stdev=2271.09, samples=81
lat (nsec) : 500=32.91%, 750=14.89%, 1000=4.12%
lat (usec) : 2=24.18%, 4=23.59%, 10=0.20%, 20=0.11%, 50=0.01%
cpu : usr=1.19%, sys=17.23%, ctx=89910, majf=0, minf=37
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=105643,105746,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=10.2MiB/s (10.6MB/s), 10.2MiB/s-10.2MiB/s (10.6MB/s-10.6MB/s), io=413MiB (433MB), run=40631-40631msec
WRITE: bw=10.2MiB/s (10.7MB/s), 10.2MiB/s-10.2MiB/s (10.7MB/s-10.7MB/s), io=413MiB (433MB), run=40631-40631msec


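To rule out the drives themselves, I could also run a read-only test against one of the raw pool members, outside of ZFS (the device path is a placeholder; --readonly prevents any writes):

fio --name=raw-read --ioengine=libaio --direct=1 --readonly --rw=randread --bs=4k \
    --iodepth=32 --time_based --runtime=30 --filename=/dev/disk/by-id/DISK1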

root@pve0:~# zpool get all VM-POOL-ZFS
NAME PROPERTY VALUE SOURCE
VM-POOL-ZFS size 3.48T -
VM-POOL-ZFS capacity 0% -
VM-POOL-ZFS altroot - default
VM-POOL-ZFS health ONLINE -
VM-POOL-ZFS guid 14447939841489202179 -
VM-POOL-ZFS version - default
VM-POOL-ZFS bootfs - default
VM-POOL-ZFS delegation on default
VM-POOL-ZFS autoreplace off default
VM-POOL-ZFS cachefile - default
VM-POOL-ZFS failmode wait default
VM-POOL-ZFS listsnapshots off default
VM-POOL-ZFS autoexpand off default
VM-POOL-ZFS dedupratio 1.00x -
VM-POOL-ZFS free 3.48T -
VM-POOL-ZFS allocated 888K -
VM-POOL-ZFS readonly off -
VM-POOL-ZFS ashift 12 local
VM-POOL-ZFS comment - default
VM-POOL-ZFS expandsize - -
VM-POOL-ZFS freeing 0 -
VM-POOL-ZFS fragmentation 0% -
VM-POOL-ZFS leaked 0 -
VM-POOL-ZFS multihost off default
VM-POOL-ZFS checkpoint - -
VM-POOL-ZFS load_guid 5867072020990024090 -
VM-POOL-ZFS autotrim off default
VM-POOL-ZFS compatibility off default
VM-POOL-ZFS bcloneused 0 -
VM-POOL-ZFS bclonesaved 0 -
VM-POOL-ZFS bcloneratio 1.00x -
VM-POOL-ZFS dedup_table_size 0 -
VM-POOL-ZFS dedup_table_quota auto default
VM-POOL-ZFS last_scrubbed_txg 0 -
VM-POOL-ZFS feature@async_destroy enabled local
VM-POOL-ZFS feature@empty_bpobj enabled local
VM-POOL-ZFS feature@lz4_compress active local
...

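If the vdev layout or per-disk utilisation during the benchmark would help, I can also post the output of:

zpool status -v VM-POOL-ZFS
zpool iostat -v VM-POOL-ZFS 5
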
root@pve0:~# zfs get all VM-POOL-ZFS
NAME PROPERTY VALUE SOURCE
VM-POOL-ZFS type filesystem -
VM-POOL-ZFS creation Fri Nov 28 16:22 2025 -
VM-POOL-ZFS used 645K -
VM-POOL-ZFS available 2.45T -
VM-POOL-ZFS referenced 140K -
VM-POOL-ZFS compressratio 1.00x -
VM-POOL-ZFS mounted yes -
VM-POOL-ZFS quota none default
VM-POOL-ZFS reservation none default
VM-POOL-ZFS recordsize 128K default
VM-POOL-ZFS mountpoint /VM-POOL-ZFS default
VM-POOL-ZFS sharenfs off default
VM-POOL-ZFS checksum on default
VM-POOL-ZFS compression lz4 local
VM-POOL-ZFS atime on default
VM-POOL-ZFS devices on default
VM-POOL-ZFS exec on default
VM-POOL-ZFS setuid on default
VM-POOL-ZFS readonly off default
VM-POOL-ZFS zoned off default
VM-POOL-ZFS snapdir hidden default
VM-POOL-ZFS aclmode discard default
VM-POOL-ZFS aclinherit restricted default
VM-POOL-ZFS createtxg 1 -
VM-POOL-ZFS canmount on default
VM-POOL-ZFS xattr on default
VM-POOL-ZFS copies 1 default
VM-POOL-ZFS version 5 -
VM-POOL-ZFS utf8only off -
VM-POOL-ZFS normalization none -
VM-POOL-ZFS casesensitivity sensitive -
VM-POOL-ZFS vscan off default
VM-POOL-ZFS nbmand off default
VM-POOL-ZFS sharesmb off default
VM-POOL-ZFS refquota none default
VM-POOL-ZFS refreservation none default
VM-POOL-ZFS guid 15949081056026194856 -
VM-POOL-ZFS primarycache all default
VM-POOL-ZFS secondarycache all default
VM-POOL-ZFS usedbysnapshots 0B -
VM-POOL-ZFS usedbydataset 140K -
VM-POOL-ZFS usedbychildren 506K -
VM-POOL-ZFS usedbyrefreservation 0B -
VM-POOL-ZFS logbias latency default
VM-POOL-ZFS objsetid 54 -
VM-POOL-ZFS dedup off default
VM-POOL-ZFS mlslabel none default
VM-POOL-ZFS sync standard default
VM-POOL-ZFS dnodesize legacy default
VM-POOL-ZFS refcompressratio 1.00x -
VM-POOL-ZFS written 140K -
VM-POOL-ZFS logicalused 152K -
VM-POOL-ZFS logicalreferenced 42K -
...

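From the dataset properties above, atime is still on and recordsize is the 128K default, while the fio test writes 4k blocks. As an experiment I could try something like the following, but I am not sure these are the right knobs to turn (hence this post):

zfs set atime=off VM-POOL-ZFS
zfs set recordsize=16K VM-POOL-ZFS   # only affects newly written data, so the test file would need to be recreated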

My ZFS pool performance is very slow compared to the single OS disk.
Is this normal for this setup, or do I have a configuration error somewhere?



Kind Regards,