Hey everyone, I just installed Proxmox and I'm experiencing pretty slow disk performance, to the point where installing Ubuntu on a VM took 1:30. Not sure if that's expected with the disk I have (a Crucial P2 CT500P2SSD8 SSD).
Code:
root@pve:~# pveperf
CPU BOGOMIPS: 8908.80
REGEX/SECOND: 2561482
HD SIZE: 93.93 GB (/dev/mapper/pve-root)
BUFFERED READS: 122.53 MB/sec
AVERAGE SEEK TIME: 0.09 ms
FSYNCS/SECOND: 43.26
DNS EXT: 65.81 ms
DNS INT: 0.87 ms (lan)
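As far as I understand, pveperf's FSYNCS/SECOND number comes from a write-then-fsync loop against a file, so a fio job that issues fdatasync after every write should roughly approximate it. Here's a sketch I put together from the fio docs (untested on this box; the job name and scratch file path are just placeholders, and it deliberately targets a file on pve-root rather than the raw device):
Code:
# Rough approximation of pveperf's fsync loop (untested sketch):
# 4k sequential writes, fdatasync after every single write,
# against a scratch file on the pve-root filesystem.
fio --name=fsync_test --ioengine=sync --rw=write --bs=4k --fdatasync=1 \
    --size=128M --filename=/root/fsync_test.img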
A simple fio benchmark:
Code:
root@pve:~# fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/nvme0n1p3
seq_read: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.25
Starting 1 process
Jobs: 1 (f=0): [f(1)][100.0%][r=4KiB/s][r=1 IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=37683: Sun Jul 17 23:53:20 2022
read: IOPS=2866, BW=11.2MiB/s (11.7MB/s)(760MiB/67886msec)
slat (usec): min=2, max=261, avg=21.78, stdev=32.09
clat (nsec): min=445, max=12588M, avg=320445.83, stdev=31074362.26
lat (usec): min=16, max=12588k, avg=342.56, stdev=31074.24
clat percentiles (nsec):
| 1.00th=[ 540], 5.00th=[ 15424], 10.00th=[ 15808],
| 20.00th=[ 16064], 30.00th=[ 16320], 40.00th=[ 16512],
| 50.00th=[ 17024], 60.00th=[ 17536], 70.00th=[ 55552],
| 80.00th=[ 93696], 90.00th=[ 109056], 95.00th=[ 536576],
| 99.00th=[ 602112], 99.50th=[ 675840], 99.90th=[ 5341184],
| 99.95th=[ 8716288], 99.99th=[742391808]
bw ( KiB/s): min= 8, max=68248, per=100.00%, avg=17894.25, stdev=20356.58, samples=87
iops : min= 2, max=17062, avg=4473.56, stdev=5089.18, samples=87
lat (nsec) : 500=0.12%, 750=1.55%, 1000=0.01%
lat (usec) : 2=0.01%, 4=0.20%, 10=0.01%, 20=64.63%, 50=2.71%
lat (usec) : 100=16.26%, 250=5.67%, 500=1.39%, 750=7.04%, 1000=0.04%
lat (msec) : 2=0.03%, 4=0.11%, 10=0.20%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2000=0.01%, >=2000=0.01%
cpu : usr=3.82%, sys=8.97%, ctx=188422, majf=0, minf=18
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=194601,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=11.2MiB/s (11.7MB/s), 11.2MiB/s-11.2MiB/s (11.7MB/s-11.7MB/s), io=760MiB (797MB), run=67886-67886msec
Disk stats (read/write):
nvme0n1: ios=194620/3313, merge=0/659, ticks=49078/1527957, in_queue=1613395, util=99.47%
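(I realize --sync=1 presumably does nothing for a read job, so the numbers above are really just 4k reads at queue depth 1.) To see actual sequential throughput I'd probably need bigger blocks and some queue depth; something like the following, based on the fio docs (untested; --readonly is added as a safety guard since it targets the raw partition):
Code:
# Sequential read bandwidth sketch: 1M blocks, queue depth 8,
# --readonly as a guard because this points at the raw partition.
fio --name=seq_read_1m --ioengine=libaio --direct=1 --rw=read --bs=1M \
    --iodepth=8 --runtime=30 --time_based --readonly --filename=/dev/nvme0n1p3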
Trying to max out the disk with a command I found in Google Cloud Platform's benchmarking documentation:
Code:
root@pve:~# fio --time_based --name=benchmark --size=100G --runtime=30 --filename=/dev/nvme0n1p3 --ioengine=libaio --randrepeat=0 --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=4 --rw=randread --blocksize=4k --group_reporting
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
...
fio-3.25
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=277MiB/s][r=70.0k IOPS][eta 00m:00s]
benchmark: (groupid=0, jobs=4): err= 0: pid=2166: Mon Jul 18 00:05:21 2022
read: IOPS=71.3k, BW=278MiB/s (292MB/s)(8358MiB/30028msec)
slat (nsec): min=1993, max=730558, avg=3920.53, stdev=6542.33
clat (usec): min=63, max=75268, avg=7179.77, stdev=7142.57
lat (usec): min=66, max=75271, avg=7183.78, stdev=7142.61
clat percentiles (usec):
| 1.00th=[ 2212], 5.00th=[ 3163], 10.00th=[ 3490], 20.00th=[ 4113],
| 30.00th=[ 4490], 40.00th=[ 4883], 50.00th=[ 5145], 60.00th=[ 5473],
| 70.00th=[ 5866], 80.00th=[ 6587], 90.00th=[12911], 95.00th=[22938],
| 99.00th=[41681], 99.50th=[49021], 99.90th=[60031], 99.95th=[62653],
| 99.99th=[66323]
bw ( KiB/s): min=128280, max=314040, per=100.00%, avg=285210.40, stdev=7946.70, samples=240
iops : min=32070, max=78510, avg=71302.67, stdev=1986.68, samples=240
lat (usec) : 100=0.01%, 250=0.03%, 500=0.05%, 750=0.09%, 1000=0.09%
lat (msec) : 2=0.55%, 4=17.05%, 10=69.73%, 20=6.34%, 50=5.64%
lat (msec) : 100=0.43%
cpu : usr=4.21%, sys=13.55%, ctx=1885478, majf=0, minf=570
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=2139588,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
READ: bw=278MiB/s (292MB/s), 278MiB/s-278MiB/s (292MB/s-292MB/s), io=8358MiB (8764MB), run=30028-30028msec
Disk stats (read/write):
nvme0n1: ios=2130347/333, merge=0/204, ticks=15279714/6156, in_queue=15288276, util=99.73%
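Before replacing anything I also want to rule out the volatile write cache setting, since I've read that consumer drives without power-loss protection have to flush the cache on every fsync. Assuming nvme-cli is installed, this should show the setting (feature 0x06 is Volatile Write Cache per the NVMe spec; command taken from the nvme-cli docs, not yet run here):
Code:
# Query the Volatile Write Cache feature (0x06) in human-readable form.
nvme get-feature /dev/nvme0n1 -f 0x06 -H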
My FSYNCS/SECOND value seems pretty low, since from what I've read it should be > 200. Is this performance expected for this particular disk? If so, what disk would you guys recommend?