Hi,
while working in a VM and installing some software, I noticed that disk writes are slower than they used to be when I ran an OS bare-metal on the same hardware, without Proxmox in between. After taking a deeper look with
zpool iostat 2
I saw that the write throughput never exceeds 24M (~24 MB/s), no matter what my VMs are doing.

My pool consists of a single (no redundancy needed) Samsung 750 EVO SATA SSD:
Code:
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:10:15 with 0 errors on Sun Mar  8 00:34:16 2020
config:

        NAME                                                   STATE     READ WRITE CKSUM
        rpool                                                  ONLINE       0     0     0
          ata-Samsung_SSD_750_EVO_500GB_S36SNWAH778789E-part3  ONLINE       0     0     0

errors: No known data errors
Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   464G   235G   229G        -         -    46%    50%  1.00x  ONLINE  -
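If per-device numbers or pool settings would help, I can also capture iostat per vdev while the VMs are writing, and check the pool's ashift (assuming ashift is exposed as a pool property on this ZFS version):
Code:
zpool iostat -v rpool 2
zpool get ashift rpool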
A simple benchmark confirms that (even sequential) write speeds and IOPS are really bad:
Code:
# zfs create -sV 100G -o compression=off rpool/test # Ensure that compression does not affect my test results
# fio --filename=/dev/zvol/rpool/test --sync=1 --rw=write --bs=1M --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=4
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [f(1)][100.0%][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=11262: Sat Apr 4 18:59:19 2020
write: IOPS=14, BW=14.0MiB/s (14.7MB/s)(10.0GiB/729499msec); 0 zone resets
clat (msec): min=4, max=874, avg=71.19, stdev=63.00
lat (msec): min=4, max=874, avg=71.23, stdev=63.00
clat percentiles (msec):
| 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 27], 20.00th=[ 33],
| 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 51], 60.00th=[ 61],
| 70.00th=[ 79], 80.00th=[ 108], 90.00th=[ 142], 95.00th=[ 194],
| 99.00th=[ 313], 99.50th=[ 368], 99.90th=[ 550], 99.95th=[ 592],
| 99.99th=[ 701]
bw ( KiB/s): min= 2043, max=153600, per=100.00%, avg=14392.80, stdev=13525.47, samples=1456
iops : min= 1, max= 150, avg=13.93, stdev=13.22, samples=1456
lat (msec) : 10=6.34%, 20=1.63%, 50=41.57%, 100=27.92%, 250=20.35%
lat (msec) : 500=2.02%, 750=0.16%, 1000=0.01%
cpu : usr=0.06%, sys=1.96%, ctx=355011, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,10240,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=4
Run status group 0 (all jobs):
WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=10.0GiB (10.7GB), run=729499-729499msec
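In case --sync=1 is the dominating factor here, I could rerun the same job asynchronously with libaio and direct I/O, roughly like this (test-nosync is just a job label I made up):
Code:
fio --filename=/dev/zvol/rpool/test --rw=write --bs=1M --numjobs=1 --ioengine=libaio --direct=1 --iodepth=4 --group_reporting --name=test-nosync --filesize=10G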
I know this SSD isn't the newest one and probably never had the greatest IOPS, but I'd still expect much higher speeds than this.
Is there something wrong with my benchmark methodology or configuration? Or is it possible that the SSD's performance has degraded this extremely over time?
Unfortunately, I do not have an up-to-date bare-metal test result for comparison and cannot create one right now.
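I can, however, pull the drive's SMART data (wear levelling / total LBAs written attributes) if that helps judge whether the SSD itself is worn out; I'm assuming the whole-disk by-id path is simply the partition path from above without the -part3 suffix:
Code:
smartctl -a /dev/disk/by-id/ata-Samsung_SSD_750_EVO_500GB_S36SNWAH778789E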