[SOLVED] ZFS performance on hypervisor vs VM

showiproute

Hello everyone,

I am getting strange disk write benchmark results when comparing Proxmox directly vs. an Ubuntu 20.04 VM.

General setup: 4x non-enterprise SATA drives (Seagate BarraCuda Compute 2TB, 2.5") running as ZFS RAID10.
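
For anyone wondering about the layout: a four-disk ZFS RAID10 is a stripe of two mirrors, created with something along these lines (pool name and device paths below are placeholders, not my actual disks):
Code:
# stripe of two mirrors (RAID10 equivalent); device names are placeholders
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd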

Write tests were done using:
Code:
sync; dd if=/dev/zero of=tempfile bs=1M count=20480; sync
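
For comparison, a roughly equivalent sequential write test with fio would look like this (the parameters here are just an illustrative choice, not what was used for the results below):
Code:
fio --name=seqwrite --filename=tempfile --rw=write --bs=1M --size=20G --ioengine=libaio --iodepth=4 --end_fsync=1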

If I run this test directly on Proxmox I get the following results:

ZFS sync=standard
21474836480 bytes (21 GB, 20 GiB) copied, 12.9134 s, 1.7 GB/s

ZFS sync=disabled
21474836480 bytes (21 GB, 20 GiB) copied, 11.5583 s, 1.9 GB/s
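
(The sync property was switched between runs with something like the following; "rpool" stands in for whatever pool/dataset the test file lives on:)
Code:
zfs set sync=disabled rpool
zfs get sync rpool
zfs set sync=standard rpool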




On my VM I get totally different values:
ZFS sync=standard
21474836480 bytes (21 GB, 20 GiB) copied, 181.414 s, 118 MB/s

ZFS sync=disabled
21474836480 bytes (21 GB, 20 GiB) copied, 22.5636 s, 952 MB/s




Anyone got an explanation for such different values?




EDIT:
Some system specifications:

CPU: AMD Epyc 7252 (8C/16T)
Memory: 128 GB DDR4 ECC RAM
Mainboard: Supermicro H12SSL-CT
Additional storage card: Supermicro AOC-S3008L-L8e (HBA mode)
 
Today I have also verified the read speeds (this time with fio), as my VM backup with Veeam reads at an average of 5 MB/s.
The command for fio would be:
Code:
fio --filename=test --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test

Results:
Hypervisor
Code:
fio-3.12
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [r(1)][98.0%][r=340MiB/s][r=87.1k IOPS][eta 00m:01s]
test: (groupid=0, jobs=1): err= 0: pid=18714: Mon Apr 12 07:26:35 2021
  read: IOPS=55.1k, BW=215MiB/s (226MB/s)(10.0GiB/47547msec)
    clat (nsec): min=1650, max=2391.5k, avg=17601.52, stdev=11656.24
     lat (nsec): min=1700, max=2391.5k, avg=17647.33, stdev=11656.91
    clat percentiles (usec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    5], 40.00th=[   23], 50.00th=[   24], 60.00th=[   25],
     | 70.00th=[   25], 80.00th=[   26], 90.00th=[   28], 95.00th=[   29],
     | 99.00th=[   37], 99.50th=[   42], 99.90th=[   62], 99.95th=[  172],
     | 99.99th=[  206]
   bw (  KiB/s): min=153448, max=528816, per=99.77%, avg=220036.33, stdev=41538.66, samples=95
   iops        : min=38362, max=132204, avg=55009.14, stdev=10384.71, samples=95
  lat (usec)   : 2=0.17%, 4=28.46%, 10=7.48%, 20=0.37%, 50=63.35%
  lat (usec)   : 100=0.10%, 250=0.07%, 500=0.01%, 1000=0.01%
  lat (msec)   : 4=0.01%
  cpu          : usr=3.85%, sys=95.51%, ctx=2235, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=215MiB/s (226MB/s), 215MiB/s-215MiB/s (226MB/s-226MB/s), io=10.0GiB (10.7GB), run=47547-47547msec


VM
Code:
fio-3.16
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=54.0MiB/s][r=13.8k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=780477: Mon Apr 12 07:41:10 2021
  read: IOPS=14.1k, BW=54.0MiB/s (57.7MB/s)(10.0GiB/186225msec)
    clat (usec): min=32, max=81668, avg=70.12, stdev=107.52
     lat (usec): min=32, max=81668, avg=70.21, stdev=107.52
    clat percentiles (usec):
     |  1.00th=[   54],  5.00th=[   60], 10.00th=[   62], 20.00th=[   63],
     | 30.00th=[   64], 40.00th=[   66], 50.00th=[   67], 60.00th=[   69],
     | 70.00th=[   71], 80.00th=[   74], 90.00th=[   79], 95.00th=[   86],
     | 99.00th=[  115], 99.50th=[  153], 99.90th=[  388], 99.95th=[  506],
     | 99.99th=[ 1565]
   bw (  KiB/s): min=42320, max=63256, per=100.00%, avg=56303.81, stdev=3137.59, samples=372
   iops        : min=10580, max=15814, avg=14075.96, stdev=784.41, samples=372
  lat (usec)   : 50=0.14%, 100=98.04%, 250=1.59%, 500=0.18%, 750=0.02%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%
  cpu          : usr=5.06%, sys=17.14%, ctx=2623332, majf=0, minf=13
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=54.0MiB/s (57.7MB/s), 54.0MiB/s-54.0MiB/s (57.7MB/s-57.7MB/s), io=10.0GiB (10.7GB), run=186225-186225msec

Disk stats (read/write):
  sdb: ios=2635878/116, merge=39008/65, ticks=219425/4368, in_queue=59496, util=100.00%
 
