Poor ZFS performance on Supermicro vs. random ASUS board

Integral

Hello all,

I have a ZFS problem: my production server has much lower performance than my testing server, and I have been trying to find the cause for the last two days. I'll provide anything you need in the form of logs. Can you help me find the root cause?

Both servers:
  • smartctl -t long /dev/sdX => all report OK
  • All partitions are aligned
  • ZFS compression=on, sync=standard (data integrity is important)
  • Connected via 2 NICs with bonding enabled (LACP, layer3+4)
  • Disks are all WD Red NAS 4x4TB for the spinning drives, plus 1x Samsung 850EVO Pro
  • ZFS is used as the root filesystem (mounted on /)
  • All SATA links report 6 Gb/s and everything is in AHCI mode (a verification sketch follows the differences list below)
Differences:
  • Motherboard (the production server is an enterprise-grade Supermicro with IPMI, the test server is just some random ASUS board)
  • Processor (the production server has 2 physical CPUs)
  • RAM (the production server has DDR4 with ECC, the test server does not)
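
For reference, here is a rough sketch of how the "both servers" checklist above can be verified from the shell; the device and interface names (/dev/sda, bond0) are examples only, not the actual layout of either server.

Code:
# SMART self-test result and health summary
smartctl -a /dev/sda | grep -A5 'Self-test'
# negotiated SATA link speed as reported by the kernel
dmesg | grep -i 'SATA link up'
# partition alignment check (per partition number)
parted /dev/sda align-check optimal 2
# effective ZFS compression and sync settings
zfs get compression,sync rpool
# LACP bonding status
cat /proc/net/bonding/bond0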

Server 1: (Test server)

Code:
[root@px0003:~]# pveversion -v
proxmox-ve: 5.0-15 (running kernel: 4.10.15-1-pve)
pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
pve-kernel-4.10.15-1-pve: 4.10.15-15
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-10
qemu-server: 5.0-12
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-5
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-8
pve-qemu-kvm: 2.9.0-2
pve-container: 2.0-14
pve-firewall: 3.0-1
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
Code:
[root@px0003:~]# pveperf

CPU BOGOMIPS:      67200.80
REGEX/SECOND:      2748754
HD SIZE:           7099.24 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     750.84
DNS EXT:           35.12 ms
DNS INT:           38.78 ms (xxxxxxxx)
Code:
[root@px0003:~]# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct  8 00:24:52 2017
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
        logs
          sdc1      ONLINE       0     0     0
        cache
          sdc2      ONLINE       0     0     0

Server 2: (Production server)
Code:
[root@px0001:~]# pveversion -v
proxmox-ve: 4.4-96 (running kernel: 4.4.83-1-pve)
pve-manager: 4.4-18 (running version: 4.4-18/ef2610e8)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-53
qemu-server: 4.0-113
pve-firmware: 1.1-11
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.0-5~pve4
pve-container: 1.0-101
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80

Code:
[root@px0001:~]# pveperf
CPU BOGOMIPS:      153619.20
REGEX/SECOND:      2396879
HD SIZE:           7099.24 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     197.00
DNS EXT:           7.99 ms
DNS INT:           9.22 ms (xxxxxxxx)

Code:
[root@px0001:~]# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 1h29m with 0 errors on Tue Oct 24 12:50:40 2017
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          sde1      ONLINE       0     0     0
        cache
          sde2      ONLINE       0     0     0

errors: No known data errors
 
RAM test: ECC vs. non-ECC
# dd if=/dev/zero of=/dev/null bs=1M count=1000

ECC
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.111525 s, 9.4 GB/s

no-ECC
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.0525348 s, 20.0 GB/s

Try to compare the ZFS pool settings on both computers.


ZFS compression=on - I don't remember what the default compression method is. Try setting it to lz4.
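
As a sketch of how to check and change this on both machines (rpool is the pool name used in this thread; note that changing compression only affects data written afterwards):

Code:
# show which compression algorithm is actually in effect
zfs get compression rpool
# on the ZoL 0.6.x releases used here, compression=on typically means lzjb;
# lz4 is usually both faster and more effective
zfs set compression=lz4 rpool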
 
It all seems the same to me.

Prod:
Code:
[root@px0001:~]# dd if=/dev/zero of=/dev/null bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 5.71882 s, 18.3 GB/s

Code:
[root@px0001:~]# zpool get all
NAME        PROPERTY                    VALUE                       SOURCE
rpool       size                        7.25T                       -
rpool       capacity                    0%                          -
rpool       altroot                     -                           default
rpool       health                      ONLINE                      -
rpool       guid                        965763982087456207          default
rpool       version                     -                           default
rpool       bootfs                      rpool/ROOT/pve-1            local
rpool       delegation                  on                          default
rpool       autoreplace                 off                         default
rpool       cachefile                   -                           default
rpool       failmode                    wait                        default
rpool       listsnapshots               off                         default
rpool       autoexpand                  off                         default
rpool       dedupditto                  0                           default
rpool       dedupratio                  1.00x                       -
rpool       free                        7.19T                       -
rpool       allocated                   58.3G                       -
rpool       readonly                    off                         -
rpool       ashift                      12                          local
rpool       comment                     -                           default
rpool       expandsize                  -                           -
rpool       freeing                     0                           default
rpool       fragmentation               1%                          -
rpool       leaked                      0                           default
rpool       feature@async_destroy       enabled                     local
rpool       feature@empty_bpobj         active                      local
rpool       feature@lz4_compress        active                      local
rpool       feature@spacemap_histogram  active                      local
rpool       feature@enabled_txg         active                      local
rpool       feature@hole_birth          active                      local
rpool       feature@extensible_dataset  enabled                     local
rpool       feature@embedded_data       active                      local
rpool       feature@bookmarks           enabled                     local
rpool       feature@filesystem_limits   enabled                     local
rpool       feature@large_blocks        enabled                     local

Test:
Code:
[root@px0003:~]# dd if=/dev/zero of=/dev/null bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 6.24394 s, 16.8 GB/s

Code:
[root@px0003:~]# zpool get all
NAME   PROPERTY                    VALUE                       SOURCE
rpool  size                        7.25T                       -
rpool  capacity                    0%                          -
rpool  altroot                     -                           default
rpool  health                      ONLINE                      -
rpool  guid                        12407455624234699673        default
rpool  version                     -                           default
rpool  bootfs                      rpool/ROOT/pve-1            local
rpool  delegation                  on                          default
rpool  autoreplace                 off                         default
rpool  cachefile                   -                           default
rpool  failmode                    wait                        default
rpool  listsnapshots               off                         default
rpool  autoexpand                  off                         default
rpool  dedupditto                  0                           default
rpool  dedupratio                  1.00x                       -
rpool  free                        7.21T                       -
rpool  allocated                   3.17G                       -
rpool  readonly                    off                         -
rpool  ashift                      12                          local
rpool  comment                     -                           default
rpool  expandsize                  -                           -
rpool  freeing                     0                           default
rpool  fragmentation               0%                          -
rpool  leaked                      0                           default
rpool  feature@async_destroy       enabled                     local
rpool  feature@empty_bpobj         active                      local
rpool  feature@lz4_compress        active                      local
rpool  feature@spacemap_histogram  active                      local
rpool  feature@enabled_txg         active                      local
rpool  feature@hole_birth          active                      local
rpool  feature@extensible_dataset  enabled                     local
rpool  feature@embedded_data       active                      local
rpool  feature@bookmarks           enabled                     local
rpool  feature@filesystem_limits   enabled                     local
rpool  feature@large_blocks        enabled                     local
 
Hello, the thing is, I'm trying to determine why the clearly better server has such a gulf in performance. I'm trying to work out whether I forgot something, some setting that should be enabled/disabled on the Supermicro compared to the test server. Disks that are 5x slower for no apparent reason is weird at best.

And we didn't try an enterprise SSD; the thing is, it works perfectly fine on some random ASUS board.

I'll try to escalate the request higher up.
 
Hi,
I would not exclude the SSD (do you use the SSD as cache?), because without TRIM you can get very poor performance, and maybe the SSD in your ASUS system does not have the same I/O behaviour?!

BTW, what I learned with Supermicro is to look out for BIOS updates...

Udo
 
I'm using the SSD only as a log device, no cache. I've dedicated 16 GB of RAM to the ARC on each server by editing /etc/modprobe/zfs.conf. Also, the server was not under any load; I've just installed it and found out that things that take my notebook about 15 seconds take about 30 minutes on the Supermicro server. All hardware is brand new.
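
For context, capping the ARC at 16 GB is normally done with a module option along these lines (a sketch; on PVE the file usually lives at /etc/modprobe.d/zfs.conf and the value is given in bytes):

Code:
# /etc/modprobe.d/zfs.conf -- limit the ZFS ARC to 16 GiB
options zfs zfs_arc_max=17179869184
# make the option take effect at the next boot
update-initramfs -u -k all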
 
Look at the disk response times with iostat from sysstat, especially at the r_await value. Bad disks will show up there. It is always best to buy only enterprise-grade hardware (disks and SSDs), and I do not think that these SATA disks have enterprise-grade firmware.
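
Something along these lines should make a slow disk stand out (a sketch; iostat comes from the sysstat package, and r_await/w_await are in milliseconds):

Code:
# extended device statistics, 5-second interval, 12 samples
iostat -x 5 12
# device names can be appended to limit the output, e.g.
iostat -x 5 12 sda sdb sdc sdd sde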

One bold move would be to swap the SSD and try again. Is this an EVO or a PRO? The description "EVO Pro" does not exist. Either way, the SSD is not suited for an SLOG workload, yet fine for L2ARC, although I would not recommend using an L2ARC with only 16 GB of RAM.
 
Integral, have you solved the problem? I have the same problem: Supermicro MB, WD Red NAS 4x1TB 5400-7200 rpm in ZFS RAID10, 32 GB ECC RAM with 16 GB dedicated to ARC, and no disk for log or cache... the performance is TOTALLY POOR and I am helpless.

root@pve-klenova:~# pveperf
CPU BOGOMIPS: 38401.52
REGEX/SECOND: 443946
HD SIZE: 680.44 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 29.09
DNS EXT: 142.89 ms
DNS INT: 29.85 ms (elson.sk)

The latest BIOS is installed there...
 
I know many people might disagree, but run this to get your sync performance up. As long as you have a UPS that can shut down your machine and you have backups, then you should be good:

Code:
zfs set sync=disabled rpool

Then reboot and test it out.
 
What a bad piece of advice!!! If the brakes were not working properly on your car so that your speed was limited, would your advice then be to disconnect the brakes?
 
I knew I would get hanged for saying that, but I'm speaking from experience with setups that have no SSD, only 7200 rpm drives in RAID 1, and so far so good.
 
I am pretty sure this is a ZoL-specific problem! Try using the job file below as input when running fio on your system:
Code:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
# IOMeter defines the server loads as the following:
# iodepth=1     Linear
# iodepth=4     Very Light
# iodepth=8     Light
# iodepth=64    Moderate
# iodepth=256   Heavy
iodepth=64

The above (tailored to Solaris requirements, hence ioengine=solarisaio in the output) gives the following output on my OmniOS storage server, also on a Supermicro motherboard, in a RAID10 using 4x1TB WD Red.

iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=solarisaio, iodepth=64
fio-2.12
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [16727KB/4029KB/0KB /s] [3935/959/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=19252: Sat Dec 9 22:53:54 2017
Description : [Emulation of Intel IOmeter File Server Access Pattern]
read : io=3276.9MB, bw=13583KB/s, iops=2205, runt=247027msec
slat (usec): min=0, max=6523, avg= 4.16, stdev=11.41
clat (usec): min=0, max=4454.8K, avg=8042.85, stdev=92616.25
lat (usec): min=7, max=4454.8K, avg=8047.00, stdev=92616.29
clat percentiles (usec):
| 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7],
| 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11],
| 70.00th=[ 13], 80.00th=[ 22], 90.00th=[ 83], 95.00th=[ 540],
| 99.00th=[160768], 99.50th=[481280], 99.90th=[1482752], 99.95th=[1892352],
| 99.99th=[3063808]
bw (KB /s): min= 339, max=50609, per=100.00%, avg=13592.53, stdev=7248.03
write: io=838868KB, bw=3395.9KB/s, iops=553, runt=247027msec
slat (usec): min=1, max=8536, avg= 4.49, stdev=23.69
clat (usec): min=1, max=4354.5K, avg=83516.31, stdev=99088.02
lat (usec): min=12, max=4354.5K, avg=83520.80, stdev=99088.06
clat percentiles (usec):
| 1.00th=[ 13], 5.00th=[ 19], 10.00th=[ 63], 20.00th=[51456],
| 30.00th=[59136], 40.00th=[66048], 50.00th=[78336], 60.00th=[92672],
| 70.00th=[100864], 80.00th=[114176], 90.00th=[130560], 95.00th=[154624],
| 99.00th=[272384], 99.50th=[552960], 99.90th=[1499136], 99.95th=[1875968],
| 99.99th=[2834432]
bw (KB /s): min= 40, max=13327, per=100.00%, avg=3396.90, stdev=1801.88
lat (usec) : 2=0.53%, 4=0.25%, 10=34.76%, 20=28.69%, 50=5.60%
lat (usec) : 100=7.51%, 250=0.79%, 500=0.40%, 750=1.48%, 1000=0.18%
lat (msec) : 2=0.23%, 4=0.13%, 10=0.32%, 20=0.45%, 50=0.95%
lat (msec) : 100=10.61%, 250=6.27%, 500=0.37%, 750=0.16%, 1000=0.12%
lat (msec) : 2000=0.17%, >=2000=0.04%
cpu : usr=102.25%, sys=3.57%, ctx=910871, majf=0, minf=0
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=544745/w=136798/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: io=3276.9MB, aggrb=13583KB/s, minb=13583KB/s, maxb=13583KB/s, mint=247027msec, maxt=247027msec
WRITE: io=838867KB, aggrb=3395KB/s, minb=3395KB/s, maxb=3395KB/s, mint=247027msec, maxt=247027msec
 
OK, I made a file named testdisk, filled it with your code, and after that I ran fio:

Code:
root@pve-klenova:~# fio testdisk
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.16
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
fio: looks like your file system does not support direct=1/buffered=0
fio: destination does not support O_DIRECT
fio: pid=27475, err=22/file:filesetup.c:623, func=open(iometer.0.0), error=Invalid argument


Run status group 0 (all jobs):
 
OK, I ran it inside the VM:

Code:
root@merkur:~# fio testdisk
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [1140KB/268KB/0KB /s] [279/76/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=4724: Sun Dec 10 01:00:40 2017
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3274.5MB, bw=2870.8KB/s, iops=471, runt=1167873msec
    slat (usec): min=16, max=812823, avg=54.85, stdev=1953.88
    clat (usec): min=2, max=2274.1K, avg=3106.55, stdev=20270.87
     lat (usec): min=100, max=2275.9K, avg=3163.21, stdev=20364.06
    clat percentiles (usec):
     |  1.00th=[  133],  5.00th=[  177], 10.00th=[  215], 20.00th=[  306],
     | 30.00th=[  516], 40.00th=[  908], 50.00th=[ 1336], 60.00th=[ 1784],
     | 70.00th=[ 2224], 80.00th=[ 2768], 90.00th=[ 3888], 95.00th=[ 6560],
     | 99.00th=[42752], 99.50th=[64256], 99.90th=[164864], 99.95th=[216064],
     | 99.99th=[440320]
    bw (KB  /s): min=    0, max=25653, per=100.00%, avg=3207.36, stdev=3326.78
  write: io=841688KB, bw=737998B/s, iops=117, runt=1167873msec
    slat (usec): min=19, max=944591, avg=241.95, stdev=8288.01
    clat (msec): min=3, max=4950, avg=530.08, stdev=642.13
     lat (msec): min=3, max=4950, avg=530.33, stdev=642.11
    clat percentiles (msec):
     |  1.00th=[   20],  5.00th=[   49], 10.00th=[   73], 20.00th=[  113],
     | 30.00th=[  153], 40.00th=[  200], 50.00th=[  269], 60.00th=[  375],
     | 70.00th=[  553], 80.00th=[  832], 90.00th=[ 1319], 95.00th=[ 1926],
     | 99.00th=[ 3064], 99.50th=[ 3359], 99.90th=[ 4015], 99.95th=[ 4228],
     | 99.99th=[ 4883]
    bw (KB  /s): min=    0, max= 6199, per=100.00%, avg=803.21, stdev=842.35
    lat (usec) : 4=0.01%, 10=0.04%, 20=0.01%, 50=0.01%, 100=0.04%
    lat (usec) : 250=11.61%, 500=11.88%, 750=5.44%, 1000=4.76%
    lat (msec) : 2=18.10%, 4=20.51%, 10=4.76%, 20=1.25%, 50=2.06%
    lat (msec) : 100=2.62%, 250=6.50%, 500=3.92%, 750=1.90%, 1000=1.37%
    lat (msec) : 2000=2.30%, >=2000=0.94%
  cpu          : usr=1.01%, sys=3.40%, ctx=187348, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3274.5MB, aggrb=2870KB/s, minb=2870KB/s, maxb=2870KB/s, mint=1167873msec, maxt=1167873msec
  WRITE: io=841688KB, aggrb=720KB/s, minb=720KB/s, maxb=720KB/s, mint=1167873msec, maxt=1167873msec

Disk stats (read/write):
  sda: ios=550273/138095, merge=8/358, ticks=1178532/72768524, in_queue=74005392, util=100.00%

It seems to be roughly equal to yours; is that OK in your opinion? What does pveperf give you on the PVE host?
 
Rerun this test with direct=0 (ZFS does not support direct I/O).
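
In practice that just means flipping one line in the job file posted earlier, roughly like this (only the [iometer] section is shown; everything else stays the same):

Code:
[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
; buffered I/O - ZoL 0.6.x has no O_DIRECT support
direct=0
size=4g
ioengine=libaio
iodepth=64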
 
The results you get inside the VM are of no use when the goal is to measure the disk system on the host.
 
