Debian guest VM: very slow CPU

pgro

Hi everyone,

I need help with a VM running under Proxmox. I get very low CPU stress-test results inside the VM compared to the Proxmox hypervisor itself. Is there anything I can do?

Thank you
 
Set KVM hardware virtualization back to Yes in the Options of the VM.
Can you share the VM configuration with the command qm config VMID (replace VMID with the number of your VM)?

EDIT: I misunderstood the question and did not realize that the virtual disk I/O was slow and not the CPU.
 
Set KVM hardware virtualization back to Yes in the Options of the VM.
Can you share the VM configuration with the command qm config VMID (replace VMID with the number of your VM)?
It's already enabled. The CPU is OK; I tested it with the 7z benchmark and the results are fine. BUT the disks are not :(


Command executed :
Code:
fio --directory=/tmp/test --ioengine=psync --name fio_test_file --direct=1 --rw=randwrite --bs=16k --size=1G --numjobs=16 --time_based --runtime=180 --group_reporting --norandommap

Results :

VM Guest-1 with Kernel 3.6.6
Code:
Jobs: 16 (f=16): [w(16)][100.0%][r=0KiB/s,w=19.9MiB/s][r=0,w=1272 IOPS][eta 00m:00s]
fio_test_file: (groupid=0, jobs=16): err= 0: pid=3893: Mon Feb 20 23:04:44 2023
  write: IOPS=1324, BW=20.8MiB/s (21.7MB/s)(3724MiB/180014msec)
    clat (usec): min=98, max=614055, avg=12071.11, stdev=15216.13
     lat (usec): min=102, max=614059, avg=12074.69, stdev=15216.08
    clat percentiles (usec):
     |  1.00th=[  225],  5.00th=[  494], 10.00th=[ 2768], 20.00th=[ 7776],
     | 30.00th=[ 8512], 40.00th=[ 9408], 50.00th=[10048], 60.00th=[11584],
     | 70.00th=[12736], 80.00th=[14016], 90.00th=[17280], 95.00th=[23168],
     | 99.00th=[55040], 99.50th=[102912], 99.90th=[232448], 99.95th=[248832],
     | 99.99th=[382976]
    lat (usec) : 100=0.01%, 250=1.50%, 500=3.55%, 750=1.11%, 1000=0.70%
    lat (msec) : 2=1.98%, 4=2.67%, 10=37.26%, 20=44.17%, 50=5.86%
    lat (msec) : 100=0.68%, 250=0.47%, 500=0.04%, 750=0.01%
  cpu          : usr=0.18%, sys=0.64%, ctx=239275, majf=0, minf=540
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,238359,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=20.8MiB/s (21.7MB/s), 20.8MiB/s-20.8MiB/s (21.7MB/s-21.7MB/s), io=3724MiB (3905MB), run=180014-180014msec

VM-Guest-2 with Kernel 3.6.6

Code:
Jobs: 1 (f=1): [_(15),w(1)][6.3%][r=0KiB/s,w=15.2MiB/s][r=0,w=968 IOPS][eta 44m:59s]
fio_test_file: (groupid=0, jobs=16): err= 0: pid=9693: Mon Feb 20 23:14:11 2023
  write: IOPS=1009, BW=15.8MiB/s (16.6MB/s)(2840MiB/180050msec)
    clat (usec): min=129, max=125944k, avg=15836.40, stdev=295475.54
     lat (usec): min=131, max=125944k, avg=15838.90, stdev=295475.53
    clat percentiles (usec):
     |  1.00th=[  884],  5.00th=[ 7968], 10.00th=[ 9536], 20.00th=[11200],
     | 30.00th=[12480], 40.00th=[13632], 50.00th=[14656], 60.00th=[15808],
     | 70.00th=[17024], 80.00th=[18560], 90.00th=[20864], 95.00th=[22912],
     | 99.00th=[29056], 99.50th=[34048], 99.90th=[116224], 99.95th=[175104],
     | 99.99th=[218112]
    lat (usec) : 250=0.06%, 500=0.12%, 750=0.52%, 1000=0.47%
    lat (msec) : 2=0.39%, 4=0.63%, 10=10.03%, 20=74.72%, 50=12.85%
    lat (msec) : 100=0.08%, 250=0.13%, >=2000=0.01%
  cpu          : usr=0.08%, sys=0.31%, ctx=181991, majf=0, minf=554
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,181753,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=15.8MiB/s (16.6MB/s), 15.8MiB/s-15.8MiB/s (16.6MB/s-16.6MB/s), io=2840MiB (2978MB), run=180050-180050msec

Disk stats (read/write):
  vda: ios=10/184350, merge=0/1531, ticks=9264/3289356, in_queue=21383268, util=100.00%

VM Guest-3 with Kernel 5.10.0-20
Code:
Jobs: 16 (f=16): [w(16)][100.0%][w=14.2MiB/s][w=909 IOPS][eta 00m:00s]
fio_test_file: (groupid=0, jobs=16): err= 0: pid=5185: Mon Feb 20 17:07:48 2023
  write: IOPS=1164, BW=18.2MiB/s (19.1MB/s)(3274MiB/180019msec); 0 zone resets
    clat (usec): min=793, max=288320, avg=13738.06, stdev=7717.48
     lat (usec): min=794, max=288321, avg=13739.26, stdev=7717.50
    clat percentiles (msec):
     |  1.00th=[    5],  5.00th=[    8], 10.00th=[   10], 20.00th=[   11],
     | 30.00th=[   12], 40.00th=[   12], 50.00th=[   13], 60.00th=[   13],
     | 70.00th=[   14], 80.00th=[   15], 90.00th=[   20], 95.00th=[   27],
     | 99.00th=[   39], 99.50th=[   54], 99.90th=[   80], 99.95th=[  102],
     | 99.99th=[  262]
   bw (  KiB/s): min= 1408, max=55392, per=100.00%, avg=18660.14, stdev=436.12, samples=5744
   iops        : min=   88, max= 3462, avg=1165.90, stdev=27.26, samples=5744
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.05%, 10=11.72%, 20=78.66%, 50=8.98%
  lat (msec)   : 100=0.52%, 250=0.04%, 500=0.02%
  cpu          : usr=0.12%, sys=0.94%, ctx=210472, majf=0, minf=207
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,209546,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=18.2MiB/s (19.1MB/s), 18.2MiB/s-18.2MiB/s (19.1MB/s-19.1MB/s), io=3274MiB (3433MB), run=180019-180019msec

Disk stats (read/write):
  vda: ios=0/210634, merge=0/32634, ticks=0/3007678, in_queue=3060897, util=100.00%

NOTE: VM Guest-1 and VM Guest-3 are running on the Proxmox hypervisor, while VM Guest-2 is running on KVM/QEMU on Red Hat.
 
Set KVM hardware virtualization back to Yes in the Options of the VM.
Can you share the VM configuration with the command qm config VMID (replace VMID with the number of your VM)?

EDIT: I misunderstood the question and did not realize that the virtual disk I/O was slow and not the CPU.
No, it's not your fault. Indeed I had issues with my CPU, but later I discovered that the real numbers are different: while the stress command might give you a single figure like 2712 inside the VM, on the hypervisor you get a six-digit figure like 312954. In the end I ran a test with 7z and figured out the actual CPU power.
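For reference, this is roughly how the comparison can be done with 7-Zip's built-in benchmark (just a sketch; it assumes the p7zip-full package is installed on both the host and in the guest):

Code:
# run on the Proxmox host and again inside the guest, then compare the "Tot:" MIPS ratings
# (with KVM enabled, the guest should land in the same ballpark, scaled by its assigned cores)
7z b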

Regarding qm config

Code:
agent: 0
autostart: 0
boot: order=virtio0
cores: 4
hotplug: disk,network,usb
kvm: 1
localtime: 1
machine: q35
memory: 4096
meta: creation-qemu=6.2.0,ctime=1665554950
name: VMA
net0: virtio=AE:B8:4E:5A:94:C7,bridge=vmbr0
net1: virtio=B2:96:70:D5:B2:E6,bridge=vmbr1
numa: 1
ostype: l26
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=52aa6351-ff45-4658-9615-15d55a696392
sockets: 1
tablet: 0
virtio0: local-zfs:vm-101-disk-0,discard=on,iothread=1,size=32G
virtio1: local-zfs:vm-101-disk-1,discard=on,iothread=1,size=17G
virtio2: local-zfs:vm-101-disk-2,discard=on,iothread=1,size=150G
virtio3: local-zfs:vm-101-disk-3,discard=on,iothread=1,size=500G
virtio4: backup-zfs:vm-101-disk-1,backup=0,cache=writeback,discard=on,iothread=1,size=1200G
virtio5: local-zfs:vm-101-disk-4,backup=0,discard=on,iothread=1,size=100G
vmgenid: b729eb5e-20a5-4a1d-be96-3593cbdf711c

But my problematic VM is an old Debian 5 with 3.6.6 Kernel, and while ALL other VMs are running fine, this one is not. So what options do I have to improve it?
 
But my problematic VM is an old Debian 5 with 3.6.6 Kernel, and while ALL other VMs are running fine, this one is not. So what options do I have to improve it?
Make a full clone (or make sure you have working backups!), try using VirtIO SCSI single as the virtual SCSI controller, and switch the virtual drives from VirtIO Block to SCSI. You might need to make changes inside the VM for this switch to work. This allows you to actually use IO Thread and distribute the I/O over separate threads, so they interfere less with other parts of the VM.
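If you prefer the CLI over the GUI, the switch could look roughly like this (only a sketch; VMID 101 and the disk name are inferred from the config you posted, and the guest's fstab/bootloader may need adjusting because /dev/vdX becomes /dev/sdX):

Code:
# use the single-controller variant so each disk can get its own IO thread
qm set 101 --scsihw virtio-scsi-single
# detach the VirtIO Block disk (it shows up as an unused volume) ...
qm set 101 --delete virtio0
# ... and re-attach the same volume as a SCSI disk with IO thread enabled
qm set 101 --scsi0 local-zfs:vm-101-disk-0,discard=on,iothread=1
# boot from the new SCSI disk
qm set 101 --boot order=scsi0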
 
Make a full clone (or make sure you have working backups!), try using VirtIO SCSI single as the virtual SCSI controller, and switch the virtual drives from VirtIO Block to SCSI. You might need to make changes inside the VM for this switch to work. This allows you to actually use IO Thread and distribute the I/O over separate threads, so they interfere less with other parts of the VM.
It was VirtIO SCSI single before, with similar results. Regarding the virtual drives, changing to SCSI is worse :(

Code:
VirtIO SCSI single - Virtio Disk
srv:~# dd if=/dev/zero of=/dev/vdf bs=1M count=5120 conv=notrunc oflag=direct
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 9.96695 s, 539 MB/s

echo 3 > /proc/sys/vm/drop_caches
srv:~# dd if=/dev/vdf of=tempfile bs=1024M count=5 conv=notrunc oflag=direct
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB) copied, 18.2718 s, 294 MB/s



Code:
VirtIO SCSI single - SCSI Disk


srv:~# dd if=/dev/zero of=/dev/sda bs=1M count=5120 conv=notrunc oflag=direct
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 12.1991 s, 440 MB/s


echo 3 > /proc/sys/vm/drop_caches
srv:~# dd if=/dev/sda of=tempfile bs=1024M count=5 conv=notrunc oflag=direct
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB) copied, 26.0489 s, 206 MB/s
 
I did that also with the latest Debian, and the results are still very poor.

So none of your "ALL other VMs" that "are running fine" is a recent Debian one, okay. But what actually are those other VMs you are comparing to, and what are their results? What are the results on the PVE host itself?

But better compare with a proper benchmark tool like fio!

Alongside that, I would try with CPU type: host.
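For reference, a sketch of setting that from the CLI (VMID 102 only as an example):

Code:
qm set 102 --cpu host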

What does the underlying storage look like? What is/are the exact model number(s) of the disks used? What controller are they connected to? Is it an HBA in IT mode? What ZFS RAID types are used?
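Most of that can be gathered on the PVE host with something like this (a sketch; smartctl needs the smartmontools package):

Code:
zpool status -v                          # pool layout / raid type
lsblk -o NAME,MODEL,SIZE,ROTA,TRAN       # disk models and transport
smartctl -a /dev/sda                     # repeat for each member disk
lspci | grep -i -e raid -e sas           # the controller the disks hang off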
 
I am trying to post as much info as I can.

My current system is a Dell with the below info:


DELL PowerEdge R630


Firmware/component inventory (component, then version):
Code:
Power Supply.Slot.2                                            00.14.7A
Integrated Dell Remote Access Controller                       2.83.83.83
Intel(R) Ethernet 10G 4P X540/I350 rNDC - 24:6E:96:1E:4C:8A    19.5.12
Intel(R) Gigabit 4P X540/I350 rNDC - 24:6E:96:1E:4C:8C         19.5.12
Intel(R) Gigabit 4P X540/I350 rNDC - 24:6E:96:1E:4C:8D         19.5.12
Intel(R) Ethernet 10G 4P X540/I350 rNDC - 24:6E:96:1E:4C:88    19.5.12
BIOS                                                           2.15.0
PERC H330 Mini                                                 25.5.9.0001
Disk 0 in Backplane 1 of Integrated RAID Controller 1          DN04
Disk 5 in Backplane 1 of Integrated RAID Controller 1          DN04
Disk 1 in Backplane 1 of Integrated RAID Controller 1          DN04
Disk 6 in Backplane 1 of Integrated RAID Controller 1          DN04
BP13G+ 0:1                                                     2.25
Lifecycle Controller                                           2.83.83.83
Dell 32 Bit uEFI Diagnostics, version 4239, 4239A36, 4239.44   4239A36
Dell OS Driver Pack, 18.12.04, A00                             18.12.04
OS COLLECTOR 1.1, OSC_1.1, A00                                 OSC_1.1
System CPLD                                                    1.0.1


4x SAS Disks as below

MaxCapableSpeed 12Gbs
MediaType HDD
Model AL14SEB18EP

The PERC H330 Mini is configured in HBA mode.

Proxmox Configuration:




Code:
zpool status -v
  pool: backup01
 state: ONLINE
  scan: scrub repaired 0B in 00:13:38 with 0 errors on Sun Feb 12 00:37:39 2023
config:


        NAME        STATE     READ WRITE CKSUM
        backup01    ONLINE       0     0     0
          sdc       ONLINE       0     0     0


errors: No known data errors


  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 01:48:06 with 0 errors on Sun Feb 12 02:12:08 2023
config:


        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            scsi-35000039838031c89-part3  ONLINE       0     0     0
            scsi-3500003983803294d-part3  ONLINE       0     0     0




Code:
/proc/cpuinfo

processor       : 11
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz
stepping        : 2
microcode       : 0x49
cpu MHz         : 3700.000
cache size      : 20480 KB
physical id     : 0
siblings        : 12
core id         : 5
cpu cores       : 6
apicid          : 11
initial apicid  : 11
fpu             : yes
fpu_exception   : yes
cpuid level     : 15
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
vmx flags       : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_stale_data
bogomips        : 6799.95
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:


Code:
qm config 102
agent: 0
autostart: 0
boot: order=virtio0
cores: 4
hotplug: 0
localtime: 1
memory: 4096
meta: creation-qemu=6.2.0,ctime=1665554950
name: pve-zzzdummyiii-16k
net0: virtio=32:1F:E8:4F:44:8D,bridge=vmbr0
net1: virtio=82:D6:66:F8:DA:64,bridge=vmbr1,link_down=1
numa: 1
ostype: l26
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=d507b2c3-1eab-4509-a56a-d00fa65f1a05
sockets: 1
tablet: 0
virtio0: pgdata:vm-102-disk-0,discard=on,iothread=1,size=32G
virtio1: pgdata:vm-102-disk-1,discard=on,iothread=1,size=17G
virtio2: pgdata:vm-102-disk-2,discard=on,iothread=1,size=150G
virtio3: pgdata:vm-102-disk-3,discard=on,iothread=1,size=500G
virtio4: pgdata:vm-102-disk-4,backup=0,discard=on,iothread=1,size=100G
vmgenid: edac5a28-f015-4cb3-846d-0498f95b085e
 
FIO TESTS:
I executed the tests in 2 environments, but there are actually 3 sets of results, because the VM guest uses LUKS encryption while a plain /dev/vda also exists for the / mount point. The tests were actually run on pgdata:vm-102-disk-4.
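(Side note: the cost of the LUKS layer itself inside the guest can be estimated separately, assuming the installed cryptsetup is recent enough to have the benchmark subcommand:)

Code:
# rough in-guest ceiling for the dm-crypt/LUKS path, independent of the disks
cryptsetup benchmark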

Proxmox FIO TESTS


Code:
1)
Random write IOPS (4 KB for single I/O) as file:
fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/tmp/fio_test  -name=Rand_Write_Testing
Rand_Write_Testing: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [w(1)][96.7%][w=25.3MiB/s][w=6473 IOPS][eta 00m:04s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=2793260: Wed Feb 22 09:57:32 2023
  write: IOPS=2200, BW=8801KiB/s (9012kB/s)(1024MiB/119140msec); 0 zone resets
    slat (usec): min=8, max=329137, avg=450.17, stdev=2687.93
    clat (usec): min=5, max=969734, avg=57574.37, stdev=61556.48
     lat (usec): min=22, max=976995, avg=58024.90, stdev=61977.31
    clat percentiles (usec):
     |  1.00th=[  1876],  5.00th=[  2180], 10.00th=[  3523], 20.00th=[ 14091],
     | 30.00th=[ 23987], 40.00th=[ 34341], 50.00th=[ 42206], 60.00th=[ 53740],
     | 70.00th=[ 68682], 80.00th=[ 90702], 90.00th=[122160], 95.00th=[152044],
     | 99.00th=[274727], 99.50th=[383779], 99.90th=[700449], 99.95th=[809501],
     | 99.99th=[943719]
   bw (  KiB/s): min=  488, max=79288, per=99.60%, avg=8766.28, stdev=8834.12, samples=237
   iops        : min=  122, max=19822, avg=2191.57, stdev=2208.53, samples=237
  lat (usec)   : 10=0.01%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
  lat (usec)   : 750=0.01%, 1000=0.01%
  lat (msec)   : 2=3.04%, 4=7.85%, 10=5.11%, 20=10.54%, 50=30.47%
  lat (msec)   : 100=26.53%, 250=15.20%, 500=1.05%, 750=0.12%, 1000=0.08%
  cpu          : usr=1.13%, sys=8.71%, ctx=88421, majf=0, minf=13
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=8801KiB/s (9012kB/s), 8801KiB/s-8801KiB/s (9012kB/s-9012kB/s), io=1024MiB (1074MB), run=119140-119140msec


Random write IOPS (4 KB for single I/O) as dev:
fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/zvol/rpool/vm-102-disk-4 -name=Rand_Write_Testing
Rand_Write_Testing: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=30.2MiB/s][w=7720 IOPS][eta 00m:00s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=2756676: Wed Feb 22 09:53:52 2023
  write: IOPS=8336, BW=32.6MiB/s (34.1MB/s)(1024MiB/31447msec); 0 zone resets
    slat (usec): min=2, max=1674, avg= 6.41, stdev= 6.25
    clat (usec): min=937, max=230060, avg=15345.82, stdev=9136.42
     lat (usec): min=942, max=230072, avg=15352.47, stdev=9136.55
    clat percentiles (msec):
     |  1.00th=[    5],  5.00th=[    6], 10.00th=[    7], 20.00th=[    9],
     | 30.00th=[   11], 40.00th=[   13], 50.00th=[   15], 60.00th=[   17],
     | 70.00th=[   18], 80.00th=[   21], 90.00th=[   23], 95.00th=[   26],
     | 99.00th=[   54], 99.50th=[   66], 99.90th=[   95], 99.95th=[  110],
     | 99.99th=[  136]
   bw (  KiB/s): min=18712, max=70816, per=100.00%, avg=33346.58, stdev=12536.28, samples=62
   iops        : min= 4678, max=17704, avg=8336.68, stdev=3134.06, samples=62
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.06%, 4=0.91%, 10=26.05%, 20=53.09%, 50=18.67%
  lat (msec)   : 100=1.14%, 250=0.07%
  cpu          : usr=4.60%, sys=7.43%, ctx=227528, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=32.6MiB/s (34.1MB/s), 32.6MiB/s-32.6MiB/s (34.1MB/s-34.1MB/s), io=1024MiB (1074MB), run=31447-31447msec


2)
Random read IOPS (4KB for single I/O):
fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/zvol/rpool/vm-102-disk-4 -name=Rand_Read_Testing
Rand_Read_Testing: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [r(1)][92.6%][r=127MiB/s][r=32.5k IOPS][eta 00m:05s]
Rand_Read_Testing: (groupid=0, jobs=1): err= 0: pid=2865041: Wed Feb 22 10:05:13 2023
  read: IOPS=4140, BW=16.2MiB/s (16.0MB/s)(1024MiB/63315msec)
    slat (usec): min=3, max=1154, avg= 6.15, stdev= 4.66
    clat (usec): min=585, max=326092, avg=30906.47, stdev=34882.82
     lat (usec): min=589, max=326098, avg=30912.86, stdev=34884.17
    clat percentiles (usec):
     |  1.00th=[   660],  5.00th=[  1106], 10.00th=[  1844], 20.00th=[  3621],
     | 30.00th=[  5997], 40.00th=[  9634], 50.00th=[ 15533], 60.00th=[ 23987],
     | 70.00th=[ 39060], 80.00th=[ 58983], 90.00th=[ 86508], 95.00th=[104334],
     | 99.00th=[135267], 99.50th=[152044], 99.90th=[185598], 99.95th=[202376],
     | 99.99th=[246416]
   bw (  KiB/s): min= 4224, max=163768, per=95.40%, avg=15799.68, stdev=22701.83, samples=126
   iops        : min= 1056, max=40944, avg=3949.94, stdev=5675.56, samples=126
  lat (usec)   : 750=2.31%, 1000=1.83%
  lat (msec)   : 2=6.73%, 4=11.15%, 10=18.86%, 20=14.92%, 50=20.27%
  lat (msec)   : 100=17.81%, 250=6.10%, 500=0.01%
  cpu          : usr=2.17%, sys=3.40%, ctx=162876, majf=0, minf=139
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: bw=16.2MiB/s (16.0MB/s), 16.2MiB/s-16.2MiB/s (16.0MB/s-16.0MB/s), io=1024MiB (1074MB), run=63315-63315msec
 

3) Sequential write throughput (write bandwidth) (1024 KB for single I/O):
fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/zvol/rpool/vm-102-disk-4 -name=Write_PPS_Testing
Write_PPS_Testing: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [f(1)][-.-%][eta 00m:00s]
Write_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=47519: Wed Feb 22 10:18:21 2023
  write: IOPS=2443, BW=2444MiB/s (2563MB/s)(1024MiB/419msec); 0 zone resets
    slat (usec): min=49, max=4537, avg=291.74, stdev=422.64
    clat (usec): min=1224, max=62234, avg=25352.09, stdev=13290.72
     lat (usec): min=1368, max=62393, avg=25644.30, stdev=13266.78
    clat percentiles (usec):
     |  1.00th=[ 7635],  5.00th=[ 9896], 10.00th=[10683], 20.00th=[12911],
     | 30.00th=[13960], 40.00th=[17433], 50.00th=[23462], 60.00th=[28443],
     | 70.00th=[31851], 80.00th=[38011], 90.00th=[43779], 95.00th=[50070],
     | 99.00th=[58983], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129],
     | 99.99th=[62129]
  lat (msec)   : 2=0.20%, 4=0.10%, 10=5.57%, 20=36.13%, 50=52.25%
  lat (msec)   : 100=5.76%
  cpu          : usr=31.10%, sys=13.64%, ctx=196, majf=0, minf=10
  IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=2444MiB/s (2563MB/s), 2444MiB/s-2444MiB/s (2563MB/s-2563MB/s), io=1024MiB (1074MB), run=419-419msec

4) Sequential read throughput (read bandwidth) (1024 KB for single I/O):
fio -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/zvol/rpool/vm-102-disk-4 -name=Read_PPS_Testing
Read_PPS_Testing: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process

Read_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=60667: Wed Feb 22 10:19:15 2023
  read: IOPS=2844, BW=2844MiB/s (2983MB/s)(1024MiB/360msec)
    slat (usec): min=44, max=4717, avg=337.96, stdev=655.39
    clat (usec): min=2585, max=33005, avg=20967.40, stdev=3535.14
     lat (usec): min=2678, max=33537, avg=21306.36, stdev=3534.44
    clat percentiles (usec):
     |  1.00th=[ 6521],  5.00th=[16909], 10.00th=[18744], 20.00th=[19530],
     | 30.00th=[20055], 40.00th=[20579], 50.00th=[21103], 60.00th=[21365],
     | 70.00th=[21890], 80.00th=[22676], 90.00th=[24249], 95.00th=[26346],
     | 99.00th=[30278], 99.50th=[31851], 99.90th=[32637], 99.95th=[32900],
     | 99.99th=[32900]
  lat (msec)   : 4=0.29%, 10=2.15%, 20=24.80%, 50=72.75%
  cpu          : usr=2.23%, sys=34.82%, ctx=170, majf=0, minf=16395
  IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=2844MiB/s (2983MB/s), 2844MiB/s-2844MiB/s (2983MB/s-2983MB/s), io=1024MiB (1074MB), run=360-360msec


5) Random write latency (4 KB for single I/O):
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/dev/zvol/rpool/vm-102-disk-4 -name=Rand_Write_Latency_Testing
Rand_Write_Latency_Testing: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [f(1)][100.0%][eta 00m:00s]                          
Rand_Write_Latency_Testing: (groupid=0, jobs=1): err= 0: pid=61639: Wed Feb 22 10:20:29 2023
  write: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(1024MiB/21425msec); 0 zone resets
    slat (usec): min=3, max=638, avg= 7.53, stdev= 7.69
    clat (nsec): min=1620, max=2096.1k, avg=72279.61, stdev=61550.38
     lat (usec): min=17, max=2115, avg=80.05, stdev=61.91
    clat percentiles (usec):
     |  1.00th=[    4],  5.00th=[   18], 10.00th=[   18], 20.00th=[   19],
     | 30.00th=[   26], 40.00th=[   32], 50.00th=[   38], 60.00th=[   51],
     | 70.00th=[  125], 80.00th=[  137], 90.00th=[  169], 95.00th=[  180],
     | 99.00th=[  196], 99.50th=[  206], 99.90th=[  265], 99.95th=[  347],
     | 99.99th=[  594]
   bw (  KiB/s): min=20000, max=117760, per=99.15%, avg=48527.43, stdev=32207.42, samples=42
   iops        : min= 5000, max=29440, avg=12131.90, stdev=8051.82, samples=42
  lat (usec)   : 2=0.03%, 4=1.36%, 10=0.03%, 20=19.26%, 50=38.96%
  lat (usec)   : 100=4.53%, 250=35.70%, 500=0.09%, 750=0.02%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=7.10%, sys=11.10%, ctx=264987, majf=1, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=1024MiB (1074MB), run=21425-21425msec
 

6) Random read latency (4KB for single I/O):
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/dev/zvol/rpool/vm-102-disk-4 -name=Rand_Read_Latency_Testingrandwrite
Rand_Read_Latency_Testingrandwrite: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.25
Starting 1 process
Jobs: 1 (f=0): [f(1)][100.0%][r=125MiB/s][r=32.1k IOPS][eta 00m:00s]
Rand_Read_Latency_Testingrandwrite: (groupid=0, jobs=1): err= 0: pid=103088: Wed Feb 22 10:23:33 2023
  read: IOPS=29.7k, BW=116MiB/s (122MB/s)(1024MiB/8813msec)
    slat (usec): min=2, max=476, avg= 5.53, stdev= 1.73
    clat (nsec): min=2000, max=933261, avg=26764.69, stdev=9414.20
     lat (usec): min=13, max=939, avg=32.48, stdev=10.03
    clat percentiles (usec):
     |  1.00th=[   12],  5.00th=[   14], 10.00th=[   16], 20.00th=[   21],
     | 30.00th=[   23], 40.00th=[   25], 50.00th=[   26], 60.00th=[   30],
     | 70.00th=[   31], 80.00th=[   31], 90.00th=[   40], 95.00th=[   42],
     | 99.00th=[   49], 99.50th=[   52], 99.90th=[   61], 99.95th=[   72],
     | 99.99th=[  186]
   bw (  KiB/s): min=73952, max=147760, per=99.13%, avg=117947.76, stdev=19614.61, samples=17
   iops        : min=18488, max=36940, avg=29486.94, stdev=4903.65, samples=17
  lat (usec)   : 4=0.01%, 10=0.01%, 20=16.34%, 50=82.97%, 100=0.65%
  lat (usec)   : 250=0.03%, 500=0.01%, 750=0.01%, 1000=0.01%
  cpu          : usr=12.23%, sys=23.63%, ctx=262146, majf=0, minf=11
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=1024MiB (1074MB), run=8813-8813msec
 
Guest VM FIO TESTS using filename=/tmp/fio_test

Code:
1)
Random write IOPS (4 KB for single I/O):

fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=posixaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/tmp/fio_test  -name=Rand_Write_Testing
Rand_Write_Testing: (g=0): rw=randwrite, bs=4096B-4096B,4096B-4096B,4096B-4096B, ioengine=posixaio, iodepth=128
fio-2.18
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=29.6MiB/s][r=0,w=7552 IOPS][eta 00m:00s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=5026: Wed Feb 22 10:55:33 2023
  write: IOPS=6768, BW=26.5MiB/s (27.8MB/s)(1024MiB/38731msec)
    slat (usec): min=0, max=1887, avg= 1.54, stdev= 3.87
    clat (usec): min=788, max=99606, avg=18572.28, stdev=4398.05
     lat (usec): min=790, max=99609, avg=18573.82, stdev=4398.11
    clat percentiles (usec):
     |  1.00th=[13632],  5.00th=[15040], 10.00th=[15680], 20.00th=[16320],
     | 30.00th=[16768], 40.00th=[17536], 50.00th=[17792], 60.00th=[18304],
     | 70.00th=[19072], 80.00th=[19840], 90.00th=[21376], 95.00th=[23168],
     | 99.00th=[36608], 99.50th=[42752], 99.90th=[66048], 99.95th=[75264],
     | 99.99th=[99840]
    lat (usec) : 1000=0.01%
    lat (msec) : 2=0.01%, 10=0.03%, 20=81.87%, 50=17.79%, 100=0.30%
  cpu          : usr=4.34%, sys=17.47%, ctx=271513, majf=0, minf=56
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=5.2%, 16=12.5%, 32=25.0%, >=64=57.2%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=98.9%, 8=0.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
     issued rwt: total=0,262144,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=1024MiB (1074MB), run=38731-38731msec

Disk stats (read/write):
  vda: ios=0/261608, merge=0/153, ticks=0/31484, in_queue=31212, util=77.35%

2)
Random read IOPS (4KB for single I/O):
fio -direct=1 -iodepth=128 -rw=randread -ioengine=posixaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/tmp/fio-test -name=Rand_Read_Testing
Rand_Read_Testing: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=128
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [r(1)][92.6%][r=127MiB/s][r=32.5k IOPS][eta 00m:05s]
Rand_Read_Testing: (groupid=0, jobs=1): err= 0: pid=2865041: Wed Feb 22 10:05:13 2023
  read: IOPS=4140, BW=16.2MiB/s (16.0MB/s)(1024MiB/63315msec)
    slat (usec): min=3, max=1154, avg= 6.15, stdev= 4.66
    clat (usec): min=585, max=326092, avg=30906.47, stdev=34882.82
     lat (usec): min=589, max=326098, avg=30912.86, stdev=34884.17
    clat percentiles (usec):
     |  1.00th=[   660],  5.00th=[  1106], 10.00th=[  1844], 20.00th=[  3621],
     | 30.00th=[  5997], 40.00th=[  9634], 50.00th=[ 15533], 60.00th=[ 23987],
     | 70.00th=[ 39060], 80.00th=[ 58983], 90.00th=[ 86508], 95.00th=[104334],
     | 99.00th=[135267], 99.50th=[152044], 99.90th=[185598], 99.95th=[202376],
     | 99.99th=[246416]
   bw (  KiB/s): min= 4224, max=163768, per=95.40%, avg=15799.68, stdev=22701.83, samples=126
   iops        : min= 1056, max=40944, avg=3949.94, stdev=5675.56, samples=126
  lat (usec)   : 750=2.31%, 1000=1.83%
  lat (msec)   : 2=6.73%, 4=11.15%, 10=18.86%, 20=14.92%, 50=20.27%
  lat (msec)   : 100=17.81%, 250=6.10%, 500=0.01%
  cpu          : usr=2.17%, sys=3.40%, ctx=162876, majf=0, minf=139
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: bw=16.2MiB/s (16.0MB/s), 16.2MiB/s-16.2MiB/s (16.0MB/s-16.0MB/s), io=1024MiB (1074MB), run=63315-63315msec
 

3) Sequential write throughput (write bandwidth) (1024 KB for single I/O):
fio -direct=1 -iodepth=64 -rw=write -ioengine=posixaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/tmp/fio-test -name=Write_PPS_Testing
Write_PPS_Testing: (g=0): rw=write, bs=1024KiB-1024KiB,1024KiB-1024KiB,1024KiB-1024KiB, ioengine=posixaio, iodepth=64
fio-2.18
Starting 1 process
Write_PPS_Testing: Laying out IO file(s) (1 file(s) / 1024MiB)
Jobs: 1 (f=1)
Write_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=3559: Wed Feb 22 11:23:03 2023
  write: IOPS=832, BW=833MiB/s (873MB/s)(1024MiB/1230msec)
    slat (usec): min=54, max=786, avg=105.49, stdev=32.39
    clat (msec): min=6, max=135, avg=73.27, stdev=16.17
     lat (msec): min=6, max=135, avg=73.38, stdev=16.18
    clat percentiles (msec):
     |  1.00th=[   59],  5.00th=[   61], 10.00th=[   63], 20.00th=[   66],
     | 30.00th=[   68], 40.00th=[   69], 50.00th=[   71], 60.00th=[   72],
     | 70.00th=[   73], 80.00th=[   75], 90.00th=[   86], 95.00th=[  125],
     | 99.00th=[  130], 99.50th=[  133], 99.90th=[  137], 99.95th=[  137],
     | 99.99th=[  137]
    lat (msec) : 10=0.49%, 100=92.77%, 250=6.74%
  cpu          : usr=9.44%, sys=11.39%, ctx=1085, majf=0, minf=55
  IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=12.0%, 16=25.0%, 32=55.3%, >=64=7.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=97.6%, 8=0.9%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
     issued rwt: total=0,1024,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=833MiB/s (873MB/s), 833MiB/s-833MiB/s (873MB/s-873MB/s), io=1024MiB (1074MB), run=1230-1230msec

Disk stats (read/write):
  vda: ios=0/2096, merge=0/2278, ticks=0/2044, in_queue=2036, util=74.72%

4) Sequential read throughput (read bandwidth) (1024 KB for single I/O):
fio -direct=1 -iodepth=64 -rw=read -ioengine=posixaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/tmp/fio-test -name=Read_PPS_Testing

Read_PPS_Testing: (g=0): rw=read, bs=1024KiB-1024KiB,1024KiB-1024KiB,1024KiB-1024KiB, ioengine=posixaio, iodepth=64
fio-2.18
Starting 1 process
Jobs: 1 (f=1)
Read_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=3565: Wed Feb 22 11:23:46 2023
   read: IOPS=375, BW=376MiB/s (394MB/s)(1024MiB/2725msec)
    slat (usec): min=0, max=21, avg= 0.85, stdev= 0.80
    clat (msec): min=150, max=282, avg=169.23, stdev=23.92
     lat (msec): min=150, max=282, avg=169.23, stdev=23.92
    clat percentiles (msec):
     |  1.00th=[  153],  5.00th=[  153], 10.00th=[  153], 20.00th=[  155],
     | 30.00th=[  157], 40.00th=[  161], 50.00th=[  163], 60.00th=[  167],
     | 70.00th=[  172], 80.00th=[  176], 90.00th=[  180], 95.00th=[  253],
     | 99.00th=[  253], 99.50th=[  262], 99.90th=[  277], 99.95th=[  281],
     | 99.99th=[  281]
    lat (msec) : 250=93.75%, 500=6.25%
  cpu          : usr=0.00%, sys=11.31%, ctx=1155, majf=0, minf=16443
  IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=11.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
     issued rwt: total=1024,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=376MiB/s (394MB/s), 376MiB/s-376MiB/s (394MB/s-394MB/s), io=1024MiB (1074MB), run=2725-2725msec

Disk stats (read/write):
  vda: ios=2281/0, merge=2478/0, ticks=4948/0, in_queue=4936, util=88.92%

5) Random write latency (4 KB for single I/O):
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=posixaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/fio-test -name=Rand_Write_Latency_Testing
Rand_Write_Latency_Testing: (g=0): rw=randwrite, bs=4096B-4096B,4096B-4096B,4096B-4096B, ioengine=posixaio, iodepth=1
fio-2.18
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=17.5MiB/s][r=0,w=4467 IOPS][eta 00m:00s]
Rand_Write_Latency_Testing: (groupid=0, jobs=1): err= 0: pid=3588: Wed Feb 22 11:25:16 2023
  write: IOPS=4200, BW=16.5MiB/s (17.3MB/s)(1024MiB/62409msec)
    slat (usec): min=0, max=1308, avg= 1.52, stdev= 3.05
    clat (usec): min=135, max=39249, avg=230.73, stdev=227.58
     lat (usec): min=136, max=39260, avg=232.25, stdev=227.77
    clat percentiles (usec):
     |  1.00th=[  155],  5.00th=[  161], 10.00th=[  169], 20.00th=[  181],
     | 30.00th=[  191], 40.00th=[  199], 50.00th=[  209], 60.00th=[  219],
     | 70.00th=[  231], 80.00th=[  247], 90.00th=[  286], 95.00th=[  358],
     | 99.00th=[  636], 99.50th=[  772], 99.90th=[ 1160], 99.95th=[ 1784],
     | 99.99th=[ 8512]
    lat (usec) : 250=81.40%, 500=16.58%, 750=1.46%, 1000=0.39%
    lat (msec) : 2=0.13%, 4=0.02%, 10=0.02%, 20=0.01%, 50=0.01%
  cpu          : usr=11.14%, sys=43.14%, ctx=787371, majf=0, minf=57
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,262144,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=16.5MiB/s (17.3MB/s), 16.5MiB/s-16.5MiB/s (17.3MB/s-17.3MB/s), io=1024MiB (1074MB), run=62409-62409msec

Disk stats (read/write):
  vda: ios=5/262233, merge=0/180, ticks=0/36480, in_queue=36152, util=52.02%

 
6) Random read latency (4KB for single I/O):
fio -direct=1 -iodepth=1 -rw=randread -ioengine=posixaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/fio-test -name=Rand_Read_Latency_Testingrandwrite
Rand_Read_Latency_Testingrandwrite: (g=0): rw=randread, bs=4096B-4096B,4096B-4096B,4096B-4096B, ioengine=posixaio, iodepth=1
fio-2.18
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=19.1MiB/s,w=0KiB/s][r=5090,w=0 IOPS][eta 00m:00s]
Rand_Read_Latency_Testingrandwrite: (groupid=0, jobs=1): err= 0: pid=3634: Wed Feb 22 11:26:47 2023
   read: IOPS=4548, BW=17.8MiB/s (18.7MB/s)(1024MiB/57631msec)
    slat (usec): min=0, max=464, avg= 1.09, stdev= 1.67
    clat (usec): min=130, max=8463, avg=213.24, stdev=81.38
     lat (usec): min=131, max=8463, avg=214.33, stdev=81.86
    clat percentiles (usec):
     |  1.00th=[  151],  5.00th=[  155], 10.00th=[  165], 20.00th=[  179],
     | 30.00th=[  185], 40.00th=[  193], 50.00th=[  201], 60.00th=[  207],
     | 70.00th=[  215], 80.00th=[  229], 90.00th=[  253], 95.00th=[  302],
     | 99.00th=[  564], 99.50th=[  652], 99.90th=[  828], 99.95th=[ 1012],
     | 99.99th=[ 2256]
    lat (usec) : 250=89.39%, 500=9.09%, 750=1.35%, 1000=0.12%
    lat (msec) : 2=0.04%, 4=0.01%, 10=0.01%
  cpu          : usr=11.00%, sys=42.51%, ctx=787154, majf=0, minf=61
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=262144,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=17.8MiB/s (18.7MB/s), 17.8MiB/s-17.8MiB/s (18.7MB/s-18.7MB/s), io=1024MiB (1074MB), run=57631-57631msec

Disk stats (read/write):
  vda: ios=261108/60, merge=0/88, ticks=30384/2928, in_queue=32988, util=53.15%
 
Guest VM FIO TESTS using filename=/shared/dd/fio-test

Code:
1)
Random write IOPS (4 KB for single I/O):
fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=posixaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/shared/dd/fio-test   -name=Rand_Write_Testing
Rand_Write_Testing: (g=0): rw=randwrite, bs=4096B-4096B,4096B-4096B,4096B-4096B, ioengine=posixaio, iodepth=128
fio-2.18
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=15.6MiB/s][r=0,w=3971 IOPS][eta 00m:00s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=4195: Wed Feb 22 11:47:28 2023
  write: IOPS=4251, BW=16.7MiB/s (17.5MB/s)(1024MiB/61664msec)
    slat (usec): min=0, max=317, avg= 1.51, stdev= 1.51
    clat (usec): min=848, max=70348, avg=29773.70, stdev=3622.24
     lat (usec): min=850, max=70350, avg=29775.21, stdev=3622.28
    clat percentiles (usec):
     |  1.00th=[21888],  5.00th=[23936], 10.00th=[25984], 20.00th=[27520],
     | 30.00th=[28288], 40.00th=[29056], 50.00th=[29568], 60.00th=[30080],
     | 70.00th=[30848], 80.00th=[31616], 90.00th=[33536], 95.00th=[36096],
     | 99.00th=[42240], 99.50th=[43264], 99.90th=[47872], 99.95th=[54016],
     | 99.99th=[54528]
    lat (usec) : 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.48%, 50=99.46%
    lat (msec) : 100=0.06%
  cpu          : usr=4.73%, sys=11.82%, ctx=273865, majf=0, minf=56
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=4.5%, 16=12.5%, 32=25.0%, >=64=58.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.2%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
     issued rwt: total=0,262144,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=1024MiB (1074MB), run=61664-61664msec

Disk stats (read/write):
    dm-0: ios=0/262093, merge=0/0, ticks=0/54084, in_queue=54076, util=85.49%, aggrios=0/262174, aggrmerge=0/10, aggrticks=0/29872, aggrin_queue=29596, aggrutil=47.35%
  vdc: ios=0/262174, merge=0/10, ticks=0/29872, in_queue=29596, util=47.35%

2)
Random read IOPS (4KB for single I/O):
fio -direct=1 -iodepth=128 -rw=randread -ioengine=posixaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/shared/dd/fio-test -name=Rand_Read_Testing
Rand_Read_Testing: (g=0): rw=randread, bs=4096B-4096B,4096B-4096B,4096B-4096B, ioengine=posixaio, iodepth=128
fio-2.18
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=16.6MiB/s,w=0KiB/s][r=4224,w=0 IOPS][eta 00m:00s]
Rand_Read_Testing: (groupid=0, jobs=1): err= 0: pid=4388: Wed Feb 22 11:55:38 2023
   read: IOPS=4075, BW=15.1MiB/s (16.7MB/s)(1024MiB/64320msec)
    slat (usec): min=0, max=465, avg= 0.84, stdev= 1.33
    clat (msec): min=22, max=74, avg=31.10, stdev= 3.91
     lat (msec): min=22, max=74, avg=31.10, stdev= 3.91
    clat percentiles (usec):
     |  1.00th=[25728],  5.00th=[26496], 10.00th=[27008], 20.00th=[28032],
     | 30.00th=[28800], 40.00th=[29312], 50.00th=[30080], 60.00th=[31104],
     | 70.00th=[32640], 80.00th=[34560], 90.00th=[36096], 95.00th=[37120],
     | 99.00th=[42752], 99.50th=[47360], 99.90th=[56064], 99.95th=[56064],
     | 99.99th=[73216]
    lat (msec) : 50=99.80%, 100=0.20%
  cpu          : usr=4.82%, sys=21.62%, ctx=279676, majf=0, minf=187
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=6.2%, 16=12.5%, 32=25.0%, >=64=56.2%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.3%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
     issued rwt: total=262144,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: bw=15.1MiB/s (16.7MB/s), 15.1MiB/s-15.1MiB/s (16.7MB/s-16.7MB/s), io=1024MiB (1074MB), run=64320-64320msec

Disk stats (read/write):
    dm-0: ios=261376/4, merge=0/0, ticks=52720/272, in_queue=53012, util=82.31%, aggrios=262144/4, aggrmerge=0/1, aggrticks=27796/192, aggrin_queue=27632, aggrutil=42.81%
  vdc: ios=262144/4, merge=0/1, ticks=27796/192, in_queue=27632, util=42.81%


3) Sequential write throughput (write bandwidth) (1024 KB for single I/O):
fio -direct=1 -iodepth=64 -rw=write -ioengine=posixaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/shared/dd/fio-test -name=Write_PPS_Testing
Write_PPS_Testing: (g=0): rw=write, bs=1024KiB-1024KiB,1024KiB-1024KiB,1024KiB-1024KiB, ioengine=posixaio, iodepth=64
fio-2.18
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=64.0MiB/s][r=0,w=64 IOPS][eta 00m:00s]
Write_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=4477: Wed Feb 22 11:58:04 2023
  write: IOPS=71, BW=71.7MiB/s (75.2MB/s)(1024MiB/14297msec)
    slat (usec): min=41, max=686, avg=98.34, stdev=29.22
    clat (msec): min=453, max=1291, avg=886.03, stdev=281.97
     lat (msec): min=453, max=1291, avg=886.13, stdev=281.97
    clat percentiles (msec):
     |  1.00th=[  457],  5.00th=[  457], 10.00th=[  461], 20.00th=[  537],
     | 30.00th=[  570], 40.00th=[  873], 50.00th=[ 1037], 60.00th=[ 1057],
     | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1237], 95.00th=[ 1237],
     | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1287], 99.95th=[ 1287],
     | 99.99th=[ 1287]
    lat (msec) : 500=18.65%, 750=12.60%, 1000=18.36%, 2000=50.39%
  cpu          : usr=0.90%, sys=0.78%, ctx=1161, majf=0, minf=54
  IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=11.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
     issued rwt: total=0,1024,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=71.7MiB/s (75.2MB/s), 71.7MiB/s-71.7MiB/s (75.2MB/s-75.2MB/s), io=1024MiB (1074MB), run=14297-14297msec

Disk stats (read/write):
    dm-0: ios=0/4860, merge=0/0, ticks=0/23416, in_queue=23424, util=97.87%, aggrios=0/4879, aggrmerge=0/2, aggrticks=0/1432, aggrin_queue=1424, aggrutil=9.41%
  vdc: ios=0/4879, merge=0/2, ticks=0/1432, in_queue=1424, util=9.41%

4) Sequential read throughput (read bandwidth) (1024 KB for single I/O):
fio -direct=1 -iodepth=64 -rw=read -ioengine=posixaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/shared/dd/fio-test -name=Read_PPS_Testing
Read_PPS_Testing: (g=0): rw=read, bs=1024KiB-1024KiB,1024KiB-1024KiB,1024KiB-1024KiB, ioengine=posixaio, iodepth=64
fio-2.18
Starting 1 process

Jobs: 1 (f=1): [R(1)][100.0%][r=67.7MiB/s,w=0KiB/s][r=67,w=0 IOPS][eta 00m:00s]
Read_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=4846: Wed Feb 22 12:00:42 2023
   read: IOPS=74, BW=74.7MiB/s (78.3MB/s)(1024MiB/13725msec)
    slat (usec): min=0, max=2, avg= 0.69, stdev= 0.53
    clat (msec): min=478, max=1103, avg=852.94, stdev=147.81
     lat (msec): min=478, max=1103, avg=852.94, stdev=147.81
    clat percentiles (msec):
     |  1.00th=[  490],  5.00th=[  490], 10.00th=[  660], 20.00th=[  676],
     | 30.00th=[  865], 40.00th=[  873], 50.00th=[  889], 60.00th=[  898],
     | 70.00th=[  938], 80.00th=[  996], 90.00th=[ 1004], 95.00th=[ 1004],
     | 99.00th=[ 1029], 99.50th=[ 1057], 99.90th=[ 1090], 99.95th=[ 1106],
     | 99.99th=[ 1106]
    lat (msec) : 500=6.25%, 750=18.75%, 1000=62.50%, 2000=12.50%
  cpu          : usr=0.03%, sys=3.03%, ctx=1160, majf=0, minf=16444
  IO depths    : 1=0.1%, 2=0.2%, 4=0.4%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=11.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
     issued rwt: total=1024,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=1024MiB (1074MB), run=13725-13725msec

Disk stats (read/write):
    dm-0: ios=4850/4, merge=0/0, ticks=23384/8, in_queue=23400, util=97.40%, aggrios=2313/5, aggrmerge=2560/0, aggrticks=2960/0, aggrin_queue=2948, aggrutil=10.89%
  vdc: ios=2313/5, merge=2560/0, ticks=2960/0, in_queue=2948, util=10.89%

5) Random write latency (4 KB for single I/O):
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=posixaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/shared/dd/fio-test -name=Rand_Write_Latency_Testing
Rand_Write_Latency_Testing: (g=0): rw=randwrite, bs=4096B-4096B,4096B-4096B,4096B-4096B, ioengine=posixaio, iodepth=1
fio-2.18
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=11.8MiB/s][r=0,w=3012 IOPS][eta 00m:00s]
Rand_Write_Latency_Testing: (groupid=0, jobs=1): err= 0: pid=4851: Wed Feb 22 12:02:29 2023
  write: IOPS=2948, BW=11.6MiB/s (12.8MB/s)(1024MiB/88896msec)
    slat (usec): min=1, max=296, avg= 1.49, stdev= 1.29
    clat (usec): min=185, max=11057, avg=331.38, stdev=106.51
     lat (usec): min=187, max=11059, avg=332.87, stdev=106.73
    clat percentiles (usec):
     |  1.00th=[  241],  5.00th=[  258], 10.00th=[  266], 20.00th=[  278],
     | 30.00th=[  286], 40.00th=[  298], 50.00th=[  310], 60.00th=[  326],
     | 70.00th=[  342], 80.00th=[  366], 90.00th=[  414], 95.00th=[  458],
     | 99.00th=[  732], 99.50th=[  860], 99.90th=[ 1240], 99.95th=[ 1560],
     | 99.99th=[ 2992]
    lat (usec) : 250=2.71%, 500=94.56%, 750=1.81%, 1000=0.68%
    lat (msec) : 2=0.21%, 4=0.02%, 10=0.01%, 20=0.01%
  cpu          : usr=9.48%, sys=28.66%, ctx=787187, majf=0, minf=56
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,262144,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=11.6MiB/s (12.8MB/s), 11.6MiB/s-11.6MiB/s (12.8MB/s-12.8MB/s), io=1024MiB (1074MB), run=88896-88896msec

Disk stats (read/write):
    dm-0: ios=0/262179, merge=0/0, ticks=0/59876, in_queue=59872, util=64.76%, aggrios=0/262186, aggrmerge=0/14, aggrticks=0/35960, aggrin_queue=35640, aggrutil=39.61%
  vdc: ios=0/262186, merge=0/14, ticks=0/35960, in_queue=35640, util=39.61%

 
6) Random read latency (4KB for single I/O):
fio -direct=1 -iodepth=1 -rw=randread -ioengine=posixaio -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/shared/dd/fio-test -name=Rand_Read_Latency_Testingrandwrite
Rand_Read_Latency_Testingrandwrite: (g=0): rw=randread, bs=4096B-4096B,4096B-4096B,4096B-4096B, ioengine=posixaio, iodepth=1
fio-2.18
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=11.5MiB/s,w=0KiB/s][r=2935,w=0 IOPS][eta 00m:00s]
Rand_Read_Latency_Testingrandwrite: (groupid=0, jobs=1): err= 0: pid=5098: Wed Feb 22 12:11:56 2023
   read: IOPS=2830, BW=11.6MiB/s (11.6MB/s)(1024MiB/92620msec)
    slat (usec): min=0, max=70, avg= 1.18, stdev= 0.72
    clat (usec): min=223, max=8716, avg=345.42, stdev=92.53
     lat (usec): min=224, max=8718, avg=346.60, stdev=92.65
    clat percentiles (usec):
     |  1.00th=[  253],  5.00th=[  266], 10.00th=[  274], 20.00th=[  286],
     | 30.00th=[  298], 40.00th=[  310], 50.00th=[  322], 60.00th=[  338],
     | 70.00th=[  358], 80.00th=[  394], 90.00th=[  454], 95.00th=[  486],
     | 99.00th=[  596], 99.50th=[  724], 99.90th=[ 1144], 99.95th=[ 1400],
     | 99.99th=[ 2768]
    lat (usec) : 250=0.57%, 500=95.58%, 750=3.37%, 1000=0.31%
    lat (msec) : 2=0.15%, 4=0.02%, 10=0.01%
  cpu          : usr=9.81%, sys=34.57%, ctx=787070, majf=0, minf=61
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=262144,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=11.6MiB/s (11.6MB/s), 11.6MiB/s-11.6MiB/s (11.6MB/s-11.6MB/s), io=1024MiB (1074MB), run=92620-92620msec

Disk stats (read/write):
    dm-0: ios=261495/4, merge=0/0, ticks=60672/308, in_queue=60996, util=65.79%, aggrios=262144/4, aggrmerge=0/1, aggrticks=35768/196, aggrin_queue=35620, aggrutil=38.39%
  vdc: ios=262144/4, merge=0/1, ticks=35768/196, in_queue=35620, util=38.39%
 
The tests were actually run on pgdata

Why are you now benchmarking a (completely?) different storage, when your problematic VM has its vdisks on local-zfs and backup-zfs?
I assumed your "running fine" VMs are using the same storage as the problematic one, no?
Otherwise it is not comparable at all, or at least not a really good comparison, depending on what storage pgdata exactly is...

Do you have any VMs that are running fine on local-zfs and/or backup-zfs at all? (At least that is what it sounded like to me, or rather what I assumed, the whole time.)

My point was to get comparable results from the problematic VM and a "running fine" VM that (I thought) both use the same storages, and then to try to find out what the differences (e.g. the exact OS used, which you also did not mention for the "running fine" VMs) might be...

If the problematic VM is the only one on local-zfs and backup-zfs, then I would suggest creating, or better cloning/moving, a known "running fine" VM from e.g. pgdata to local-zfs and/or backup-zfs and benchmarking it on this storage again.
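A rough sketch of that (VMID 100 for the known-good VM and 999 for the clone are only placeholders):

Code:
# full clone of a known "running fine" VM onto the suspect storage, then re-run fio inside the clone
qm clone 100 999 --full --storage local-zfs --name fio-clone-test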

Good luck.
 
G'day,

None of them are working smoothly or fast. pgdata is a test dataset on top of rpool with a 16k block size, created to check whether that has any effect. But as you can see from the benchmarks, I do not get the expected speeds. Keep in mind that the fio tests were performed both inside the VM and on the PVE host itself. What I need is to figure out where the bottleneck is and why there are such gaps.
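For reference, a storage like pgdata (a dataset under rpool whose zvols get a 16k volblocksize) would typically be defined roughly like this; the names are only illustrative:

Code:
zfs create rpool/pgdata
pvesm add zfspool pgdata -pool rpool/pgdata -blocksize 16k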
 
