ZFS over iSCSI + OmniOS problem

ZS-Man

Hi all,
I am trying to connect OmniOS as shared iSCSI storage to Proxmox. The storage is connected and online and I can create a VM, but the VM freezes during boot at [sda] tag#0 abort (screenshot attached). The same problem occurs if I add a new disk to a working VM.

On the storage, the iSCSI target is created:
root@san1-vos:~# itadm list-target -v
TARGET NAME                  STATE    SESSIONS
iqn.2010-09.org.zfs-app:px   online   1
alias: san1-vos
auth: none (defaults)
targetchapuser: -
targetchapsecret: unset
tpg-tags: tgp1 = 2
root@san1-vos:~#


I can see the created VM disk:
root@san1-vos:~# zfs list
NAME                  USED   AVAIL  REFER  MOUNTPOINT
pool1                 114G   3,90T    88K  /pool1
pool1/vm-108-disk-0  33,0G   3,86T    56K  -
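
To double-check on the storage side that the zvol really is exported as a LUN with a view, I can run something like this (the LU name is just a placeholder, taken from stmfadm list-lu):
Code:
root@san1-vos:~# stmfadm list-lu -v
root@san1-vos:~# stmfadm list-view -l <LU-name-of-vm-108-disk-0>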

On Proxmox:
root@pve2:~# pvesm scan iscsi 10.0.1.130
iqn.2010-09.org.zfs-app:px 10.0.1.130:3260

Proxmox storage.cfg:
zfs: SAN1-ISCSI
        blocksize 8k
        iscsiprovider comstar
        pool pool1
        portal 10.0.1.130
        target iqn.2010-09.org.zfs-app:px
        content images
        nodes pve1,pve2
        nowritecache 0
        sparse 0
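
If it matters: as far as I know, the blocksize option here sets the volblocksize of newly created zvols, which can be verified on the storage side, e.g.:
Code:
root@san1-vos:~# zfs get volblocksize pool1/vm-108-disk-0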

I can create a new disk and delete a disk. But if I try to migrate a disk from another storage to this new iSCSI storage, I get an error:

create full clone of drive scsi0 (AAA:107/vm-107-disk-1.raw)
transferred: 0 bytes remaining: 53687091200 bytes total: 53687091200 bytes progression: 0.00 %
qemu-img: iSCSI: NOP timeout. Reconnecting...
qemu-img: iSCSI: NOP timeout. Reconnecting...
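
A quick way to test whether QEMU's built-in iSCSI initiator (which, as far as I know, the ZFS-over-iSCSI plugin uses for these transfers) can reach the LUN at all might be something like this, assuming the disk is exported as LUN 0:
Code:
root@pve2:~# qemu-img info iscsi://10.0.1.130/iqn.2010-09.org.zfs-app:px/0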

This is the VM config:
root@pve2:~# cat /etc/pve/nodes/pve2/qemu-server/108.conf
bootdisk: scsi0
cores: 1
ide2: ISO:iso/debian-9.3.0-amd64-netinst.iso,media=cdrom
memory: 512
name: iscsi-test
net0: virtio=C6:E5:B2:95:62:75,bridge=vmbr0
numa: 0
ostype: l26
scsi0: SAN1-ISCSI:vm-108-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=5315b62b-8911-4dd3-bb67-dc51b94b9668
sockets: 1
vga: qxl,memory=16
vmgenid: aba3d35a-cf0a-4f99-a84d-772ee16a47f4
root@pve2:~#


On a Windows 10 VM, the newly added disk is detected at boot, but I am not able to format it. After a reset, Win10 never finishes booting.


The Proxmox wiki at https://pve.proxmox.com/wiki/Iscsi/nappit#comstar_for_kvm says:
4. Portal group to iSCSI target: Without a portal group, iSCSI targets are published on every valid IP interface. Comstar > Target Portal Groups > add member. Select the portal group from above.
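
As far as I can tell, the CLI equivalent of that wiki step would be roughly the following (and I think I already have this in place, see the tpg-tags line in the itadm output above):
Code:
root@san1-vos:~# itadm create-tpg tgp1 10.0.1.130
root@san1-vos:~# itadm modify-target -t tgp1 iqn.2010-09.org.zfs-app:px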

But in the menu Comstar > Target Portal Groups there is no "add member" choice... Is this the problem?

Thanks for any help.
 

Attachments

  • VM-install.png
Seems solved: bad MTU settings on the storage network interface. Now iSCSI is connected and working.
But it is too slow... (compared to NFS on the same storage server)
 
What hardware (CPU, RAM, NICs, disks) is in the OmniOS server?
And how is the ZFS pool configured?
 
Hi,
OmniOS runs on a Supermicro server:
1x E5-1630 v4 @ 3.70GHz
128 GB RAM
10x Toshiba HDD, 900GB, 10k rpm, SAS3, 512e
Dual-port 10Gbit AOC-STGS-I2T (Intel X550) for the storage network
LSI HBA 9305-16i
Intel Optane 900p, partitioned for SLOG and L2ARC

For now I use only one Ethernet link, no bonding, no multipath.

Code:
root@san1-vos:~# zpool status
  pool: pool1
 state: ONLINE
  scan: none requested
config:

        NAME                        STATE     READ WRITE CKSUM
        pool1                       ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            c1t50000398E801CBB6d0   ONLINE       0     0     0
            c2t50000398E801D14Ad0   ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            c3t50000398E801CE5Ed0   ONLINE       0     0     0
            c4t50000398E801CE46d0   ONLINE       0     0     0
          mirror-2                  ONLINE       0     0     0
            c5t50000398E801CE5Ad0   ONLINE       0     0     0
            c6t50000398E801CBBAd0   ONLINE       0     0     0
          mirror-3                  ONLINE       0     0     0
            c7t50000398E801CF62d0   ONLINE       0     0     0
            c8t50000398E801D0EEd0   ONLINE       0     0     0
          mirror-4                  ONLINE       0     0     0
            c10t50000398E801D136d0  ONLINE       0     0     0
            c9t50000398E801CEFAd0   ONLINE       0     0     0
        logs
          c11t1d0p1                 ONLINE       0     0     0
        cache
          c11t1d0p2                 ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c13t0d0   ONLINE       0     0     0

errors: No known data errors

atime and dedup are off.

I have tried different ZFS sync modes and VM disk cache types, but there is still a big difference between NFS and iSCSI.
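
For the record, I am switching the sync mode on the storage side roughly like this:
Code:
root@san1-vos:~# zfs get sync,logbias pool1
root@san1-vos:~# zfs set sync=standard pool1
root@san1-vos:~# zfs set sync=always pool1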

Here is the VM config:
Code:
agent: 1
balloon: 0
boot: cdn
bootdisk: virtio0
cores: 4
cpu: kvm64,flags=+pcid
ide0: ISO:iso/virtio-win-0.1.160.iso,media=cdrom,size=315276K
ide2: ISO:iso/Win10_1803_Czech_x64.iso,media=cdrom,size=4463976K
memory: 4096
name: W10Admin
net0: virtio=BA:28:88:CF:1E:EB,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
scsi1: SAN1-NFS:104/vm-104-disk-0.raw,cache=writeback,size=10G
scsi2: SAN1-ISCSI:vm-104-disk-0,cache=directsync,size=15G
scsihw: virtio-scsi-pci
smbios1: uuid=6deab8a1-fba0-45ce-b23c-87571bcf1c7b
sockets: 1
usb0: spice
usb1: spice
vga: qxl
virtio0: AAA:104/vm-104-disk-1.raw,cache=writeback,size=100G

Attached is a screenshot of the Windows 10 ATTO benchmark; drive F: is on NFS, drive G: on iSCSI.

Later I will add a fio benchmark from a Linux VM (do you have any recommended fio parameters?)

Thanks.
 

Attachments

  • ATTO-Win10.png
Try using fio for the disk benchmark with the job file below (copy it into a file and pass that file to fio).
Code:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
# IOMeter defines the server loads as the following:
# iodepth=1    Linear
# iodepth=4    Very Light
# iodepth=8    Light
# iodepth=64    Moderate
# iodepth=256    Heavy
iodepth=64

If you run it on Windows, change ioengine to windowsaio.
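
Assuming you save it as e.g. iometer.fio, run it from a directory on the disk you want to test (fio will lay out a 4 GB test file there; the mount point below is just an example):
Code:
cd /mnt/test-disk
fio iometer.fio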
 
Code:
root@san1-vos:~# zpool get all pool1       
NAME   PROPERTY                       VALUE                          SOURCE
pool1  size                           4,06T                          -
pool1  capacity                       0%                             -
pool1  altroot                        -                              default
pool1  health                         ONLINE                         -
pool1  guid                           17794950970888167795           default
pool1  version                        -                              default
pool1  bootfs                         -                              default
pool1  delegation                     on                             default
pool1  autoreplace                    off                            default
pool1  cachefile                      -                              default
pool1  failmode                       wait                           default
pool1  listsnapshots                  off                            default
pool1  autoexpand                     off                            default
pool1  dedupditto                     0                              default
pool1  dedupratio                     1.00x                          -
pool1  free                           4,05T                          -
pool1  allocated                      8,14G                          -
pool1  readonly                       off                            -
pool1  comment                        -                              default
pool1  expandsize                     -                              -
pool1  freeing                        0                              default
pool1  fragmentation                  0%                             -
pool1  leaked                         0                              default
pool1  bootsize                       -                              default
pool1  checkpoint                     -                              -
pool1  feature@async_destroy          enabled                        local
pool1  feature@empty_bpobj            active                         local
pool1  feature@lz4_compress           active                         local
pool1  feature@multi_vdev_crash_dump  enabled                        local
pool1  feature@spacemap_histogram     active                         local
pool1  feature@enabled_txg            active                         local
pool1  feature@hole_birth             active                         local
pool1  feature@extensible_dataset     enabled                        local
pool1  feature@embedded_data          active                         local
pool1  feature@bookmarks              enabled                        local
pool1  feature@filesystem_limits      enabled                        local
pool1  feature@large_blocks           enabled                        local
pool1  feature@sha512                 enabled                        local
pool1  feature@skein                  enabled                        local
pool1  feature@edonr                  enabled                        local
pool1  feature@device_removal         enabled                        local
pool1  feature@obsolete_counts        enabled                        local
pool1  feature@zpool_checkpoint       enabled                        local
pool1  feature@spacemap_v2            active                         local
root@san1-vos:~#

Code:
root@san1-vos:~# zfs get all pool1         
NAME   PROPERTY              VALUE                  SOURCE
pool1  type                  filesystem             -
pool1  creation              st úno 13 16:09 2019   -
pool1  used                  126G                   -
pool1  available             3,89T                  -
pool1  referenced            88K                    -
pool1  compressratio         1.00x                  -
pool1  mounted               yes                    -
pool1  quota                 none                   default
pool1  reservation           none                   default
pool1  recordsize            128K                   default
pool1  mountpoint            /pool1                 default
pool1  sharenfs              off                    default
pool1  checksum              on                     default
pool1  compression           lz4                    local
pool1  atime                 off                    local
pool1  devices               on                     default
pool1  exec                  on                     default
pool1  setuid                on                     default
pool1  readonly              off                    default
pool1  zoned                 off                    default
pool1  snapdir               hidden                 default
pool1  aclmode               passthrough            local
pool1  aclinherit            passthrough            local
pool1  createtxg             1                      -
pool1  canmount              on                     default
pool1  xattr                 on                     default
pool1  copies                1                      default
pool1  version               5                      -
pool1  utf8only              off                    -
pool1  normalization         none                   -
pool1  casesensitivity       sensitive              -
pool1  vscan                 off                    default
pool1  nbmand                off                    default
pool1  sharesmb              off                    default
pool1  refquota              none                   default
pool1  refreservation        80,6G                  local
pool1  guid                  8968636203282786161    -
pool1  primarycache          all                    default
pool1  secondarycache        all                    default
pool1  usedbysnapshots       0                      -
pool1  usedbydataset         88K                    -
pool1  usedbychildren        45,3G                  -
pool1  usedbyrefreservation  80,6G                  -
pool1  logbias               latency                default
pool1  dedup                 off                    default
pool1  mlslabel              none                   default
pool1  sync                  always                 local
pool1  refcompressratio      1.00x                  -
pool1  written               88K                    -
pool1  logicalused           8,14G                  -
pool1  logicalreferenced     36,5K                  -
pool1  filesystem_limit      none                   default
pool1  snapshot_limit        none                   default
pool1  filesystem_count      none                   default
pool1  snapshot_count        none                   default
pool1  redundant_metadata    all                    default
root@san1-vos:~#

For the fio test I have a cleanly installed Ubuntu VM. The boot disk is on my old NFS storage; I then added two disks, formatted them ext4 and mounted them: one from OmniOS NFS, the second from OmniOS iSCSI.
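
The test disks were prepared roughly like this (device names and the iSCSI mount point are just examples):
Code:
mkfs.ext4 /dev/sdb && mkdir -p /mnt/nfs   && mount /dev/sdb /mnt/nfs
mkfs.ext4 /dev/sdc && mkdir -p /mnt/iscsi && mount /dev/sdc /mnt/iscsi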

NFS, cache writeback:
zpool iostat -v pool1 3
Code:
                               capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
pool1                       8,33G  4,05T      0  5,82K      0   599M
  mirror                    1,66G   830G      0    699      0  83,2M
    c1t50000398E801CBB6d0       -      -      0    679      0  83,2M
    c2t50000398E801D14Ad0       -      -      0    680      0  83,2M
  mirror                    1,65G   830G      0    670      0  79,5M
    c3t50000398E801CE5Ed0       -      -      0    651      0  79,6M
    c4t50000398E801CE46d0       -      -      0    651      0  79,6M
  mirror                    1,75G   830G      0    778      0  85,0M
    c5t50000398E801CE5Ad0       -      -      0    704      0  85,0M
    c6t50000398E801CBBAd0       -      -      0    704      0  85,0M
  mirror                    1,71G   830G      0    828      0  95,1M
    c7t50000398E801CF62d0       -      -      0    776      0  95,1M
    c8t50000398E801D0EEd0       -      -      0    776      0  95,1M
  mirror                    1,56G   830G      0    876      0   101M
    c10t50000398E801D136d0      -      -      0    828      0   102M
    c9t50000398E801CEFAd0       -      -      0    827      0   101M
logs                            -      -      -      -      -      -
  c11t1d0p1                  156M  9,85G      0  2,05K      0   154M
cache                           -      -      -      -      -      -
  c11t1d0p2                 2,77G   203G      0     42      0  5,32M
--------------------------  -----  -----  -----  -----  -----  -----

                               capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
pool1                       10,8G  4,05T      0  4,86K      0   532M
  mirror                    2,10G   830G      0    771      0  90,5M
    c1t50000398E801CBB6d0       -      -      0    741      0  90,5M
    c2t50000398E801D14Ad0       -      -      0    742      0  90,5M
  mirror                    2,06G   830G      0    735      0  85,7M
    c3t50000398E801CE5Ed0       -      -      0    703      0  85,8M
    c4t50000398E801CE46d0       -      -      0    703      0  85,8M
  mirror                    2,20G   830G      0    801      0  93,2M
    c5t50000398E801CE5Ad0       -      -      0    769      0  93,2M
    c6t50000398E801CBBAd0       -      -      0    769      0  93,2M
  mirror                    2,29G   830G      0    947      0   116M
    c7t50000398E801CF62d0       -      -      0    941      0   116M
    c8t50000398E801D0EEd0       -      -      0    942      0   116M
  mirror                    2,15G   830G      0    975      0   120M
    c10t50000398E801D136d0      -      -      0    968      0   120M
    c9t50000398E801CEFAd0       -      -      0    969      0   120M
logs                            -      -      -      -      -      -
  c11t1d0p1                  628M  9,39G      0    750      0  25,9M
cache                           -      -      -      -      -      -
  c11t1d0p2                 2,84G   203G      0    266      0  32,7M
--------------------------  -----  -----  -----  -----  -----  -----

                               capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
pool1                       10,3G  4,05T      0  6,93K      0   806M
  mirror                    1,95G   830G      0  1,13K      0   132M
    c1t50000398E801CBB6d0       -      -      0  1,06K      0   132M
    c2t50000398E801D14Ad0       -      -      0  1,06K      0   132M
  mirror                    1,90G   830G      0  1,04K      0   126M
    c3t50000398E801CE5Ed0       -      -      0  1,01K      0   126M
    c4t50000398E801CE46d0       -      -      0  1,01K      0   126M
  mirror                    2,07G   830G      0  1,10K      0   135M
    c5t50000398E801CE5Ad0       -      -      0  1,07K      0   135M
    c6t50000398E801CBBAd0       -      -      0  1,07K      0   135M
  mirror                    2,23G   830G      0  1,39K      0   172M
    c7t50000398E801CF62d0       -      -      0  1,36K      0   172M
    c8t50000398E801D0EEd0       -      -      0  1,36K      0   172M
  mirror                    2,17G   830G      0  1,44K      0   178M
    c10t50000398E801D136d0      -      -      0  1,41K      0   178M
    c9t50000398E801CEFAd0       -      -      0  1,41K      0   178M
logs                            -      -      -      -      -      -
  c11t1d0p1                  610M  9,40G      0    845      0  64,6M
cache                           -      -      -      -      -      -
  c11t1d0p2                 2,89G   203G      0    264      0  32,6M
--------------------------  -----  -----  -----  -----  -----  -----

                               capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
pool1                       10,3G  4,05T      0  4,07K      0   367M
  mirror                    1,95G   830G      0    391      0  48,6M
    c1t50000398E801CBB6d0       -      -      0    390      0  48,6M
    c2t50000398E801D14Ad0       -      -      0    390      0  48,6M
  mirror                    1,90G   830G      0    366      0  45,5M
    c3t50000398E801CE5Ed0       -      -      0    365      0  45,5M
    c4t50000398E801CE46d0       -      -      0    366      0  45,5M
  mirror                    2,07G   830G      0    400      0  49,6M
    c5t50000398E801CE5Ad0       -      -      0    398      0  49,6M
    c6t50000398E801CBBAd0       -      -      0    399      0  49,6M
  mirror                    2,23G   830G      0    522      0  65,0M
    c7t50000398E801CF62d0       -      -      0    522      0  65,0M
    c8t50000398E801D0EEd0       -      -      0    521      0  65,0M
  mirror                    2,17G   830G      0    509      0  63,4M
    c10t50000398E801D136d0      -      -      0    508      0  63,4M
    c9t50000398E801CEFAd0       -      -      0    508      0  63,4M
logs                            -      -      -      -      -      -
  c11t1d0p1                  610M  9,40G      0  1,94K      0  95,0M
cache                           -      -      -      -      -      -
  c11t1d0p2                 2,88G   203G      0    392      0  47,3M
--------------------------  -----  -----  -----  -----  -----  -----

and the fio result:
Code:
root@wg02:/mnt/nfs# fio fiobench-nfs.conf
iometer: (g=0): rw=randrw, bs=(R) 512B-64.0KiB, (W) 512B-64.0KiB, (T) 512B-64.0KiB, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [m(1)][100.0%][r=136MiB/s,w=33.7MiB/s][r=31.7k,w=7823 IOPS][eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=12672: Sat Feb 23 10:24:33 2019
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
   read: IOPS=27.5k, BW=166MiB/s (174MB/s)(3283MiB/19810msec)
    slat (usec): min=7, max=2263, avg=12.54, stdev= 6.54
    clat (usec): min=92, max=1140.0k, avg=1821.48, stdev=11540.11
     lat (usec): min=107, max=1140.0k, avg=1835.91, stdev=11540.12
    clat percentiles (usec):
     |  1.00th=[  1467],  5.00th=[  1500], 10.00th=[  1516], 20.00th=[  1549],
     | 30.00th=[  1565], 40.00th=[  1582], 50.00th=[  1598], 60.00th=[  1614],
     | 70.00th=[  1631], 80.00th=[  1647], 90.00th=[  1680], 95.00th=[  1729],
     | 99.00th=[  1860], 99.50th=[  2008], 99.90th=[  6194], 99.95th=[ 36963],
     | 99.99th=[574620]
   bw (  KiB/s): min=  242, max=325477, per=100.00%, avg=170189.23, stdev=78616.05, samples=39
   iops        : min=   36, max=33120, avg=27457.90, stdev=9403.14, samples=39
  write: IOPS=6906, BW=41.1MiB/s (43.0MB/s)(813MiB/19810msec)
    slat (usec): min=7, max=529, avg=14.08, stdev= 5.78
    clat (usec): min=149, max=1140.4k, avg=1896.97, stdev=13880.68
     lat (usec): min=163, max=1140.4k, avg=1912.93, stdev=13880.67
    clat percentiles (usec):
     |  1.00th=[  1467],  5.00th=[  1500], 10.00th=[  1516], 20.00th=[  1549],
     | 30.00th=[  1565], 40.00th=[  1582], 50.00th=[  1598], 60.00th=[  1614],
     | 70.00th=[  1631], 80.00th=[  1663], 90.00th=[  1696], 95.00th=[  1729],
     | 99.00th=[  1860], 99.50th=[  2057], 99.90th=[ 14091], 99.95th=[100140],
     | 99.99th=[692061]
   bw (  KiB/s): min=  159, max=81244, per=100.00%, avg=42174.72, stdev=19085.69, samples=39
   iops        : min=   18, max= 8236, avg=6888.97, stdev=2342.10, samples=39
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=99.46%, 4=0.38%, 10=0.04%, 20=0.02%, 50=0.03%
  lat (msec)   : 100=0.01%, 250=0.02%, 500=0.01%, 750=0.01%, 2000=0.01%
  cpu          : usr=33.40%, sys=53.31%, ctx=7679, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwt: total=545380,136820,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=3283MiB (3442MB), run=19810-19810msec
  WRITE: bw=41.1MiB/s (43.0MB/s), 41.1MiB/s-41.1MiB/s (43.0MB/s-43.0MB/s), io=813MiB (853MB), run=19810-19810msec

Disk stats (read/write):
  sdb: ios=540139/135518, merge=0/3, ticks=161292/53468, in_queue=96320, util=90.42%
 
...and here is iSCSI, with the default cache mode (no cache) and WriteCache enabled on the storage plugin.

zpool iostat -v pool1 3
Code:
                               capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
pool1                       8,43G  4,05T      0  5,66K      0  55,5M
  mirror                    1,58G   830G      0    881      0  8,77M
    c1t50000398E801CBB6d0       -      -      0    100      0  8,77M
    c2t50000398E801D14Ad0       -      -      0     97      0  8,44M
  mirror                    1,55G   830G      0    805      0  8,13M
    c3t50000398E801CE5Ed0       -      -      0     91      0  7,98M
    c4t50000398E801CE46d0       -      -      0     94      0  8,13M
  mirror                    1,71G   830G      0    870      0  8,97M
    c5t50000398E801CE5Ad0       -      -      0    104      0  8,90M
    c6t50000398E801CBBAd0       -      -      0    104      0  8,90M
  mirror                    1,81G   830G      0    767      0  7,95M
    c7t50000398E801CF62d0       -      -      0     90      0  7,90M
    c8t50000398E801D0EEd0       -      -      0     88      0  7,94M
  mirror                    1,78G   830G      0    885      0  8,83M
    c10t50000398E801D136d0      -      -      0     92      0  8,68M
    c9t50000398E801CEFAd0       -      -      0     93      0  8,83M
logs                            -      -      -      -      -      -
  c11t1d0p1                  601M  9,41G      0  1,54K      0  12,9M
cache                           -      -      -      -      -      -
  c11t1d0p2                 5,35G   201G      0     35      0  2,98M
--------------------------  -----  -----  -----  -----  -----  -----

                               capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
pool1                       8,43G  4,05T      0  6,13K      0  60,9M
  mirror                    1,57G   830G      0    733      0  8,80M
    c1t50000398E801CBB6d0       -      -      0    104      0  8,80M
    c2t50000398E801D14Ad0       -      -      0    105      0  8,80M
  mirror                    1,55G   830G      0    900      0  9,53M
    c3t50000398E801CE5Ed0       -      -      0    107      0  9,53M
    c4t50000398E801CE46d0       -      -      0    112      0  9,53M
  mirror                    1,70G   830G      0    702      0  7,83M
    c5t50000398E801CE5Ad0       -      -      0     83      0  7,83M
    c6t50000398E801CBBAd0       -      -      0     84      0  7,83M
  mirror                    1,81G   830G      0    992      0  9,64M
    c7t50000398E801CF62d0       -      -      0     93      0  9,65M
    c8t50000398E801D0EEd0       -      -      0     90      0  9,65M
  mirror                    1,80G   830G      0  1,34K      0  12,2M
    c10t50000398E801D136d0      -      -      0    124      0  12,2M
    c9t50000398E801CEFAd0       -      -      0    122      0  12,2M
logs                            -      -      -      -      -      -
  c11t1d0p1                  596M  9,42G      0  1,54K      0  12,9M
cache                           -      -      -      -      -      -
  c11t1d0p2                 5,34G   201G      0     40      0  4,50M
--------------------------  -----  -----  -----  -----  -----  -----

                               capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
pool1                       8,43G  4,05T      0  1,28K      0  10,4M
  mirror                    1,57G   830G      0      0      0      0
    c1t50000398E801CBB6d0       -      -      0      0      0      0
    c2t50000398E801D14Ad0       -      -      0      0      0      0
  mirror                    1,55G   830G      0      0      0      0
    c3t50000398E801CE5Ed0       -      -      0      0      0      0
    c4t50000398E801CE46d0       -      -      0      0      0      0
  mirror                    1,70G   830G      0      0      0      0
    c5t50000398E801CE5Ad0       -      -      0      0      0      0
    c6t50000398E801CBBAd0       -      -      0      0      0      0
  mirror                    1,81G   830G      0      0      0      0
    c7t50000398E801CF62d0       -      -      0      0      0      0
    c8t50000398E801D0EEd0       -      -      0      0      0      0
  mirror                    1,80G   830G      0      0      0      0
    c10t50000398E801D136d0      -      -      0      0      0      0
    c9t50000398E801CEFAd0       -      -      0      0      0      0
logs                            -      -      -      -      -      -
  c11t1d0p1                  596M  9,42G      0  1,28K      0  10,4M
cache                           -      -      -      -      -      -
  c11t1d0p2                 5,32G   201G      0     45      0  5,50M
--------------------------  -----  -----  -----  -----  -----  -----

                               capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
pool1                       8,41G  4,05T      0  5,79K      0  57,6M
  mirror                    1,57G   830G      0    959      0  9,77M
    c1t50000398E801CBB6d0       -      -      0    113      0  9,77M
    c2t50000398E801D14Ad0       -      -      0    113      0  9,77M
  mirror                    1,55G   830G      0    800      0  8,54M
    c3t50000398E801CE5Ed0       -      -      0     95      0  8,55M
    c4t50000398E801CE46d0       -      -      0     94      0  8,55M
  mirror                    1,70G   830G      0    872      0  9,19M
    c5t50000398E801CE5Ad0       -      -      0    105      0  9,20M
    c6t50000398E801CBBAd0       -      -      0    100      0  9,20M
  mirror                    1,80G   830G      0    828      0  8,68M
    c7t50000398E801CF62d0       -      -      0    104      0  8,68M
    c8t50000398E801D0EEd0       -      -      0    105      0  8,68M
  mirror                    1,80G   830G      0    906      0  9,29M
    c10t50000398E801D136d0      -      -      0    107      0  9,29M
    c9t50000398E801CEFAd0       -      -      0    106      0  9,29M
logs                            -      -      -      -      -      -
  c11t1d0p1                  557M  9,46G      0  1,53K      0  12,2M
cache                           -      -      -      -      -      -
  c11t1d0p2                 5,28G   201G      0     14      0  1,65M
--------------------------  -----  -----  -----  -----  -----  -----

and fio:

Code:
iometer: (g=0): rw=randrw, bs=(R) 512B-64.0KiB, (W) 512B-64.0KiB, (T) 512B-64.0KiB, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [m(1)][100.0%][r=25.7MiB/s,w=6097KiB/s][r=6214,w=1543 IOPS][eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=12676: Sat Feb 23 10:30:34 2019
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
   read: IOPS=6513, BW=39.2MiB/s (41.1MB/s)(3283MiB/83729msec)
    slat (usec): min=7, max=7529, avg=15.22, stdev=12.70
    clat (usec): min=199, max=432535, avg=7620.56, stdev=19786.35
     lat (usec): min=220, max=432552, avg=7637.61, stdev=19786.32
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    4], 40.00th=[    4], 50.00th=[    4], 60.00th=[    4],
     | 70.00th=[    4], 80.00th=[    4], 90.00th=[   13], 95.00th=[   31],
     | 99.00th=[   59], 99.50th=[  128], 99.90th=[  326], 99.95th=[  376],
     | 99.99th=[  414]
   bw (  KiB/s): min= 9560, max=79322, per=100.00%, avg=40186.34, stdev=13864.40, samples=167
   iops        : min= 2226, max=10300, avg=6515.39, stdev=1763.80, samples=167
  write: IOPS=1634, BW=9946KiB/s (10.2MB/s)(813MiB/83729msec)
    slat (usec): min=7, max=2207, avg=17.09, stdev=17.02
    clat (usec): min=325, max=432597, avg=8668.01, stdev=21657.72
     lat (usec): min=345, max=432615, avg=8686.96, stdev=21657.72
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    4], 40.00th=[    4], 50.00th=[    4], 60.00th=[    4],
     | 70.00th=[    4], 80.00th=[    5], 90.00th=[   19], 95.00th=[   35],
     | 99.00th=[   68], 99.50th=[  142], 99.90th=[  342], 99.95th=[  384],
     | 99.99th=[  414]
   bw (  KiB/s): min= 2542, max=20795, per=100.00%, avg=9957.87, stdev=3395.68, samples=167
   iops        : min=  576, max= 2563, avg=1634.59, stdev=441.39, samples=167
  lat (usec)   : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.24%, 4=80.06%, 10=7.98%, 20=3.85%, 50=6.31%
  lat (msec)   : 100=0.92%, 250=0.42%, 500=0.19%
  cpu          : usr=9.07%, sys=18.04%, ctx=509196, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwt: total=545380,136820,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=3283MiB (3442MB), run=83729-83729msec
  WRITE: bw=9946KiB/s (10.2MB/s), 9946KiB/s-9946KiB/s (10.2MB/s-10.2MB/s), io=813MiB (853MB), run=83729-83729msec

Disk stats (read/write):
  sdc: ios=544372/136594, merge=0/16, ticks=4148000/1184668, in_queue=4782944, util=90.92%


I have tried different cache modes and record sizes on iSCSI, but it is still slow...
 
Your storage box seems to be working perfectly. The only thing I can think of is something network-related, since iSCSI performance is very picky when it comes to network configuration, especially MTU. What MTU are you using on the network handling iSCSI traffic?
I personally use InfiniBand, so on my iSCSI network the MTU is 65520; since your network is Ethernet-based, I would suggest you activate jumbo frames on the iSCSI network.
NFS: designed to be used over Ethernet, so frame size is unimportant because all file operations happen on the file server.
iSCSI: designed to be used over FC, so frame size matters because all file operations happen on the client.
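
To set and verify jumbo frames end to end, something like the following could be used (interface names are only examples, and on OmniOS the interface may need to be unplumbed before the MTU can be changed):
Code:
# OmniOS side
dladm set-linkprop -p mtu=9000 ixgbe0
dladm show-linkprop -p mtu ixgbe0

# Proxmox side
ip link set dev eno1 mtu 9000
# verify the whole path passes 9000-byte frames without fragmentation
ping -M do -s 8972 10.0.1.130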
 
Yes, writeback cache is enabled:
Code:
root@san1-vos:~#  stmfadm list-lu -v
LU Name: 600144F6450E380B9B6D80F4BC4EE0DF
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/pool1/vm-405-disk-0
    View Entry Count  : 1
    Data File         : /dev/zvol/rdsk/pool1/vm-405-disk-0
    Meta File         : not set
    Size              : 26843545600
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN     
    Product ID        : COMSTAR         
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active
LU Name: 600144F81834E26B479D7C414BE87077
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/pool1/vm-104-disk-0
    View Entry Count  : 1
    Data File         : /dev/zvol/rdsk/pool1/vm-104-disk-0
    Meta File         : not set
    Size              : 16106127360
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN     
    Product ID        : COMSTAR         
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active
root@san1-vos:~#
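
For reference, I think the writeback cache setting could also be toggled per LU with stmfadm, e.g. (LU name from the listing above):
Code:
root@san1-vos:~# stmfadm modify-lu -p wcd=true 600144F81834E26B479D7C414BE87077   # disable writeback cache
root@san1-vos:~# stmfadm modify-lu -p wcd=false 600144F81834E26B479D7C414BE87077  # enable it again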


MTU is 1500. After changing it to 9000 (Proxmox node, switch, OmniOS), NFS is maybe a little better, but iSCSI is a little worse...

On Monday I will try bypassing the switch (D-Link DXS-1210-10TS) and test a direct connection between the Proxmox node and OmniOS.
 
With Proxmox directly connected to OmniOS, the result is a little better, but there is still a huge difference between NFS and iSCSI.
I will download and test the OmniOS LTS version.
 
Hmm... on the LTS version (OmniOS 5.11 omnios-r151022-5e982daae6) my HBA does not work... No driver?
Code:
scanpci output (searching for the HBA; graphics and Intel devices omitted)
======================================================================


pci bus  0x0002 cardnum 0x00 function 0x00: vendor 0x1000 device 0x00c4
 LSI Logic / Symbios Logic SAS3224 PCI-Express Fusion-MPT SAS-3
 CardVendor 0x1000 card 0x3190 (LSI Logic / Symbios Logic, Card unknown)
  STATUS    0x0010  COMMAND 0x0007
  CLASS     0x01 0x07 0x00  REVISION 0x01
  BIST      0x00  HEADER 0x00  LATENCY 0x00  CACHE 0x10
  BASE0     0x0000e000 SIZE 256  I/O
  BASE1     0xfb200000 SIZE 65536  MEM
  BASEROM   0x00000000  addr 0x00000000
  MAX_LAT   0x00  MIN_GNT 0x00  INT_PIN 0x01  INT_LINE 0x0b
Downloading OmniOS Bloody...
 
Hi Mir,
thanks for your interest and for asking about my HBA.

Now I have installed OmniOS Bloody. The results are similar: NFS is OK, iSCSI is still slow.
So I am searching for info on iSCSI network tuning...
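
One thing I am looking at, for example, is the TCP buffer sizes on the OmniOS side (values are only a guess, not verified yet):
Code:
root@san1-vos:~# ipadm set-prop -p max_buf=4194304 tcp
root@san1-vos:~# ipadm set-prop -p send_buf=1048576 tcp
root@san1-vos:~# ipadm set-prop -p recv_buf=1048576 tcp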
 
