iSCSI SAN Presented as NFS Using FreeNAS

Chris.P

Mar 25, 2016
I'm the Systems/Infrastructure Manager for a medium-sized software consulting/development company and have been using Proxmox successfully for several years to host Windows/Linux VMs in our development environment: four hosts backed by two FreeNAS servers (ZFS RAID 10), presented to the Proxmox hosts over NFS.

Our production environment consists of three VMware ESXi hosts backed by an EqualLogic iSCSI SAN. I really want to move production off of VMware to Proxmox, but I need snapshots and qcow2, which iSCSI/LVM doesn't support.

I've done some proof-of-concept experimentation in a test environment by installing the iSCSI initiator in FreeNAS and mapping the iSCSI LUN so that FreeNAS sees it as a local drive. From there I formatted the drive with ZFS and shared it to Proxmox via NFS. I even created two lightweight VMs in Proxmox, booted them, and live-migrated them. FreeNAS essentially acts as an NFS gateway.
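For anyone wanting to reproduce the chain, it looks roughly like this (a minimal sketch - portal, target, pool name, and IPs are placeholders, FreeNAS normally drives all of this through its GUI, and the FreeBSD 10-style initiator (iscsictl) is assumed):
Code:
# On the FreeNAS box: attach the SAN LUN, pool it, export it
iscsictl -A -p 10.0.0.10 -t iqn.2001-05.com.equallogic:target0
zpool create tank da1                 # the LUN shows up as a local disk, e.g. da1
zfs create tank/proxmox
zfs set sharenfs=on tank/proxmox      # share over NFS

# On a Proxmox host: register the NFS export as qcow2-capable storage
pvesm add nfs freenas-gw --server 10.0.0.20 --export /tank/proxmox --content images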

While this proof of concept works fine in my test environment (all virtual), has anyone else out there experimented with this? I know many of you are cringing at the thought, but I'm really trying to brainstorm ways to mitigate the iSCSI/LVM limitations within Proxmox.

Thoughts and ideas?
 
You could try replacing your FreeNAS box with a Solaris-derived box - I recommend OmniOS. This gives you the full ZFS feature set through COMSTAR iSCSI: snapshots and (linked) clones. The disk format is raw, which gives a lot more IOPS than qcow2.
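Roughly what the OmniOS side and the matching Proxmox storage definition look like (a sketch only - pool, portal, and target are placeholders, and with the ZFS over iSCSI storage type Proxmox creates the zvols and LUs itself over SSH):
Code:
# On OmniOS: enable COMSTAR and create an iSCSI target
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target

# /etc/pve/storage.cfg on Proxmox (ZFS over iSCSI, comstar provider):
zfs: omnios
        portal 10.0.0.30
        target iqn.2010-08.org.illumos:02:mytarget
        pool tank
        iscsiprovider comstar
        content images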
 
Thanks mir,
That's a good recommendation - I'll do some more research into OmniOS/COMSTAR iSCSI. I'm really wondering what kind of IOPS performance could be attained with this extra layer between Proxmox and the storage. Thanks for the feedback!

Pros:
- full ZFS feature set
- snapshots
- (linked) clones
- raw format (better IOPS)

Cons:
- raw only, no thin-provisioned qcow2 virtual disks (snapshots and clones still work - they happen at the ZFS level, as sketched below)
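Since the VM disks end up as ZFS volumes, snapshot and clone operations happen on the storage box with plain ZFS commands, e.g. (hypothetical dataset names):
Code:
zfs snapshot tank/vm-100-disk-1@pre-upgrade                    # instant snapshot
zfs clone tank/vm-100-disk-1@pre-upgrade tank/vm-101-disk-1    # linked clone
zfs rollback tank/vm-100-disk-1@pre-upgrade                    # revert
With the ZFS over iSCSI storage type, Proxmox can issue these itself when you snapshot or clone a VM.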
 
I'm really wondering what kind of IOPS performance could be attained with this extra layer between Proxmox and the storage?
Storage server: RAID10 (2× mirrored vdevs).
Running fio inside a VM, first with default ext4 mount options and then with nobarrier.

/dev/sdb1 on /media/disk type ext4 (rw,relatime,data=ordered)
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [11087KB/2502KB/0KB /s] [2503/557/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=756: Fri Mar 25 20:33:17 2016
Description : [Emulation of Intel IOmeter File Server Access Pattern]
read : io=3274.5MB, bw=18597KB/s, iops=3051, runt=180281msec
slat (usec): min=5, max=47073, avg=17.75, stdev=92.49
clat (usec): min=54, max=2988.6K, avg=16927.78, stdev=68909.62
lat (usec): min=186, max=2988.6K, avg=16945.94, stdev=68910.00
clat percentiles (usec):
| 1.00th=[ 239], 5.00th=[ 290], 10.00th=[ 330], 20.00th=[ 410],
| 30.00th=[ 532], 40.00th=[ 772], 50.00th=[ 1128], 60.00th=[ 1848],
| 70.00th=[ 3280], 80.00th=[ 6688], 90.00th=[20608], 95.00th=[94720],
| 99.00th=[325632], 99.50th=[452608], 99.90th=[782336], 99.95th=[954368],
| 99.99th=[1744896]
bw (KB /s): min= 749, max=55661, per=100.00%, avg=18761.82, stdev=6327.63
write: io=841688KB, bw=4668.8KB/s, iops=763, runt=180281msec
slat (usec): min=7, max=460541, avg=24.44, stdev=1253.71
clat (usec): min=71, max=2096.9K, avg=16031.26, stdev=64257.99
lat (usec): min=206, max=2096.9K, avg=16056.17, stdev=64269.84
clat percentiles (usec):
| 1.00th=[ 262], 5.00th=[ 318], 10.00th=[ 366], 20.00th=[ 446],
| 30.00th=[ 548], 40.00th=[ 756], 50.00th=[ 1096], 60.00th=[ 1736],
| 70.00th=[ 3152], 80.00th=[ 6624], 90.00th=[20864], 95.00th=[87552],
| 99.00th=[305152], 99.50th=[436224], 99.90th=[749568], 99.95th=[905216],
| 99.99th=[1449984]
bw (KB /s): min= 198, max=13990, per=100.00%, avg=4710.94, stdev=1673.18
lat (usec) : 100=0.01%, 250=1.36%, 500=26.24%, 750=11.80%, 1000=7.69%
lat (msec) : 2=14.42%, 4=11.62%, 10=11.67%, 20=5.04%, 50=3.12%
lat (msec) : 100=2.33%, 250=3.18%, 500=1.16%, 750=0.27%, 1000=0.07%
lat (msec) : 2000=0.04%, >=2000=0.01%
cpu : usr=3.45%, sys=12.94%, ctx=532301, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: io=3274.5MB, aggrb=18596KB/s, minb=18596KB/s, maxb=18596KB/s, mint=180281msec, maxt=180281msec
WRITE: io=841688KB, aggrb=4668KB/s, minb=4668KB/s, maxb=4668KB/s, mint=180281msec, maxt=180281msec

Disk stats (read/write):
sdb: ios=549723/137605, merge=1/36, ticks=9265908/2204204, in_queue=11639448, util=100.00%

Same test, remounted with nobarrier:

/dev/sdb1 on /media/disk type ext4 (rw,relatime,nobarrier,data=ordered)
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [46964KB/11390KB/0KB /s] [11.4K/2673/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=775: Fri Mar 25 20:39:44 2016
Description : [Emulation of Intel IOmeter File Server Access Pattern]
read : io=3274.5MB, bw=30527KB/s, iops=5009, runt=109826msec
slat (usec): min=3, max=10866, avg=15.29, stdev=48.77
clat (usec): min=1, max=4693.4K, avg=10057.18, stdev=43073.08
lat (usec): min=188, max=4693.5K, avg=10072.88, stdev=43073.53
clat percentiles (usec):
| 1.00th=[ 258], 5.00th=[ 318], 10.00th=[ 370], 20.00th=[ 470],
| 30.00th=[ 612], 40.00th=[ 860], 50.00th=[ 1384], 60.00th=[ 2192],
| 70.00th=[ 2928], 80.00th=[ 3632], 90.00th=[15680], 95.00th=[45312],
| 99.00th=[177152], 99.50th=[284672], 99.90th=[577536], 99.95th=[733184],
| 99.99th=[946176]
bw (KB /s): min= 180, max=71983, per=100.00%, avg=30940.25, stdev=15342.83
write: io=841688KB, bw=7663.9KB/s, iops=1253, runt=109826msec
slat (usec): min=4, max=39061, avg=18.22, stdev=114.55
clat (usec): min=1, max=4887.4K, avg=10772.17, stdev=50725.49
lat (usec): min=209, max=4887.5K, avg=10790.82, stdev=50725.96
clat percentiles (usec):
| 1.00th=[ 278], 5.00th=[ 346], 10.00th=[ 402], 20.00th=[ 510],
| 30.00th=[ 660], 40.00th=[ 916], 50.00th=[ 1464], 60.00th=[ 2352],
| 70.00th=[ 3056], 80.00th=[ 3824], 90.00th=[15808], 95.00th=[50432],
| 99.00th=[185344], 99.50th=[292864], 99.90th=[626688], 99.95th=[815104],
| 99.99th=[1482752]
bw (KB /s): min= 4, max=19182, per=100.00%, avg=7843.00, stdev=3809.88
lat (usec) : 2=0.01%, 4=0.01%, 50=0.01%, 100=0.01%, 250=0.68%
lat (usec) : 500=21.03%, 750=14.17%, 1000=7.29%
lat (msec) : 2=14.25%, 4=24.38%, 10=5.62%, 20=3.93%, 50=3.93%
lat (msec) : 100=2.42%, 250=1.67%, 500=0.47%, 750=0.10%, 1000=0.04%
lat (msec) : 2000=0.01%, >=2000=0.01%
cpu : usr=5.23%, sys=18.45%, ctx=504187, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: io=3274.5MB, aggrb=30526KB/s, minb=30526KB/s, maxb=30526KB/s, mint=109826msec, maxt=109826msec
WRITE: io=841688KB, aggrb=7663KB/s, minb=7663KB/s, maxb=7663KB/s, mint=109826msec, maxt=109826msec

Disk stats (read/write):
sdb: ios=550055/137665, merge=13/24, ticks=5510488/1558668, in_queue=7071272, util=100.00%
 
Just tried using XFS as the filesystem inside the VM.

/dev/sdb1 on /media/disk type xfs (rw,relatime,attr2,inode64,noquota)
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [25225KB/5864KB/0KB /s] [5771/1361/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=11519: Fri Mar 25 21:28:41 2016
Description : [Emulation of Intel IOmeter File Server Access Pattern]
read : io=3274.5MB, bw=38832KB/s, iops=6372, runt= 86337msec
slat (usec): min=5, max=2672, avg= 9.40, stdev=11.38
clat (msec): min=1, max=429, avg= 7.91, stdev= 8.02
lat (msec): min=1, max=429, avg= 7.92, stdev= 8.03
clat percentiles (msec):
| 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6],
| 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 8],
| 70.00th=[ 8], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 14],
| 99.00th=[ 30], 99.50th=[ 42], 99.90th=[ 90], 99.95th=[ 157],
| 99.99th=[ 429]
bw (KB /s): min=13007, max=69266, per=100.00%, avg=38932.33, stdev=11154.64
write: io=841688KB, bw=9748.9KB/s, iops=1594, runt= 86337msec
slat (usec): min=6, max=422714, avg=573.73, stdev=1850.97
clat (msec): min=1, max=428, avg= 7.90, stdev= 7.57
lat (msec): min=1, max=429, avg= 8.47, stdev= 7.96
clat percentiles (msec):
| 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6],
| 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 8],
| 70.00th=[ 8], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 14],
| 99.00th=[ 31], 99.50th=[ 45], 99.90th=[ 95], 99.95th=[ 153],
| 99.99th=[ 231]
bw (KB /s): min= 3008, max=17045, per=100.00%, avg=9775.23, stdev=2815.89
lat (msec) : 2=0.01%, 4=1.32%, 10=89.96%, 20=5.71%, 50=2.63%
lat (msec) : 100=0.29%, 250=0.08%, 500=0.01%
cpu : usr=3.53%, sys=11.79%, ctx=120256, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: io=3274.5MB, aggrb=38831KB/s, minb=38831KB/s, maxb=38831KB/s, mint=86337msec, maxt=86337msec
WRITE: io=841688KB, aggrb=9748KB/s, minb=9748KB/s, maxb=9748KB/s, mint=86337msec, maxt=86337msec

Disk stats (read/write):
sdb: ios=549888/137583, merge=0/3, ticks=375584/77740, in_queue=452892, util=94.83%
 
Hello Mir,
Which command did you use to do the testing? (I could not find iometer in the Debian packages.)
Code:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
# IOMeter defines the server loads as the following:
# iodepth=1    Linear
# iodepth=4    Very Light
# iodepth=8    Light
# iodepth=64    Moderate
# iodepth=256    Heavy
iodepth=64
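Note that the "iometer" in the output above is just the job name from this file - the tool is fio itself. Assuming the job file is saved as iometer.fio, the run is simply:
Code:
fio iometer.fio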
 
fio tests using Mir's fio config [see above]
Hardware: OmniOS + napp-it running on a Supermicro X9SCL-F, 28GB memory, LSI SAS2008 IT-mode HBA
ZFS: raidz1 of 5× Intel SSD Pro 2500 Series 480GB, plus a ZIL on an Intel SSD S3700
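For reference, a pool with that layout would be created roughly like this (a sketch - the illumos-style device names are placeholders):
Code:
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 log c1t5d0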

LXC:
Code:
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [252.7MB/65052KB/0KB /s] [59.3K/14.8K/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=2513: Sat Apr  9 15:24:14 2016
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3274.5MB, bw=303130KB/s, iops=49742, runt= 11060msec
  slat (usec): min=2, max=588, avg= 7.45, stdev=18.05
  clat (usec): min=160, max=252974, avg=885.66, stdev=2765.50
  lat (usec): min=167, max=252977, avg=893.38, stdev=2765.43
  clat percentiles (usec):
  |  1.00th=[  334],  5.00th=[  406], 10.00th=[  482], 20.00th=[  580],
  | 30.00th=[  652], 40.00th=[  716], 50.00th=[  772], 60.00th=[  836],
  | 70.00th=[  892], 80.00th=[  980], 90.00th=[ 1128], 95.00th=[ 1304],
  | 99.00th=[ 1832], 99.50th=[ 2320], 99.90th=[20608], 99.95th=[38656],
  | 99.99th=[136192]
  bw (KB  /s): min=118739, max=444443, per=100.00%, avg=305665.86, stdev=75220.28
  write: io=841688KB, bw=76102KB/s, iops=12445, runt= 11060msec
  slat (usec): min=3, max=1020, avg= 9.17, stdev=20.35
  clat (usec): min=395, max=264907, avg=1552.83, stdev=5218.00
  lat (usec): min=402, max=264915, avg=1562.29, stdev=5217.88
  clat percentiles (usec):
  |  1.00th=[  644],  5.00th=[  804], 10.00th=[  908], 20.00th=[ 1032],
  | 30.00th=[ 1128], 40.00th=[ 1208], 50.00th=[ 1288], 60.00th=[ 1384],
  | 70.00th=[ 1480], 80.00th=[ 1624], 90.00th=[ 1864], 95.00th=[ 2128],
  | 99.00th=[ 2960], 99.50th=[ 4320], 99.90th=[61696], 99.95th=[121344],
  | 99.99th=[252928]
  bw (KB  /s): min=28575, max=111024, per=100.00%, avg=76804.24, stdev=18922.29
  lat (usec) : 250=0.01%, 500=9.42%, 750=27.66%, 1000=31.51%
  lat (msec) : 2=29.46%, 4=1.58%, 10=0.12%, 20=0.06%, 50=0.12%
  lat (msec) : 100=0.02%, 250=0.03%, 500=0.01%
  cpu  : usr=12.19%, sys=56.86%, ctx=17722, majf=0, minf=8
  IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
  submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
  issued  : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
  latency  : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  READ: io=3274.5MB, aggrb=303130KB/s, minb=303130KB/s, maxb=303130KB/s, mint=11060msec, maxt=11060msec
  WRITE: io=841688KB, aggrb=76101KB/s, minb=76101KB/s, maxb=76101KB/s, mint=11060msec, maxt=11060msec

Disk stats (read/write):
  dm-2: ios=543761/136160, merge=0/0, ticks=349496/178408, in_queue=527996, util=99.14%, aggrios=547260/137478, aggrmerge=2955/219, aggrticks=351272/179892, aggrin_queue=530992, aggrutil=98.92%
  sdk: ios=547260/137478, merge=2955/219, ticks=351272/179892, in_queue=530992, util=98.92%

KVM (Debian jessie):
/dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)

Code:
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [13212KB/3056KB/0KB /s] [3031/705/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=1379: Sat Apr  9 15:35:36 2016
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3274.5MB, bw=13477KB/s, iops=2211, runt=248763msec
  slat (usec): min=1, max=245460, avg=14.67, stdev=589.03
  clat (usec): min=230, max=354068, avg=23064.99, stdev=17297.36
  lat (usec): min=641, max=354079, avg=23079.98, stdev=17304.66
  clat percentiles (usec):
  |  1.00th=[ 1128],  5.00th=[ 2800], 10.00th=[ 4960], 20.00th=[ 9152],
  | 30.00th=[13504], 40.00th=[17792], 50.00th=[22144], 60.00th=[26496],
  | 70.00th=[30848], 80.00th=[35072], 90.00th=[40192], 95.00th=[43264],
  | 99.00th=[55040], 99.50th=[64256], 99.90th=[254976], 99.95th=[268288],
  | 99.99th=[280576]
  bw (KB  /s): min= 5063, max=24580, per=100.00%, avg=13489.59, stdev=3681.45
  write: io=841688KB, bw=3383.6KB/s, iops=553, runt=248763msec
  slat (usec): min=3, max=255205, avg=35.01, stdev=1964.99
  clat (usec): min=903, max=346672, avg=23364.47, stdev=17357.57
  lat (usec): min=921, max=346689, avg=23399.84, stdev=17494.80
  clat percentiles (usec):
  |  1.00th=[ 1416],  5.00th=[ 3088], 10.00th=[ 5216], 20.00th=[ 9408],
  | 30.00th=[13760], 40.00th=[18048], 50.00th=[22400], 60.00th=[26752],
  | 70.00th=[31104], 80.00th=[35584], 90.00th=[40192], 95.00th=[43776],
  | 99.00th=[56064], 99.50th=[65280], 99.90th=[254976], 99.95th=[264192],
  | 99.99th=[284672]
  bw (KB  /s): min= 1075, max= 6659, per=100.00%, avg=3386.88, stdev=972.88
  lat (usec) : 250=0.01%, 500=0.01%, 750=0.05%, 1000=0.49%
  lat (msec) : 2=2.42%, 4=4.71%, 10=14.07%, 20=23.26%, 50=53.43%
  lat (msec) : 100=1.32%, 250=0.13%, 500=0.12%
  cpu  : usr=10.42%, sys=23.97%, ctx=673539, majf=0, minf=8
  IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
  submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
  issued  : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
  latency  : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  READ: io=3274.5MB, aggrb=13477KB/s, minb=13477KB/s, maxb=13477KB/s, mint=248763msec, maxt=248763msec
  WRITE: io=841688KB, aggrb=3383KB/s, minb=3383KB/s, maxb=3383KB/s, mint=248763msec, maxt=248763msec

Disk stats (read/write):
  sda: ios=539298/137278, merge=11458/836, ticks=12272328/3311076, in_queue=15582852, util=100.00%
 
You should try adding the mount option nobarrier.

What is your cache setting for the disk exposed to this KVM in Proxmox?
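You can check the current setting from the Proxmox host (assuming VM ID 100; no cache= on the disk line means the default):
Code:
qm config 100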

cache is set to writeback .

Regarding the mount option, I have this in fstab:
rw,relatime,nobarrier,data=ordered errors=remount-ro 0 1

However, mount | grep sda shows:
/dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)

I do not know why 'nobarrier' is not showing in the mount output. I'll check this tomorrow. Can you check on your system?

Also: how did the LXC vs KVM test look? I'm just learning fio, so I do not know how to evaluate the results.
 
mount |grep sda6
/dev/sda6 on / type ext4 (rw,relatime,nobarrier,data=ordered)
/etc/fstab
UUID=21ae3af6-9327-45b9-b7aa-13eb2a27c771 / ext4 nobarrier,defaults 0 2

LXC looks excellent, but KVM is a little disappointing. The bad KVM performance is caused by combining nobarrier with cache = writeback. If the disk is running on top of ZFS, you get the best performance using cache = nocache (the default).
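If you want to change it from the CLI instead of the GUI, something like this should work (a sketch, assuming VM 100 with a virtio disk named vm-100-disk-1 on a storage called omnios; in the config file the default mode is spelled cache=none):
Code:
qm set 100 --virtio0 omnios:vm-100-disk-1,cache=none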
 
Here is the result with cache = nocache.
Code:
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [13518KB/3106KB/0KB /s] [3080/712/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=741: Sat Apr  9 18:34:14 2016
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3274.5MB, bw=13636KB/s, iops=2237, runt=245858msec
  slat (usec): min=1, max=33704, avg=10.47, stdev=130.91
  clat (usec): min=103, max=303853, avg=22786.27, stdev=14097.24
  lat (usec): min=664, max=303862, avg=22797.06, stdev=14097.04
  clat percentiles (usec):
  |  1.00th=[ 1128],  5.00th=[ 2800], 10.00th=[ 4960], 20.00th=[ 9280],
  | 30.00th=[13504], 40.00th=[17792], 50.00th=[22144], 60.00th=[26752],
  | 70.00th=[31104], 80.00th=[35584], 90.00th=[40192], 95.00th=[43776],
  | 99.00th=[53504], 99.50th=[60160], 99.90th=[83456], 99.95th=[100864],
  | 99.99th=[252928]
  bw (KB  /s): min= 8046, max=26117, per=100.00%, avg=13642.08, stdev=3431.34
  write: io=841688KB, bw=3423.5KB/s, iops=559, runt=245858msec
  slat (usec): min=3, max=33451, avg=19.57, stdev=452.84
  clat (usec): min=899, max=303705, avg=23155.09, stdev=14109.95
  lat (usec): min=909, max=303718, avg=23174.99, stdev=14116.26
  clat percentiles (usec):
  |  1.00th=[ 1416],  5.00th=[ 3120], 10.00th=[ 5344], 20.00th=[ 9664],
  | 30.00th=[14016], 40.00th=[18304], 50.00th=[22656], 60.00th=[27008],
  | 70.00th=[31360], 80.00th=[35584], 90.00th=[40704], 95.00th=[43776],
  | 99.00th=[54016], 99.50th=[60672], 99.90th=[83456], 99.95th=[102912],
  | 99.99th=[257024]
  bw (KB  /s): min= 1717, max= 7110, per=100.00%, avg=3424.75, stdev=907.68
  lat (usec) : 250=0.01%, 500=0.01%, 750=0.05%, 1000=0.49%
  lat (msec) : 2=2.43%, 4=4.65%, 10=13.93%, 20=23.17%, 50=53.74%
  lat (msec) : 100=1.48%, 250=0.04%, 500=0.01%
  cpu  : usr=11.03%, sys=23.08%, ctx=673492, majf=0, minf=9
  IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
  submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
  issued  : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
  latency  : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  READ: io=3274.5MB, aggrb=13636KB/s, minb=13636KB/s, maxb=13636KB/s, mint=245858msec, maxt=245858msec
  WRITE: io=841688KB, aggrb=3423KB/s, minb=3423KB/s, maxb=3423KB/s, mint=245858msec, maxt=245858msec

Disk stats (read/write):
  sda: ios=539347/137164, merge=11335/717, ticks=12247048/3245308, in_queue=15492124, util=100.00%


PS: note that nobarrier still does not show
mount|grep sda
/dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
 
Here is my fstab (commented lines excluded):
Code:
UUID=016e4c68-b1e3-4275-b4db-e010f5c5650f / rw,relatime,nobarrier,data=ordered errors=remount-ro 0  1
tmpfs  /var/cache/apt/archives  tmpfs size=1G,defaults,noexec,nosuid,nodev,mode=0755 0 0
 
Your fstab line is wrong - it is missing the filesystem type, and the options must be one comma-separated field. It should be:
UUID=016e4c68-b1e3-4275-b4db-e010f5c5650f / ext4 rw,relatime,nobarrier,data=ordered,errors=remount-ro 0 1
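After fixing fstab, the option can also be applied right away without a reboot:
Code:
mount -o remount,nobarrier /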
 
Thanks for catching that.

Code:
# mount|grep sda
/dev/sda1 on / type ext4 (rw,relatime,nobarrier,data=ordered)
and fio test:
Code:
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [13251KB/3173KB/0KB /s] [3093/735/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=695: Sat Apr  9 19:12:16 2016  
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]  
  read : io=3274.5MB, bw=13694KB/s, iops=2247, runt=244825msec  
  slat (usec): min=1, max=34337, avg=10.52, stdev=160.01
  clat (usec): min=96, max=485692, avg=22701.29, stdev=14578.86
  lat (usec): min=660, max=485701, avg=22712.10, stdev=14578.48
  clat percentiles (usec):
  |  1.00th=[ 1112],  5.00th=[ 2800], 10.00th=[ 4960], 20.00th=[ 9152],
  | 30.00th=[13504], 40.00th=[17792], 50.00th=[22144], 60.00th=[26496],
  | 70.00th=[30848], 80.00th=[35072], 90.00th=[40192], 95.00th=[43264],
  | 99.00th=[52480], 99.50th=[60160], 99.90th=[85504], 99.95th=[113152],
  | 99.99th=[296960]
  bw (KB  /s): min= 5004, max=25066, per=100.00%, avg=13713.01, stdev=3454.14
  write: io=841688KB, bw=3437.1KB/s, iops=562, runt=244825msec
  slat (usec): min=3, max=32123, avg=17.92, stdev=409.82
  clat (usec): min=868, max=483835, avg=23016.88, stdev=14330.54
  lat (usec): min=881, max=483848, avg=23035.12, stdev=14335.88
  clat percentiles (usec):
  |  1.00th=[ 1416],  5.00th=[ 3056], 10.00th=[ 5216], 20.00th=[ 9536],
  | 30.00th=[13888], 40.00th=[18304], 50.00th=[22400], 60.00th=[26752],
  | 70.00th=[31104], 80.00th=[35584], 90.00th=[40192], 95.00th=[43776],
  | 99.00th=[53504], 99.50th=[61184], 99.90th=[85504], 99.95th=[98816],
  | 99.99th=[264192]
  bw (KB  /s): min= 1422, max= 6579, per=100.00%, avg=3443.28, stdev=926.27
  lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.04%, 1000=0.51%
  lat (msec) : 2=2.47%, 4=4.68%, 10=13.98%, 20=23.24%, 50=53.71%
  lat (msec) : 100=1.32%, 250=0.04%, 500=0.02%
  cpu  : usr=10.73%, sys=23.07%, ctx=673631, majf=0, minf=8
  IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
  submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
  issued  : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
  latency  : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  READ: io=3274.5MB, aggrb=13693KB/s, minb=13693KB/s, maxb=13693KB/s, mint=244825msec, maxt=244825msec
  WRITE: io=841688KB, aggrb=3437KB/s, minb=3437KB/s, maxb=3437KB/s, mint=244825msec, maxt=244825msec

Disk stats (read/write):
  sda: ios=539489/137206, merge=11367/712, ticks=12209328/3240400, in_queue=15451416, util=100.00%

PS: still having LVM errors; those may be interfering with the test.
On your PVE system, do LVM commands like these work without error? Here, pvs and lvs give a lot of error output.

https://pve.proxmox.com/wiki/Iscsi/tests
 