iSCSI SAN Presented as NFS Using FreeNAS

Discussion in 'Proxmox VE: Installation and configuration' started by Chris.P, Mar 25, 2016.

  1. Chris.P

    Chris.P New Member

    Joined:
    Mar 25, 2016
    Messages:
    5
    Likes Received:
    0
    I'm the Systems/Infrastructure Manager for a medium-size software consulting/development company and have been successfully using Proxmox for several years to host Windows/Linux VMs in our development environment: 4 hosts backed by 2 FreeNAS servers (ZFS RAID 10) presented to the Proxmox hosts over NFS.

    Our production environment consists of 3 VMware ESXi hosts backed by an EqualLogic iSCSI SAN. I really want to move production off VMware to Proxmox, but I need snapshots and qcow2, which iSCSI/LVM doesn't support.

    I've done a proof of concept in a test environment: I installed the iSCSI initiator in FreeNAS, then mapped the iSCSI LUN so that FreeNAS sees it as a local drive. From there I formatted the drive with ZFS and shared it to Proxmox via NFS. I even created 2 lightweight VMs in Proxmox, booted them, and live-migrated them. FreeNAS essentially acts as an NFS gateway.
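    The FreeNAS-side steps can be sketched with the FreeBSD command-line tools (a rough outline only; the portal address, target IQN, and pool name below are placeholders, and on FreeNAS itself this would normally be done through the GUI):

```shell
# Log the FreeBSD iSCSI initiator into the SAN LUN
# (portal address and target IQN are placeholders)
iscsictl -A -p 10.0.0.10 -t iqn.2001-05.com.equallogic:example-lun

# The LUN now appears as a local disk, e.g. /dev/da5;
# build a ZFS pool on it and export a dataset over NFS
zpool create tank /dev/da5
zfs create tank/proxmox
zfs set sharenfs=on tank/proxmox
```

    On the Proxmox side the export is then added as ordinary NFS storage, e.g. `pvesm add nfs freenas-gw --server <freenas-ip> --export /tank/proxmox --content images`.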

    While this proof of concept works fine in my (all-virtual) test environment, has anyone else out there experimented with this? I know many of you are cringing at the thought, but I'm really trying to brainstorm ways to mitigate the iSCSI/LVM limitations within Proxmox.

    Thoughts and ideas?
     
  2. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,480
    Likes Received:
    96
    You could try replacing your FreeNAS box with a Solaris-derived box - I recommend OmniOS. This gives you the full ZFS feature set through COMSTAR iSCSI: snapshots and (linked) clones. The disk format is raw, which gives a lot more IOPS than qcow2.
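    For reference, exposing a zvol through COMSTAR on OmniOS looks roughly like this (a sketch; the pool and zvol names are placeholders):

```shell
# Create a zvol to back a VM disk (names are placeholders)
zfs create -V 32G tank/vm-100-disk-0

# Register the zvol as a SCSI logical unit with COMSTAR
# (create-lu prints the LU GUID needed in the next step)
stmfadm create-lu /dev/zvol/rdsk/tank/vm-100-disk-0

# Make the LU visible to initiators (or restrict with host/target groups)
stmfadm add-view <LU-GUID>

# Create an iSCSI target and enable the target service
itadm create-target
svcadm enable -r svc:/network/iscsi/target:default
```

    Proxmox's "ZFS over iSCSI" storage type with the comstar provider can then create and export such zvols per VM disk automatically.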
     
    Chris.P likes this.
  3. Chris.P

    Chris.P New Member

    Thanks mir,
    That's a good recommendation. I'll do some more research into OmniOS/COMSTAR iSCSI. I'm really wondering what kind of IOPS performance can be attained after adding this extra layer between Proxmox and the storage. Thanks for the feedback!

    Pros:
    - ZFS feature set
    - snapshots
    - (linked) clones
    - raw format (better IOPS)
    Cons:
    - only raw; no thin-provisioned qcow2 virtual disks
     
  4. mir

    mir Well-Known Member
    Proxmox Subscriber

    When you create your storage in Proxmox you can configure it to create thin-provisioned zvols.
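    A thin-provisioned (sparse) zvol can also be created by hand with the -s flag; a small example, assuming a pool named tank:

```shell
# -V sets the logical size; -s skips the space reservation,
# so blocks are only allocated as the guest writes them
zfs create -s -V 32G tank/vm-100-disk-1

# volsize reports 32G while refreservation stays at none
zfs get volsize,refreservation tank/vm-100-disk-1
```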
     
  5. mir

    mir Well-Known Member
    Proxmox Subscriber

    Storage server: RAID10 (2 x mirrored vdevs)
    Running fio inside a VM, under two mount configurations:

    /dev/sdb1 on /media/disk type ext4 (rw,relatime,data=ordered)
    /dev/sdb1 on /media/disk type ext4 (rw,relatime,nobarrier,data=ordered)
     
  6. mir

    mir Well-Known Member
    Proxmox Subscriber

    Just tried using XFS as the filesystem inside the VM.

    /dev/sdb1 on /media/disk type xfs (rw,relatime,attr2,inode64,noquota)
     
  7. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,488
    Likes Received:
    21
    Hello Mir,
    Which command did you use to do the testing? (I could not find iometer in the Debian packages.)
     
  8. mir

    mir Well-Known Member
    Proxmox Subscriber

    Code:
    # This job file tries to mimic the Intel IOMeter File Server Access Pattern
    [global]
    description=Emulation of Intel IOmeter File Server Access Pattern
    
    [iometer]
    bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
    rw=randrw
    rwmixread=80
    direct=1
    size=4g
    ioengine=libaio
    # IOMeter defines the server loads as the following:
    # iodepth=1    Linear
    # iodepth=4    Very Light
    # iodepth=8    Light
    # iodepth=64    Moderate
    # iodepth=256    Heavy
    iodepth=64
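    Assuming the job file above is saved as iometer.fio, it is run like this on Debian, and the headline IOPS appear in the read/write summary lines of the output:

```shell
# fio is in the standard Debian repositories
apt-get install fio

# Run from a directory on the filesystem you want to benchmark
fio iometer.fio
```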
     
  9. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

  10. mir

    mir Well-Known Member
    Proxmox Subscriber

    The config file I presented is to be used with fio.

    Since IOMeter is for Windows, I have never used it on my servers.
     
  11. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    fio tests using mir's fio config [see above].
    Hardware: OmniOS + napp-it running on a Supermicro X9SCL-F, 28GB memory, LSI SAS2008 IT-mode HBA.
    ZFS: raidz1: 5 x Intel SSD Pro 2500 Series 480GB + ZIL on an Intel SSD S3700.

    LXC:
    Code:
    iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
    fio-2.1.11
    Starting 1 process
    iometer: Laying out IO file(s) (1 file(s) / 4096MB)
    Jobs: 1 (f=1): [m(1)] [100.0% done] [252.7MB/65052KB/0KB /s] [59.3K/14.8K/0 iops] [eta 00m:00s]
    iometer: (groupid=0, jobs=1): err= 0: pid=2513: Sat Apr  9 15:24:14 2016
      Description  : [Emulation of Intel IOmeter File Server Access Pattern]
      read : io=3274.5MB, bw=303130KB/s, iops=49742, runt= 11060msec
      slat (usec): min=2, max=588, avg= 7.45, stdev=18.05
      clat (usec): min=160, max=252974, avg=885.66, stdev=2765.50
      lat (usec): min=167, max=252977, avg=893.38, stdev=2765.43
      clat percentiles (usec):
      |  1.00th=[  334],  5.00th=[  406], 10.00th=[  482], 20.00th=[  580],
      | 30.00th=[  652], 40.00th=[  716], 50.00th=[  772], 60.00th=[  836],
      | 70.00th=[  892], 80.00th=[  980], 90.00th=[ 1128], 95.00th=[ 1304],
      | 99.00th=[ 1832], 99.50th=[ 2320], 99.90th=[20608], 99.95th=[38656],
      | 99.99th=[136192]
      bw (KB  /s): min=118739, max=444443, per=100.00%, avg=305665.86, stdev=75220.28
      write: io=841688KB, bw=76102KB/s, iops=12445, runt= 11060msec
      slat (usec): min=3, max=1020, avg= 9.17, stdev=20.35
      clat (usec): min=395, max=264907, avg=1552.83, stdev=5218.00
      lat (usec): min=402, max=264915, avg=1562.29, stdev=5217.88
      clat percentiles (usec):
      |  1.00th=[  644],  5.00th=[  804], 10.00th=[  908], 20.00th=[ 1032],
      | 30.00th=[ 1128], 40.00th=[ 1208], 50.00th=[ 1288], 60.00th=[ 1384],
      | 70.00th=[ 1480], 80.00th=[ 1624], 90.00th=[ 1864], 95.00th=[ 2128],
      | 99.00th=[ 2960], 99.50th=[ 4320], 99.90th=[61696], 99.95th=[121344],
      | 99.99th=[252928]
      bw (KB  /s): min=28575, max=111024, per=100.00%, avg=76804.24, stdev=18922.29
      lat (usec) : 250=0.01%, 500=9.42%, 750=27.66%, 1000=31.51%
      lat (msec) : 2=29.46%, 4=1.58%, 10=0.12%, 20=0.06%, 50=0.12%
      lat (msec) : 100=0.02%, 250=0.03%, 500=0.01%
      cpu  : usr=12.19%, sys=56.86%, ctx=17722, majf=0, minf=8
      IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
      submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
      issued  : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
      latency  : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
      READ: io=3274.5MB, aggrb=303130KB/s, minb=303130KB/s, maxb=303130KB/s, mint=11060msec, maxt=11060msec
      WRITE: io=841688KB, aggrb=76101KB/s, minb=76101KB/s, maxb=76101KB/s, mint=11060msec, maxt=11060msec
    
    Disk stats (read/write):
      dm-2: ios=543761/136160, merge=0/0, ticks=349496/178408, in_queue=527996, util=99.14%, aggrios=547260/137478, aggrmerge=2955/219, aggrticks=351272/179892, aggrin_queue=530992, aggrutil=98.92%
      sdk: ios=547260/137478, merge=2955/219, ticks=351272/179892, in_queue=530992, util=98.92%
    
    KVM (Debian jessie):
    /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)

    Code:
    iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
    fio-2.1.11
    Starting 1 process
    iometer: Laying out IO file(s) (1 file(s) / 4096MB)
    Jobs: 1 (f=1): [m(1)] [100.0% done] [13212KB/3056KB/0KB /s] [3031/705/0 iops] [eta 00m:00s]
    iometer: (groupid=0, jobs=1): err= 0: pid=1379: Sat Apr  9 15:35:36 2016
      Description  : [Emulation of Intel IOmeter File Server Access Pattern]
      read : io=3274.5MB, bw=13477KB/s, iops=2211, runt=248763msec
      slat (usec): min=1, max=245460, avg=14.67, stdev=589.03
      clat (usec): min=230, max=354068, avg=23064.99, stdev=17297.36
      lat (usec): min=641, max=354079, avg=23079.98, stdev=17304.66
      clat percentiles (usec):
      |  1.00th=[ 1128],  5.00th=[ 2800], 10.00th=[ 4960], 20.00th=[ 9152],
      | 30.00th=[13504], 40.00th=[17792], 50.00th=[22144], 60.00th=[26496],
      | 70.00th=[30848], 80.00th=[35072], 90.00th=[40192], 95.00th=[43264],
      | 99.00th=[55040], 99.50th=[64256], 99.90th=[254976], 99.95th=[268288],
      | 99.99th=[280576]
      bw (KB  /s): min= 5063, max=24580, per=100.00%, avg=13489.59, stdev=3681.45
      write: io=841688KB, bw=3383.6KB/s, iops=553, runt=248763msec
      slat (usec): min=3, max=255205, avg=35.01, stdev=1964.99
      clat (usec): min=903, max=346672, avg=23364.47, stdev=17357.57
      lat (usec): min=921, max=346689, avg=23399.84, stdev=17494.80
      clat percentiles (usec):
      |  1.00th=[ 1416],  5.00th=[ 3088], 10.00th=[ 5216], 20.00th=[ 9408],
      | 30.00th=[13760], 40.00th=[18048], 50.00th=[22400], 60.00th=[26752],
      | 70.00th=[31104], 80.00th=[35584], 90.00th=[40192], 95.00th=[43776],
      | 99.00th=[56064], 99.50th=[65280], 99.90th=[254976], 99.95th=[264192],
      | 99.99th=[284672]
      bw (KB  /s): min= 1075, max= 6659, per=100.00%, avg=3386.88, stdev=972.88
      lat (usec) : 250=0.01%, 500=0.01%, 750=0.05%, 1000=0.49%
      lat (msec) : 2=2.42%, 4=4.71%, 10=14.07%, 20=23.26%, 50=53.43%
      lat (msec) : 100=1.32%, 250=0.13%, 500=0.12%
      cpu  : usr=10.42%, sys=23.97%, ctx=673539, majf=0, minf=8
      IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
      submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
      issued  : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
      latency  : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
      READ: io=3274.5MB, aggrb=13477KB/s, minb=13477KB/s, maxb=13477KB/s, mint=248763msec, maxt=248763msec
      WRITE: io=841688KB, aggrb=3383KB/s, minb=3383KB/s, maxb=3383KB/s, mint=248763msec, maxt=248763msec
    
    Disk stats (read/write):
      sda: ios=539298/137278, merge=11458/836, ticks=12272328/3311076, in_queue=15582852, util=100.00%
    
     
    #11 RobFantini, Apr 9, 2016
    Last edited: Apr 9, 2016
  12. mir

    mir Well-Known Member
    Proxmox Subscriber

    You should try adding the mount option nobarrier.

    What is your cache setting for the disk exposed to this KVM in Proxmox?
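    For a quick test, nobarrier can be applied without a reboot by remounting (ext4 of that era; the option has since been removed from much newer kernels):

```shell
# Disable write barriers on the root filesystem, then verify
mount -o remount,nobarrier /
mount | grep ' / '
```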
     
  13. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Cache is set to writeback.

    regarding mount option : I have this in fstab:
    rw,relatime,nobarrier,data=ordered errors=remount-ro 0 1

    however mount | grep sda :
    /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)

    I do not know why 'nobarrier' is not showing in the mount output. I'll check this tomorrow. Can you check on your system?

    Also: how did the LXC vs KVM test look? I'm just learning fio, so I do not know how to evaluate the results.
     
  14. mir

    mir Well-Known Member
    Proxmox Subscriber

    mount |grep sda6
    /dev/sda6 on / type ext4 (rw,relatime,nobarrier,data=ordered)
    /etc/fstab
    UUID=21ae3af6-9327-45b9-b7aa-13eb2a27c771 / ext4 nobarrier,defaults 0 2

    LXC looks excellent but KVM is a little disappointing. The bad KVM performance is caused by combining nobarrier with cache = writeback. If the disk is running on top of ZFS, you get the best performance using cache = nocache (the default).
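    The cache mode of an existing disk can be switched from the Proxmox CLI; a sketch, assuming VM 100 with a virtio disk on a storage named local-zfs:

```shell
# cache=none corresponds to the "No cache" setting in the GUI
# (VM id, storage, and volume names are assumptions)
qm set 100 --virtio0 local-zfs:vm-100-disk-0,cache=none
```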
     
  15. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Here is the result with cache = nocache.
    Code:
    iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
    fio-2.1.11
    Starting 1 process
    Jobs: 1 (f=1): [m(1)] [100.0% done] [13518KB/3106KB/0KB /s] [3080/712/0 iops] [eta 00m:00s]
    iometer: (groupid=0, jobs=1): err= 0: pid=741: Sat Apr  9 18:34:14 2016
      Description  : [Emulation of Intel IOmeter File Server Access Pattern]
      read : io=3274.5MB, bw=13636KB/s, iops=2237, runt=245858msec
      slat (usec): min=1, max=33704, avg=10.47, stdev=130.91
      clat (usec): min=103, max=303853, avg=22786.27, stdev=14097.24
      lat (usec): min=664, max=303862, avg=22797.06, stdev=14097.04
      clat percentiles (usec):
      |  1.00th=[ 1128],  5.00th=[ 2800], 10.00th=[ 4960], 20.00th=[ 9280],
      | 30.00th=[13504], 40.00th=[17792], 50.00th=[22144], 60.00th=[26752],
      | 70.00th=[31104], 80.00th=[35584], 90.00th=[40192], 95.00th=[43776],
      | 99.00th=[53504], 99.50th=[60160], 99.90th=[83456], 99.95th=[100864],
      | 99.99th=[252928]
      bw (KB  /s): min= 8046, max=26117, per=100.00%, avg=13642.08, stdev=3431.34
      write: io=841688KB, bw=3423.5KB/s, iops=559, runt=245858msec
      slat (usec): min=3, max=33451, avg=19.57, stdev=452.84
      clat (usec): min=899, max=303705, avg=23155.09, stdev=14109.95
      lat (usec): min=909, max=303718, avg=23174.99, stdev=14116.26
      clat percentiles (usec):
      |  1.00th=[ 1416],  5.00th=[ 3120], 10.00th=[ 5344], 20.00th=[ 9664],
      | 30.00th=[14016], 40.00th=[18304], 50.00th=[22656], 60.00th=[27008],
      | 70.00th=[31360], 80.00th=[35584], 90.00th=[40704], 95.00th=[43776],
      | 99.00th=[54016], 99.50th=[60672], 99.90th=[83456], 99.95th=[102912],
      | 99.99th=[257024]
      bw (KB  /s): min= 1717, max= 7110, per=100.00%, avg=3424.75, stdev=907.68
      lat (usec) : 250=0.01%, 500=0.01%, 750=0.05%, 1000=0.49%
      lat (msec) : 2=2.43%, 4=4.65%, 10=13.93%, 20=23.17%, 50=53.74%
      lat (msec) : 100=1.48%, 250=0.04%, 500=0.01%
      cpu  : usr=11.03%, sys=23.08%, ctx=673492, majf=0, minf=9
      IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
      submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
      issued  : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
      latency  : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
      READ: io=3274.5MB, aggrb=13636KB/s, minb=13636KB/s, maxb=13636KB/s, mint=245858msec, maxt=245858msec
      WRITE: io=841688KB, aggrb=3423KB/s, minb=3423KB/s, maxb=3423KB/s, mint=245858msec, maxt=245858msec
    
    Disk stats (read/write):
      sda: ios=539347/137164, merge=11335/717, ticks=12247048/3245308, in_queue=15492124, util=100.00%
    

    PS: note that nobarrier still does not show
    mount|grep sda
    /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
     
  16. mir

    mir Well-Known Member
    Proxmox Subscriber

    Can you paste your fstab file?
     
  17. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    commented lines excluded:
    Code:
    UUID=016e4c68-b1e3-4275-b4db-e010f5c5650f / rw,relatime,nobarrier,data=ordered errors=remount-ro 0  1
    tmpfs  /var/cache/apt/archives  tmpfs size=1G,defaults,noexec,nosuid,nodev,mode=0755 0 0
    
     
  18. mir

    mir Well-Known Member
    Proxmox Subscriber

    Your fstab line is wrong: the filesystem type is missing, and the options must be a single comma-separated field. It should be:
    UUID=016e4c68-b1e3-4275-b4db-e010f5c5650f / ext4 rw,relatime,nobarrier,data=ordered,errors=remount-ro 0 1
     
  19. RobFantini

    RobFantini Active Member
    Proxmox Subscriber

    Thanks for catching that.

    Code:
    # mount|grep sda
    /dev/sda1 on / type ext4 (rw,relatime,nobarrier,data=ordered)
    
    and fio test:
    Code:
    iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
    fio-2.1.11
    Starting 1 process
    Jobs: 1 (f=1): [m(1)] [100.0% done] [13251KB/3173KB/0KB /s] [3093/735/0 iops] [eta 00m:00s]
    iometer: (groupid=0, jobs=1): err= 0: pid=695: Sat Apr  9 19:12:16 2016  
      Description  : [Emulation of Intel IOmeter File Server Access Pattern]  
      read : io=3274.5MB, bw=13694KB/s, iops=2247, runt=244825msec  
      slat (usec): min=1, max=34337, avg=10.52, stdev=160.01
      clat (usec): min=96, max=485692, avg=22701.29, stdev=14578.86
      lat (usec): min=660, max=485701, avg=22712.10, stdev=14578.48
      clat percentiles (usec):
      |  1.00th=[ 1112],  5.00th=[ 2800], 10.00th=[ 4960], 20.00th=[ 9152],
      | 30.00th=[13504], 40.00th=[17792], 50.00th=[22144], 60.00th=[26496],
      | 70.00th=[30848], 80.00th=[35072], 90.00th=[40192], 95.00th=[43264],
      | 99.00th=[52480], 99.50th=[60160], 99.90th=[85504], 99.95th=[113152],
      | 99.99th=[296960]
      bw (KB  /s): min= 5004, max=25066, per=100.00%, avg=13713.01, stdev=3454.14
      write: io=841688KB, bw=3437.1KB/s, iops=562, runt=244825msec
      slat (usec): min=3, max=32123, avg=17.92, stdev=409.82
      clat (usec): min=868, max=483835, avg=23016.88, stdev=14330.54
      lat (usec): min=881, max=483848, avg=23035.12, stdev=14335.88
      clat percentiles (usec):
      |  1.00th=[ 1416],  5.00th=[ 3056], 10.00th=[ 5216], 20.00th=[ 9536],
      | 30.00th=[13888], 40.00th=[18304], 50.00th=[22400], 60.00th=[26752],
      | 70.00th=[31104], 80.00th=[35584], 90.00th=[40192], 95.00th=[43776],
      | 99.00th=[53504], 99.50th=[61184], 99.90th=[85504], 99.95th=[98816],
      | 99.99th=[264192]
      bw (KB  /s): min= 1422, max= 6579, per=100.00%, avg=3443.28, stdev=926.27
      lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.04%, 1000=0.51%
      lat (msec) : 2=2.47%, 4=4.68%, 10=13.98%, 20=23.24%, 50=53.71%
      lat (msec) : 100=1.32%, 250=0.04%, 500=0.02%
      cpu  : usr=10.73%, sys=23.07%, ctx=673631, majf=0, minf=8
      IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
      submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
      issued  : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
      latency  : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
      READ: io=3274.5MB, aggrb=13693KB/s, minb=13693KB/s, maxb=13693KB/s, mint=244825msec, maxt=244825msec
      WRITE: io=841688KB, aggrb=3437KB/s, minb=3437KB/s, maxb=3437KB/s, mint=244825msec, maxt=244825msec
    
    Disk stats (read/write):
      sda: ios=539489/137206, merge=11367/712, ticks=12209328/3240400, in_queue=15451416, util=100.00%
    
    PS: still having LVM errors; those may be interfering with the test.
    On your PVE system, do LVM commands work without error? Here, pvs and lvs give a lot of error output.

    https://pve.proxmox.com/wiki/Iscsi/tests
     
    #19 RobFantini, Apr 10, 2016
    Last edited: Apr 27, 2016