Shared storage suggestion for a 5 node cluster?

Discussion in 'Proxmox VE: Installation and configuration' started by locusofself, Mar 29, 2016.

  1. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
    My recommendation would be to create two different storages in Proxmox: one using zfs_over_iscsi for KVM, which provides all ZFS features like (linked) clones, live snapshots, etc., and one using LVM with network backing for LXC, as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing. Pay attention to these two important recommendations:
    • disable 'use LUNs directly'
    • enable shared use (recommended)
    All of the above can be done from a single ZFS pool.
    Manually create a volume and share this volume through an iSCSI target. Use this target with the iSCSI plugin to create a shared LUN for Proxmox, on which you then create an LVM storage with network backing. Use the same ZFS pool when configuring the zfs_over_iscsi storage for KVM. The zfs_over_iscsi plugin will not overwrite the zvol used for your iSCSI target for LVM storage. This way you have the option of running cluster-wide VMs as both KVM and LXC, which can be live migrated across the cluster either manually or through HA. Live migration for LXC is still in the making but will enter Proxmox before you know it ;-)
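    A rough sketch of the manual part, assuming an OmniOS/napp-it box and placeholder names (pool 'tank', volume 'lvmvol'; adjust to your setup):
    Code:
    # on the storage box: create a fixed-size zvol for the LVM storage
    zfs create -V 300G tank/lvmvol
    # expose it through comstar: logical unit + view + target
    stmfadm create-lu /dev/zvol/rdsk/tank/lvmvol   # prints the LU GUID
    stmfadm add-view <GUID-printed-above>
    itadm create-target
    
    In Proxmox you then add this target under storage > add > iSCSI (content 'none', 'use LUNs directly' disabled) and put a shared LVM storage on top of it.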
     
  2. sdinet

    sdinet Member

    Joined:
    Feb 24, 2016
    Messages:
    69
    Likes Received:
    0
    I just discovered what napp-it is. I am doing a similar storage setup. What are you doing to protect your VM disk images while they transit the internet?

    How is the performance of your VMs with this setup? I would imagine that every VM disk read/write operation takes significantly longer than with locally stored images...
     
  3. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
    What do you mean by 'transit the internet'? My storage is connected to proxmox on a closed network.

    I have made some performance tests in this thread:
    https://forum.proxmox.com/threads/iscsi-san-presented-as-nfs-using-freenas.26679/#post-133999
     
  4. RobFantini

    RobFantini Well-Known Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,596
    Likes Received:
    26
    I am a little confused about setting up the LVM.


    "Manually create a volume and share this volume through an iscsi target"

    is the volume created on napp-it ?

    then at pve use storage > add > iscsi ?
     
  5. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
    Yes to both questions.
     
  6. RobFantini

    RobFantini Well-Known Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,596
    Likes Received:
    26
    OK, getting close.
    I'm stuck at 'add an LVM group on this target'.

    Here is what I've done so far:

    0- for KVM use ZFS over iSCSI; storage.cfg result:
    Code:
    zfs: iscsi-sys4
      target iqn.2010-09.org.napp-it:1459891666
      iscsiprovider comstar
      blocksize 8k
      portal 10.2.2.41
      pool data
      content images
      nowritecache
    
    1-Manually create a volume

    napp-it: disks > volumes > create volume: name lvmvol, size 300G, uncheck thin provisioned.


    2- share this volume through an iSCSI target. pve: storage > add > iSCSI.
    storage.cfg result:
    Code:
    iscsi: sys4-lvmvol
      target iqn.2010-09.org.napp-it:1459891666
      portal 10.2.2.41
      content none
    
    3- add an LVM group on this target.
    storage > add LVM
    name: iscsi-lvm-for-lxc

    For 'Base Storage', use the drop down menu to select the previously defined iSCSI target.
    sys4-lvmvol (iSCSI)

    For 'Base Volume' select a LUN

    **there are none to choose from** <<<<<<<<<<<<<<<<< issue to fix; must have skipped a step or done something wrong. TBD
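    My guess at the missing piece, untested: the napp-it volume probably still needs a comstar logical unit and a view before a LUN shows up in the drop down. Something like this on the omnios box:
    Code:
    stmfadm create-lu /dev/zvol/rdsk/data/lvmvol   # map the zvol to a logical unit
    stmfadm add-view <GUID-from-create-lu>         # make the LU visible to initiators
    stmfadm list-lu -v                             # verify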
     
  7. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
  8. RobFantini

    RobFantini Well-Known Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,596
    Likes Received:
    26
    progress.

    does this look sane?

    Code:
    
    zfs: iscsi-sys4
      target iqn.2010-09.org.napp-it:1459891666
      iscsiprovider comstar
      blocksize 8k
      portal 10.2.2.41
      pool data
      content images
      nowritecache
    
    iscsi: sys4-lvmvol
      target iqn.2010-09.org.napp-it:1459891666
      portal 10.2.2.41
      content none
    
    lvm: iscsi-lvm-for-lxc
      vgname iscsi-lxc-vg
      base sys4-lvmvol:0.0.0.scsi-3600144f000000808000057056d6d0001
      content rootdir
      shared
    
    Code:
    # service open-iscsi restart
    # dmesg -c
    
    [125391.042821] Loading iSCSI transport class v2.0-870.
    [125391.048593] iscsi: registered transport (tcp)
    [125391.066692] iscsi: registered transport (iser)
    [125397.333065] scsi host11: iSCSI Initiator over TCP/IP
    [125397.340885] scsi host12: iSCSI Initiator over TCP/IP
    [125397.850368] scsi 12:0:0:0: Direct-Access  SUN  COMSTAR  1.0  PQ: 0 ANSI: 5
    [125397.850498] scsi 11:0:0:0: Direct-Access  SUN  COMSTAR  1.0  PQ: 0 ANSI: 5
    [125397.851413] sd 12:0:0:0: Attached scsi generic sg7 type 0
    [125397.851659] sd 11:0:0:0: Attached scsi generic sg8 type 0
    [125397.851927] sd 12:0:0:0: [sdh] 629145600 512-byte logical blocks: (322 GB/300 GiB)
    [125397.852395] sd 11:0:0:0: [sdi] 629145600 512-byte logical blocks: (322 GB/300 GiB)
    [125397.853221] sd 12:0:0:0: [sdh] Write Protect is off
    [125397.853225] sd 12:0:0:0: [sdh] Mode Sense: 53 00 00 00
    [125397.853497] sd 12:0:0:0: [sdh] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
    [125397.853693] sd 11:0:0:0: [sdi] Write Protect is off
    [125397.853695] sd 11:0:0:0: [sdi] Mode Sense: 53 00 00 00
    [125397.854146] sd 11:0:0:0: [sdi] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
    [125397.857212] sd 12:0:0:0: [sdh] Attached SCSI disk
    [125397.859966] sd 11:0:0:0: [sdi] Attached SCSI disk
    
    PS: I'll try to make a corrected step-by-step document. Should it stay here or go to the wiki?
     
  9. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
    Since you have been able to create the VG there must be a connection.

    PS. Do you export two LUNs of equal size, or is it the same LUN connected twice?
    In the latter case this is dangerous.

    I would add it to the wiki.
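    A quick way to check, assuming stock udev tooling on the proxmox node: compare the SCSI ids of the two disks. Identical output means one LUN seen over two paths.
    Code:
    /lib/udev/scsi_id --whitelisted --device=/dev/sdh
    /lib/udev/scsi_id --whitelisted --device=/dev/sdi
    # identical output = same LUN connected twice; different = two LUNs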
     
  10. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
    It would be nice to see some performance tests from inside an LXC container.
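    Something simple like this from inside the container would be enough, assuming fio is installed; random 4k writes with direct I/O give a good feel for the iSCSI round trip:
    Code:
    apt-get install -y fio
    fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --size=1G --runtime=60 --group_reporting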
     
  11. RobFantini

    RobFantini Well-Known Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,596
    Likes Received:
    26
    Thanks for catching that.

    Supposed to have just one LUN; the other must be a hangover from an earlier attempt. Will try to fix.

    This is a test system; I'll start over following updated instructions later on.
     
  12. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
  13. RobFantini

    RobFantini Well-Known Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,596
    Likes Received:
    26
    I assume you are referring to the output from 'service open-iscsi restart' showing 2 'disks' [ sdh and sdi ]? I'm not familiar with how that is supposed to look.
    Code:
    [125397.851927] sd 12:0:0:0: [sdh] 629145600 512-byte logical blocks: (322 GB/300 GiB)
    [125397.852395] sd 11:0:0:0: [sdi] 629145600 512-byte logical blocks: (322 GB/300 GiB)
    
    napp-it shows just one logical unit.
    storage.cfg shows one LVM for iSCSI.

    Any clues on where to remove the extra unit?
     
  14. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
    Are there multiple paths to the storage? Misconfigured multipath could cause the problem you are experiencing.
     
  15. RobFantini

    RobFantini Well-Known Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,596
    Likes Received:
    26
    I'll study up on 'multipath'.

    Is there a menu in napp-it for multipath? [I could not find it.]

    napp-it has two interfaces. I tried to make it so only the storage network IP was used, at:
    comstar > create portal-group: name portal-group-1 (use 10.2.2.41, the storage network)

    Prior to setting up iSCSI in the napp-it GUI, I did this from the CLI:
    Code:
    svcadm enable -r svc:/network/iscsi/target:default
    svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.
    
    I am not sure if the warning about 'multiple instances' needs to be dealt with.
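    From what I can tell (not verified), the warning just means svc:/network/physical has both a 'default' and an 'nwam' instance; listing them should show which one is actually online:
    Code:
    svcs -a | grep network/physical
    # expect one instance online and the other disabled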
     
  16. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
    Unless you have created two views to the LUN in omnios, the problem has to be found on the proxmox side.
     
    #36 mir, Apr 7, 2016
    Last edited: Apr 7, 2016
  17. RobFantini

    RobFantini Well-Known Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,596
    Likes Received:
    26
    there is only one view at omnios

    On pve, both IP addresses have a configuration set up under /etc/iscsi/nodes:
    Code:
    # ls -lR /etc/iscsi/nodes
    /etc/iscsi/nodes:
    total 1
    drw------- 4 root root 4 Apr  6 16:37 iqn.2010-09.org.napp-it:1459891666/
    
    /etc/iscsi/nodes/iqn.2010-09.org.napp-it\:1459891666:
    total 1
    drw------- 2 root root 3 Apr  6 16:37 10.1.10.41,3260,1/
    drw------- 2 root root 3 Apr  6 16:37 10.2.2.41,3260,1/
    
    /etc/iscsi/nodes/iqn.2010-09.org.napp-it\:1459891666/10.1.10.41,3260,1:
    total 5
    -rw------- 1 root root 1839 Apr  6 16:37 default
    
    /etc/iscsi/nodes/iqn.2010-09.org.napp-it\:1459891666/10.2.2.41,3260,1:
    total 5
    -rw------- 1 root root 1838 Apr  6 16:37 default
    
    systemctl status iscsi :
    Code:
    # systemctl -l status  iscsi
    ● open-iscsi.service - LSB: Starts and stops the iSCSI initiator services and logs in to default targets
      Loaded: loaded (/etc/init.d/open-iscsi)
      Drop-In: /lib/systemd/system/open-iscsi.service.d
      └─fix-systemd-deps.conf
      Active: active (running) since Wed 2016-04-06 16:37:13 EDT; 13h ago
      Process: 23391 ExecStop=/etc/init.d/open-iscsi stop (code=exited, status=0/SUCCESS)
      Process: 23378 ExecStop=/etc/init.d/umountiscsi.sh stop (code=exited, status=0/SUCCESS)
      Process: 23449 ExecStart=/etc/init.d/open-iscsi start (code=exited, status=0/SUCCESS)
      CGroup: /system.slice/open-iscsi.service
      ├─23465 /usr/sbin/iscsid
      └─23466 /usr/sbin/iscsid
    
    Apr 06 16:37:13 sys5 open-iscsi[23449]: Starting iSCSI initiator service: iscsidln: failed to create symbolic link ‘/run/sendsigs.omit.d/iscsid.pid’: File exists
    Apr 06 16:37:13 sys5 open-iscsi[23449]: .
    Apr 06 16:37:13 sys5 open-iscsi[23449]: Setting up iSCSI targets:
    Apr 06 16:37:13 sys5 open-iscsi[23449]: iscsiadm: No records found
    Apr 06 16:37:13 sys5 open-iscsi[23449]: .
    Apr 06 16:37:13 sys5 open-iscsi[23449]: Mounting network filesystems:.
    Apr 06 16:37:13 sys5 open-iscsi[23449]: Enabling network swap devices:.
    Apr 06 16:37:14 sys5 iscsid[23465]: iSCSI daemon with pid=23466 started!
    Apr 06 16:37:15 sys5 iscsid[23465]: Connection1:0 to [target: iqn.2010-09.org.napp-it:1459891666, portal: 10.2.2.41,3260] through [iface: default] is operational now
    Apr 06 16:37:15 sys5 iscsid[23465]: Connection2:0 to [target: iqn.2010-09.org.napp-it:1459891666, portal: 10.1.10.41,3260] through [iface: default] is operational now
    
    
    
    I assume the two IP-based iSCSI configurations are the cause of the 'same LUN connected twice'?
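    If that is the cause, I suppose logging out of the portal on the wrong network and deleting its node record should clear the duplicate (untested):
    Code:
    iscsiadm -m node -T iqn.2010-09.org.napp-it:1459891666 -p 10.1.10.41:3260 --logout
    iscsiadm -m node -T iqn.2010-09.org.napp-it:1459891666 -p 10.1.10.41:3260 -o delete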
     
  18. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
    Yes, your omnios box is accessible to proxmox from two different IPs:
    - 10.1.10.41
    - 10.2.2.41
     
  19. RobFantini

    RobFantini Well-Known Member
    Proxmox Subscriber

    Joined:
    May 24, 2012
    Messages:
    1,596
    Likes Received:
    26
    I agree.

    However, above you wrote:
    'Are there multiple paths to the storage? Misconfigured multipath could cause the problem you are experiencing.'

    Could the two different IP connections cause 'misconfigured multipath'?
     
  20. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,483
    Likes Received:
    97
    If two paths exist to your storage and this is intentional, you must install multipath on every Proxmox host; otherwise chances are high that you will mess up your storage. Alternatively, you can create a bond to your storage. If the bond is to provide real HA it must span two switches, and to be able to do this you will need stackable switches.
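    If you keep both paths, a minimal /etc/multipath.conf on each node could look like the sketch below; the wwid is the one from your storage.cfg 'base' line, so adapt rather than copy:
    Code:
    defaults {
            polling_interval        2
            path_grouping_policy    multibus
    }
    blacklist {
            wwid .*
    }
    blacklist_exceptions {
            wwid "3600144f000000808000057056d6d0001"
    }
    
    After installing multipath-tools, 'multipath -ll' should show a single multipath device with two paths instead of sdh and sdi.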
     