iSCSI gateway with Proxmox

najwan

Hi, I am new to Proxmox and I am trying to test installing and managing Ceph distributed storage through Proxmox.
I deployed a cluster (monitor, manager, and OSDs) on three VMs. I would now like to configure two iSCSI gateways, create targets, connect Windows machines to the cluster, and provision disks. I am stuck on how to configure the iSCSI gateways: can I do it through Proxmox, or is there another way to do it?

---------------
Thanks
Najwan
 
Currently I am trying to install a standalone iSCSI gateway on a CentOS VM, planning to integrate it with the existing cluster. Can I go with this approach?

If not, could you please give me high-level guidance on how to follow this thread to achieve my goal?
 
You could now consider running Ceph's multipath (highly available) iSCSI gateways; see this thread:
https://forum.proxmox.com/threads/pve-6-3-with-ha-ceph-iscsi.81991/
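
At a high level those HA gateways are managed with the ceph-iscsi package's gwcli tool rather than through the Proxmox GUI. As a rough sketch only (syntax varies between ceph-iscsi versions, and the target IQN, gateway names, IPs, pool/image and client IQN below are all placeholders), the flow looks like this:

Code:
gwcli
  cd /iscsi-targets
  create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
  cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
  create ceph-gw-1 10.172.19.21                   # first gateway node
  create ceph-gw-2 10.172.19.22                   # second gateway node
  cd /disks
  create pool=rbd image=disk_1 size=90G           # RBD image to export
  cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/hosts
  create iqn.1991-05.com.microsoft:win-client     # Windows initiator's IQN
  auth username=myiscsiusername password=myiscsipassword
  disk add rbd/disk_1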

Depending on your use case, you may however prefer to simply deploy a minimal Debian VM and then feed it additional images, which you then make available via iSCSI to other nodes. In our case we wanted to set up a Microsoft SQL AlwaysOn cluster and exported a drive to both Windows cluster members via iSCSI. This performs extremely well and is super reliable. Herewith my speed notes on setting up an iSCSI-exported image together with trim support (discard/unmap). You only need to create the /etc/rtslib-fb-target/pr directory and set the 'emulate_tpu' option if you need support for Windows clustering on iSCSI.

The notes initially referred to LVM-thin images, but there are also samples relating to a VM with an attached Ceph RBD image.

Code:
iSCSI via LIO (LinuxIO), which is kernel based:

Install the userspace tools, then view any previously saved settings and disable the restore service whilst configuring:
  apt-get install targetcli-fb;

  pico /etc/rtslib-fb-target/saveconfig.json;
  systemctl stop rtslib-fb-targetctl;
  systemctl disable rtslib-fb-targetctl;

  targetcli
    cd /backstores/block
    create lvm0_iscsi-lair-labtech-sql /dev/lvm0/iscsi-lair-labtech-sql
    cd /iscsi
    create iqn.2019-08.reversefqdn:lair-labtech-sql                    # This is the target address
    cd iqn.2019-08.reversefqdn:lair-labtech-sql/tpg1/luns
    create /backstores/block/lvm0_iscsi-lair-labtech-sql
    cd ../acls
    create iqn.1991-05.com.microsoft:win-test                    # This is the client's address
    cd iqn.1991-05.com.microsoft:win-test                       
    set auth userid=lair-labtech-sql
    set auth password=********************
    #cd ../../portals
    #create
    cd /
    saveconfig
    exit
  # Create a systemd unit that restores the LIO config at boot:
  pico /lib/systemd/system/rtslib-fb-targetctl.service; systemctl daemon-reload; systemctl enable rtslib-fb-targetctl;
[Unit]
Description=Restore LIO kernel target configuration
Requires=sys-kernel-config.mount
After=sys-kernel-config.mount network.target local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/targetctl restore
ExecStop=/usr/bin/targetctl clear
SyslogIdentifier=target

[Install]
WantedBy=multi-user.target


# Update everything, restart and then check:
  update-initramfs -u;
  update-grub;
  init 6;
  systemctl status rtslib-fb-targetctl;
  targetcli sessions;
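
To test from a Linux initiator (a sketch; assumes open-iscsi, a portal at 192.168.1.10, and that the client's IQN in /etc/iscsi/initiatorname.iscsi matches the ACL created above):
  apt-get install open-iscsi;
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10;
  iscsiadm -m node -T iqn.2019-08.reversefqdn:lair-labtech-sql -o update -n node.session.auth.authmethod -v CHAP;
  iscsiadm -m node -T iqn.2019-08.reversefqdn:lair-labtech-sql -o update -n node.session.auth.username -v lair-labtech-sql;
  iscsiadm -m node -T iqn.2019-08.reversefqdn:lair-labtech-sql -o update -n node.session.auth.password -v YourSecretHere;
  iscsiadm -m node -T iqn.2019-08.reversefqdn:lair-labtech-sql -p 192.168.1.10 --login;
  lsblk;                                          # the exported LUN should appear as a new disk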

Problems authenticating? Try setting credentials on the discovery portal with auto-generated ACLs:
    cd /backstores/block
    create lvm0_iscsi-lair-labtech-sql /dev/lvm0/iscsi-lair-labtech-sql
    cd /iscsi
    create iqn.2019-08.reversefqdn:lair-labtech-sql
    cd iqn.2019-08.reversefqdn:lair-labtech-sql/tpg1/luns
    create /backstores/block/lvm0_iscsi-lair-labtech-sql
    cd ..
    set attribute authentication=0
    set attribute demo_mode_write_protect=0
    set attribute generate_node_acls=1
    cd /iscsi
    set discovery_auth enable=1
    set discovery_auth userid=IncomingUser
    set discovery_auth password=SomePassword1
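
On the initiator side, discovery CHAP is configured separately from session CHAP. With open-iscsi that lives in /etc/iscsi/iscsid.conf (a sketch; the credentials must match the discovery_auth values set above):
  pico /etc/iscsi/iscsid.conf;
    discovery.sendtargets.auth.authmethod = CHAP
    discovery.sendtargets.auth.username = IncomingUser
    discovery.sendtargets.auth.password = SomePassword1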


  PS: Settings saved to: /etc/rtslib-fb-target/saveconfig.json


  NB: I strongly recommend iSCSI be configured to support 'discard' being passed through to the underlying layers. This
      allows thinly provisioned LVM volumes to release unused space and is required to prevent SSD drives from slowing
      down once they have written to all areas of the disk.

      Use 'lsblk -D' to view values supported by block devices, example:
        NAME                                       DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
        sda                                               0        0B       0B         0
        +-sda1                                            0        0B       0B         0
          +-md3                                      524288      512K     256M         0
            +-lvm0-pool--iscsi_tdata                 131072      512K     256M         0
              +-lvm0-pool--iscsi-tpool               131072      512K     256M         0
                +-lvm0-pool--iscsi                   131072      512K     256M         0
                +-lvm0-iscsi--lair--connect--sql          0       64K      16G         0
                +-lvm0-iscsi--lair--labtech--sql          0       64K      16G         0
                +-lvm0-iscsi--lair--nt01--bfw             0       64K      16G         0
      Obtain capabilities:
        dir /dev/lvm0/iscsi-lair-nt01-bfw
          lrwxrwxrwx 1 root root 8 Jul  4 19:54 /dev/lvm0/iscsi-lair-nt01-bfw -> ../dm-19
        cd /sys/block/dm-19;
          cat queue/logical_block_size;
          cat queue/physical_block_size;
          cat queue/hw_sector_size;
          cat queue/rotational;
          cat queue/discard_max_bytes;
          cat queue/discard_max_hw_bytes;
          cat queue/minimum_io_size;
          cat queue/optimal_io_size;
          cat queue/discard_granularity;
          cat discard_alignment;
          cat queue/discard_zeroes_data;
      #apt-get install sg3-utils
      #  sg_vpd -p bl -v /dev/lvm0/iscsi-lair-nt01-bfw
      Define the capabilities of the iSCSI target:
        targetcli
          cd /backstores/block/lvm0_iscsi-lair-nt01-bfw
          set attribute block_size=512                    # logical_block_size
          set attribute emulate_tpu=1                     # advertise thin-provisioning UNMAP (discard) support
          set attribute is_nonrot=1                       # rotational (set LVM-thin, Ceph and flash storage to 1)
          set attribute max_unmap_block_desc_count=1
          set attribute max_unmap_lba_count=8192          # min(discard_max_bytes, discard_max_hw_bytes) / logical_block_size = 4194304 / 512, capped at 8192
          set attribute optimal_sectors=768               # optimal_io_size / logical_block_size = 393216 / 512
          set attribute unmap_granularity=128             # discard_granularity / logical_block_size = 65536 / 512
          set attribute unmap_granularity_alignment=0     # discard_alignment / logical_block_size = 0 / 512
          set attribute unmap_zeroes_data=0               # discard_zeroes_data
          get attribute
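
      As a convenience, the values above can be derived from sysfs with a small shell snippet (a sketch; substitute your own dm device):
        DEV=dm-19;
        LBS=$(cat /sys/block/$DEV/queue/logical_block_size);
        echo "max_unmap_lba_count:         $(( $(cat /sys/block/$DEV/queue/discard_max_bytes) / LBS ))";    # cap the result at 8192
        echo "optimal_sectors:             $(( $(cat /sys/block/$DEV/queue/optimal_io_size) / LBS ))";
        echo "unmap_granularity:           $(( $(cat /sys/block/$DEV/queue/discard_granularity) / LBS ))";
        echo "unmap_granularity_alignment: $(( $(cat /sys/block/$DEV/discard_alignment) / LBS ))";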

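      Once the attributes are set you can verify discard end to end from the initiator (a sketch; /mnt/iscsi is a hypothetical mount point on the exported LUN):
        fstrim -v /mnt/iscsi;                     # on the initiator: issue UNMAP for free filesystem space
        lvs;                                      # on the target: the thin pool's Data% should drop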

Sample iSCSI setup on Ceph RBD image (yes, as with LVM you need to check and update the attributes manually):

  targetcli
    cd /backstores/block
    create sqlalwayson-sql /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1
    cd /iscsi
    create iqn.2019-08.reversefqdn:sqlalwayson-sql                # This is the target address
    cd iqn.2019-08.reversefqdn:sqlalwayson-sql/tpg1/luns
    create /backstores/block/sqlalwayson-sql
    cd ../acls
    create iqn.1991-05.com.microsoft:sql2017-01.fqdn                # This is the client's address
    cd iqn.1991-05.com.microsoft:sql2017-01.fqdn
    set auth userid=sqlalwayson-sql
    set auth password=hdfursfhdfursf
    cd /backstores/block/sqlalwayson-sql
    set attribute emulate_tpu=1
    set attribute is_nonrot=1
    cd /
    saveconfig
    exit

# Fix bugs; it looks like the wrong values get saved for these, so correct them manually:
  pico /etc/rtslib-fb-target/saveconfig.json
    "optimal_sectors": 8192,
    "unmap_zeroes_data": 1
 
mkdir /etc/rtslib-fb-target/pr;    # persistent reservation state directory, required for Windows clustering (see note above)
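
To confirm SCSI-3 persistent reservations work (Windows failover clustering relies on them), you can query the LUN from a connected Linux test initiator using sg3-utils (a sketch; /dev/sdb is a hypothetical device node for the iSCSI disk):

Code:
  apt-get install sg3-utils;
  sg_persist --in --read-keys /dev/sdb;           # list registered reservation keys
  sg_persist --in --read-reservation /dev/sdb;    # show the active reservation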
 
