Ceph-iscsi howto ?

Gerhard W. Recher

Thank you Fabian,

Any plans to merge this, or is there any other way with Proxmox to accomplish the task?
 

There is no upstream support for Debian-based distros (yet), so there is nothing to merge. I also don't really see a use case for PVE here: you can already access Ceph storage clusters without another layer of iSCSI complexity in between.
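For context, direct RBD access from PVE is just a storage definition, with no iSCSI layer involved. A minimal sketch of such an entry (the storage ID, pool name and monitor addresses are placeholders, not taken from this thread):

Code:
# /etc/pve/storage.cfg -- hypothetical RBD storage entry, adjust to your cluster
rbd: ceph-vm
        pool vmpool
        monhost 192.168.220.1 192.168.220.2 192.168.220.3
        content images
        username admin
        krbd 0
With that in place, VM disks live directly on the pool and are served via librbd (or the kernel client when krbd is set to 1).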
 
We need this for some existing VMware servers. As long as they are alive we would like to have an RBD-iSCSI gateway, so we can shut down an old SAN ....
 
Has anyone installed userland tgt to provide an RBD-iSCSI multipath solution?
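On the multipath side: with two tgt gateways exporting the same RBD image, a Linux initiator would typically aggregate the paths with dm-multipath. A rough sketch of the initiator config, assuming tgt's default "IET"/"VIRTUAL-DISK" SCSI identity (verify the actual values with multipath -ll):

Code:
# /etc/multipath.conf -- sketch for an initiator talking to two tgt gateways
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                "IET"
                product               "VIRTUAL-DISK"
                path_grouping_policy  failover
                no_path_retry         12
        }
}
Windows initiators would use MPIO instead; the gateway side just needs the same target and LUN definition on both nodes, with RBD caching disabled so the paths stay consistent.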
 
Just installed tgt
Code:
dpkg --list tgt*
||/ Name                           Version              Architecture         Description
+++-==============================-====================-====================-==================================================================
ii  tgt                            1:1.0.69-1           amd64                Linux SCSI target user-space daemon and tools
un  tgt-glusterfs                  <none>               <none>               (no description available)
ii  tgt-rbd                        1:1.0.69-1           amd64                Linux SCSI target user-space daemon and tools - RBD support
Code:
 tgt-admin -s
Target 1: iqn.2017-12.rbdstore.net4sec.com:iscsi
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.1991-05.com.microsoft:wsus.net4sec.com alias: none
            Connection: 1
                IP Address: 192.168.221.124
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 10737 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rbd
            Backing store path: vmpool/iscsi-rbd
            Backing store flags:
    Account information:
    ACL information:
        192.168.221.124
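For anyone wanting to reproduce that target: the tgt configuration behind an output like the one above could look roughly like this (the file name and ACL are examples; bs-type rbd comes from the tgt-rbd package):

Code:
# /etc/tgt/conf.d/iscsi-rbd.conf -- sketch matching the target shown above
<target iqn.2017-12.rbdstore.net4sec.com:iscsi>
        driver iscsi
        bs-type rbd
        backing-store vmpool/iscsi-rbd
        initiator-address 192.168.221.124
</target>
# apply with: tgt-admin --update ALL (or restart the tgt service)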
I then made a short test with our Windows Server 2016. Performance is poor, even with tgt nr-threads set to 128.

See the screenshot: the left drive (r:) is the tgt target, the right (t:) is a virtio drive directly on the RBD pool ....

Any hints on how to speed things up?
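Regarding speed: besides the thread count already raised, enabling the RBD client cache for the gateway's Ceph client often helps with small writes. A sketch for the gateway node (cache sizes are arbitrary examples, tune for your workload):

Code:
# /etc/ceph/ceph.conf on the tgt gateway -- example values only
[client]
        rbd cache = true
        rbd cache size = 67108864          # 64 MiB writeback cache per RBD client
        rbd cache max dirty = 50331648     # start flushing at 48 MiB of dirty data
# restart tgt afterwards so the bs_rbd backing store picks up the new client settings
Note that a writeback cache on the gateway is only safe with a single active path to the image.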
 

Attachments

  • iscsi-versus-virtio.png
To anyone interested: I've built a kernel with ceph-iscsi support and am now in the process of building the userland tools. I will post a howto when everything is running smoothly.
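For those following along: the LIO-based gateway needs TCMU (target_core_user) and the LIO iSCSI target built into the kernel or available as modules. A quick way to check a given kernel build might be:

Code:
# check for TCMU and the LIO iSCSI target in the running kernel
modinfo target_core_user iscsi_target_mod
grep -E 'CONFIG_TCM_USER2|CONFIG_ISCSI_TARGET' /boot/config-$(uname -r)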
 

Hi, did you have time to write the howto? I'm looking into packaging the tools for Debian. (I have a customer with a VMware cluster looking to add a Ceph iSCSI gateway, and the current packages are for Red Hat only. I'm looking for tcmu-runner, ceph-iscsi-config and ceph-iscsi-cli.)
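Until proper Debian packages exist, building the three components from source is roughly the following (a sketch only; the dependency list is approximate and may need adjusting for your release):

Code:
# approximate build dependencies on Debian/PVE
apt-get install git cmake make gcc libnl-3-dev libnl-genl-3-dev libglib2.0-dev \
        libkmod-dev librbd-dev zlib1g-dev python-setuptools

# tcmu-runner: LIO userspace backstore daemon, provides the RBD handler
git clone https://github.com/open-iscsi/tcmu-runner.git
cd tcmu-runner && cmake . && make && make install && cd ..

# ceph-iscsi-config and ceph-iscsi-cli are plain Python packages
git clone https://github.com/ceph/ceph-iscsi-config.git
cd ceph-iscsi-config && python setup.py install && cd ..
git clone https://github.com/ceph/ceph-iscsi-cli.git
cd ceph-iscsi-cli && python setup.py install && cd ..
The cli package provides gwcli, which drives the gateway configuration that ceph-iscsi-config stores in a RADOS object.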
 