IET Target problem when moving a disk from one iSCSI storage to another

lavr
Hi all,

First of all, I want to say thanks for the great Virtual Environment.
I have some trouble with the IET Target on Linux for ZFS over iSCSI:
the move operation from one iSCSI storage to another fails.
Below are the versions, configs, and error output.

If more info is needed, I'll post it. Thanks.

Tested on Proxmox VE 4.2/4.3

Proxmox 4.3 HA Cluster Config:
nodes: noc16, noc17, noc25

Code:
root@noc16:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
159.93.11.16 noc16.local noc16 pvelocalhost
#
159.93.11.17  noc17.local  noc17
159.93.11.25  noc25.local  noc25
#
159.93.11.18  zfsnoc1.local  zfsnoc1
159.93.11.28  zfsnoc2.local  zfsnoc2
#
159.93.11.38  zfsnoc3.jinr.ru zfsnoc3
159.93.11.48  zfsnoc4.jinr.ru zfsnoc4
#
10.93.11.16  ibnoc16.local  ibnoc16
10.93.11.17  ibnoc17.local  ibnoc17
10.93.11.25  ibnoc25.local  ibnoc25
#
10.93.11.18  ibzfsnoc1.local ibzfsnoc1
10.93.11.28  ibzfsnoc2.local ibzfsnoc2
Cluster status & nodes:
Code:
root@noc16:~# pvecm nodes
Membership information
----------------------
    Nodeid      Votes Name
         1          1 noc16 (local)
         2          1 noc17
         3          1 noc25

storage config:

Code:
root@noc16:~# cat /etc/pve/storage.cfg
zfs: iscsi3
  pool raid50/iscsi
  blocksize 4k
  iscsiprovider iet
  target iqn.2016-01.ru.jinr.zfsnoc3:noc3iscsi
  portal 159.93.11.38
  nowritecache 0
  sparse 0
  content images

zfs: iscsi4
  pool raid50/iscsi
  blocksize 4k
  iscsiprovider iet
  target iqn.2016-01.ru.jinr.zfsnoc4:noc4iscsi
  portal 159.93.11.48
  nowritecache 0
  sparse 0
  content images
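(For completeness: ZFS over iSCSI manages the storage boxes over SSH as root, so every node needs passwordless keys for both portals. Mine are set up roughly as the wiki describes; a sketch of the key setup for one portal, same again for 159.93.11.48:)
Code:
root@noc16:~# mkdir -p /etc/pve/priv/zfs
root@noc16:~# ssh-keygen -f /etc/pve/priv/zfs/159.93.11.38_id_rsa
root@noc16:~# ssh-copy-id -i /etc/pve/priv/zfs/159.93.11.38_id_rsa.pub root@159.93.11.38
root@noc16:~# ssh -i /etc/pve/priv/zfs/159.93.11.38_id_rsa root@159.93.11.38 /bin/true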

The storage servers run the IET Target for ZFS over iSCSI on Debian 8.6 (also tested on 8.3/8.4/8.5).

There are two storage servers, zfsnoc3 & zfsnoc4, with the same config.

1st zfsnoc3: ZFS & IET iSCSI config:
------------------------------------
Code:
root@zfsnoc3:~# cat /etc/debian_version
8.6
root@zfsnoc3:~# dpkg -l | grep zfs
ii  debian-zfs   7~jessie          amd64  Native ZFS filesystem metapackage for Debian.
ii  libzfs2      0.6.5.7-8-jessie  amd64  Native ZFS filesystem library for Linux
ii  zfs-dkms     0.6.5.7-8-jessie  all    Native ZFS filesystem kernel modules for Linux
ii  zfsonlinux   8                 all    archive.zfsonlinux.org trust package
ii  zfsutils     0.6.5.7-8-jessie  amd64  command-line tools to manage ZFS filesystems
root@zfsnoc3:~# dpkg -l | grep iscsi
ii  iscsitarget       1.4.20.3+svn502-1  amd64  iSCSI Enterprise Target userland tools
ii  iscsitarget-dkms  1.4.20.3+svn502-1  all    iSCSI Enterprise Target kernel module source - dkms version
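A quick sanity check that the IET DKMS kernel module is actually loaded (it must be here, since /proc/net/iet exists):
Code:
root@zfsnoc3:~# lsmod | grep iscsi_trgt   # prints the iscsi_trgt module line when loaded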
Code:
root@zfsnoc3:~# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
raid10          648K  3.51T    96K  /raid10
raid10/backup    96K  3.51T    96K  /raid10/backup
raid10/iscsi     96K  3.51T    96K  /raid10/iscsi
raid10/nfs       96K  3.51T    96K  /raid10/nfs
raid50         3.71G  10.5T   153K  /raid50
raid50/backup   153K  10.5T   153K  /raid50/backup
raid50/iscsi    153K  10.5T   153K  /raid50/iscsi
raid50/nfs     3.71G  10.5T  3.71G  /raid50/nfs
Code:
root@zfsnoc3:~# cat /etc/iet/ietd.conf
Target iqn.2016-01.ru.jinr.zfsnoc3:noc3iscsi
Code:
root@zfsnoc3:~# cat /proc/net/iet/volume
tid:1 name:iqn.2016-01.ru.jinr.zfsnoc3:noc3iscsi
root@zfsnoc3:~# cat /proc/net/iet/session
tid:1 name:iqn.2016-01.ru.jinr.zfsnoc3:noc3iscsi
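As far as I understand, when Proxmox allocates a disk it appends a Lun line to this ietd.conf over SSH and activates it with ietadm. My guess at the equivalent manual command (not a copy of what Iet.pm actually runs):
Code:
root@zfsnoc3:~# ietadm --op new --tid=1 --lun=0 --params Path=/dev/raid50/iscsi/vm-100-disk-1,Type=blockio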
2nd zfsnoc4: ZFS & IET iSCSI config:
Code:
root@zfsnoc4:~# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
raid10          648K  3.51T    96K  /raid10
raid10/backup    96K  3.51T    96K  /raid10/backup
raid10/iscsi     96K  3.51T    96K  /raid10/iscsi
raid10/nfs       96K  3.51T    96K  /raid10/nfs
raid50         1.22M  10.5T   153K  /raid50
raid50/backup   153K  10.5T   153K  /raid50/backup
raid50/iscsi    153K  10.5T   153K  /raid50/iscsi
raid50/nfs      153K  10.5T   153K  /raid50/nfs

Code:
root@zfsnoc4:~# cat /etc/iet/ietd.conf
Target iqn.2016-01.ru.jinr.zfsnoc4:noc4iscsi
Code:
root@zfsnoc4:~# cat /proc/net/iet/volume
tid:1 name:iqn.2016-01.ru.jinr.zfsnoc4:noc4iscsi
root@zfsnoc4:~# cat /proc/net/iet/session
tid:1 name:iqn.2016-01.ru.jinr.zfsnoc4:noc4iscsi
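Both portals answer iSCSI discovery from the PVE nodes; a quick way to check this with open-iscsi (assuming the open-iscsi tools are installed on the node):
Code:
root@noc16:~# iscsiadm -m discovery -t sendtargets -p 159.93.11.38
root@noc16:~# iscsiadm -m discovery -t sendtargets -p 159.93.11.48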

Create the first KVM VM via the web GUI on node noc16:

Code:
root@noc16:~# qm list
      VMID NAME       STATUS     MEM(MB)  BOOTDISK(GB) PID
       100 testvm     stopped    512             32.00 0
root@noc16:~# qm config 100
bootdisk: virtio0
cores: 1
ide2: nfs3:iso/debian-8.6.0-amd64-netinst.iso,media=cdrom
memory: 512
name: testvm
net0: virtio=CA:F5:DE:85:BA:5A,bridge=vmbr0
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=ab96042a-8b19-4695-9cc9-3457c61991d5
sockets: 1
virtio0: iscsi3:vm-100-disk-1,size=32G
Move the disk image from storage iscsi3 (zfsnoc3) to storage iscsi4 (zfsnoc4), via the web GUI or CLI:
Code:
root@noc16:~# qm move_disk 100 virtio0 iscsi4 -delete 1
create full clone of drive virtio0 (iscsi3:vm-100-disk-1)
storage migration failed: /dev/raid50/iscsi/vm-100-disk-1: LUN exists at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 362.
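The die comes from the duplicate-LUN check in PVE's IET helper; the file and line number are right there in the message, so the check is easy to find:
Code:
root@noc16:~# grep -n 'LUN exists' /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm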
ZFS storage iscsi3 (zfsnoc3) after the failed move:
Code:
root@zfsnoc3:~# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
raid10                       648K  3.51T    96K  /raid10
raid10/backup                 96K  3.51T    96K  /raid10/backup
raid10/iscsi                  96K  3.51T    96K  /raid10/iscsi
raid10/nfs                    96K  3.51T    96K  /raid10/nfs
raid50                      37.7G  10.5T   153K  /raid50
raid50/backup                153K  10.5T   153K  /raid50/backup
raid50/iscsi                34.0G  10.5T   153K  /raid50/iscsi
raid50/iscsi/vm-100-disk-1  34.0G  10.5T   102K  -
raid50/nfs                  3.71G  10.5T  3.71G  /raid50/nfs
Code:
root@zfsnoc3:~# cat /etc/iet/ietd.conf
Target iqn.2016-01.ru.jinr.zfsnoc3:noc3iscsi
  Lun 0 Path=/dev/raid50/iscsi/vm-100-disk-1,Type=blockio
root@zfsnoc3:~# cat /proc/net/iet/volume
tid:1 name:iqn.2016-01.ru.jinr.zfsnoc3:noc3iscsi
        lun:0 state:0 iotype:blockio iomode:wt blocks:67108864 blocksize:512 path:/dev/raid50/iscsi/vm-100-disk-1
root@zfsnoc3:~# cat /proc/net/iet/session
tid:1 name:iqn.2016-01.ru.jinr.zfsnoc3:noc3iscsi

Now look at the iscsi4 (zfsnoc4) storage and config:

Code:
root@zfsnoc4:~# cat /etc/iet/ietd.conf
Target iqn.2016-01.ru.jinr.zfsnoc4:noc4iscsi
root@zfsnoc4:~# cat /proc/net/iet/volume
tid:1 name:iqn.2016-01.ru.jinr.zfsnoc4:noc4iscsi
root@zfsnoc4:~# cat /proc/net/iet/session
tid:1 name:iqn.2016-01.ru.jinr.zfsnoc4:noc4iscsi
root@zfsnoc4:~# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
raid10                       648K  3.51T    96K  /raid10
raid10/backup                 96K  3.51T    96K  /raid10/backup
raid10/iscsi                  96K  3.51T    96K  /raid10/iscsi
raid10/nfs                    96K  3.51T    96K  /raid10/nfs
raid50                      34.0G  10.5T   153K  /raid50
raid50/backup                153K  10.5T   153K  /raid50/backup
raid50/iscsi                34.0G  10.5T   153K  /raid50/iscsi
raid50/iscsi/vm-100-disk-1  34.0G  10.5T   102K  -
^^^^^^^^^^^^^^^^^^^^^^^^^^
raid50/nfs                   153K  10.5T   153K  /raid50/nfs
We can see that the zvol vm-100-disk-1 was created on iscsi4, but the operation didn't complete and the VM config didn't change, because the move failed with the error above:
Code:
root@noc16:~# qm config 100
bootdisk: virtio0
cores: 1
ide2: nfs3:iso/debian-8.6.0-amd64-netinst.iso,media=cdrom
memory: 512
name: testvm
net0: virtio=CA:F5:DE:85:BA:5A,bridge=vmbr0
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=ab96042a-8b19-4695-9cc9-3457c61991d5
sockets: 1
virtio0: iscsi3:vm-100-disk-1,size=32G
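Before retrying, the orphaned zvol on the destination has to be cleaned up by hand, since (I assume) the next attempt would fail to allocate a disk with the same name. This is what I do:
Code:
root@zfsnoc4:~# zfs destroy raid50/iscsi/vm-100-disk-1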

Moving from one iSCSI storage to another doesn't work with the IET Target.
After further experiments, Proxmox corrupted my /etc/iet/ietd.conf
on the first or second ZFS storage server. For example, on zfsnoc4 it
was:
Code:
Target iqn.2016-01.ru.jinr.zfsnoc4:noc4iscsi
and after an incomplete move it may get rewritten to the target definition from zfsnoc3:
Code:
Target iqn.2016-01.ru.jinr.zfsnoc3:noc3iscsi
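To recover, I restore the correct Target line by hand and restart the target daemon (a restart drops any active sessions, so only do this when nothing is connected; in my case the file holds just the one line):
Code:
root@zfsnoc4:~# echo 'Target iqn.2016-01.ru.jinr.zfsnoc4:noc4iscsi' > /etc/iet/ietd.conf
root@zfsnoc4:~# service iscsitarget restart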
What's going wrong here?

Does Proxmox not support moving from one ZFS-over-iSCSI target to another with the IET Target?

As I wrote above, this was tested on Proxmox 4.2/4.3 with the IET Target,
but it seems it didn't work on Proxmox 4.1 either.

Summary of the test labs I tried:
- FreeBSD (and any FreeBSD-based NAS) on releases above 10.x: its iSCSI target is not supported by Proxmox
- IET Target (any Linux): does not have FULL support in Proxmox

Only the Solaris-based COMSTAR iSCSI target has full Proxmox support.

If that is so, which OS would you recommend:
- OpenIndiana (my preference)
- OmniOS
?

Thanks for the great work and a good VE.

Best regards,
--
lavr
