[SOLVED] PVE 3.2 ZFS plugin with Zfs-on-Linux

rahman
Nov 1, 2010
Hi,

I have a Debian server with ZFS on Linux and an IET iSCSI target. I followed the wiki. Here is my storage.cfg:

Code:
zfs: linux        blocksize 4k
        target iqn.2001-04.tr.xxx.xxx:elastics
        pool elastics
        iscsiprovider iet
        portal xxx.xxx.xxx.74
        content images

And Zfs list on storage server:

Code:
NAME                     USED  AVAIL  REFER  MOUNTPOINT
elastics                 782G  8.16T   318G  /elastics
elastics/backup          286G  8.16T   286G  /backup
elastics/logs           7.71G  8.16T  7.71G  /logs
elastics/mrtg           10.3M  8.16T  10.3M  /mrtg
elastics/vm-114-disk-1  34.0G  8.19T    72K  -
elastics/vm-114-disk-2  34.0G  8.19T    72K  -
elastics/vm-114-disk-3  34.0G  8.19T    72K  -
elastics/vm-114-disk-4  34.0G  8.19T    72K  -
elastics/vm-114-disk-5  34.0G  8.19T    72K  -
As you can see, when I try to add a disk on this ZFS storage it creates the zvol, but then fails with an error about the iSCSI target: No such file or directory. at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 376. (500)
Code:
Mar 13 09:38:15 kvm47 pvedaemon[4411]: <root@pam> update VM 114: -virtio1 linux:32
Mar 13 09:38:16 kvm47 pvedaemon[4411]: WARNING: Use of uninitialized value $tid in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 371.

And on storage server here is the error logs:
Code:
Mar 13 09:39:18 graylog2 kernel: [2504456.932896]  zd80: unknown partition table
Mar 13 09:39:19 graylog2 ietd: unable to create logical unit 0 in target 0: 2

So how can I solve this? As I understand it, the plugin cannot create the iSCSI LUN on the storage server.
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

What output does the following produce on the server running zfsonlinux?

cat /proc/net/iet/volume

What version of Debian and ZFS?

What kernel version?
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

Hi,

cat /proc/net/iet/volume outputs nothing

Debian is up-to-date wheezy and zfsonlinux is
0.6.2-4~wheezy

And kernel is Linux graylog2 3.2.0-4-amd64 #1 SMP Debian 3.2.54-2 x86_64 GNU/Linux
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

Do you have the iscsitarget package installed?

dpkg -s iscsitarget
dpkg -s iscsitarget-dkms
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

Yes,
Code:
root@graylog2:~# dpkg -s iscsitarget
Package: iscsitarget
Status: install ok installed
Priority: optional
Section: net
Installed-Size: 201
Maintainer: Debian iSCSI Maintainers <pkg-iscsi-maintainers@lists.alioth.debian.org>
Architecture: amd64
Version: 1.4.20.2-10.1
Depends: libc6 (>= 2.4), procps, lsb-base (>= 3.2-14)
Recommends: iscsitarget-module
Suggests: iscsitarget-dkms
Conflicts: iscsitarget-source
Conffiles:
 /etc/init.d/iscsitarget e5bc255cd838d1af59bd18b20edf7933
 /etc/default/iscsitarget 4f8c844068b8099cf961221641f3f321
 /etc/iet/initiators.allow 3ce3bc152af1e9949f4cb123798e0afd
 /etc/iet/ietd.conf a9a651c3223062f28f692b05dae37483
 /etc/iet/targets.allow 58f71f4c9349f35ec89b80286b256cba
Description: iSCSI Enterprise Target userland tools
 iSCSI Enterprise Target is for building an iSCSI storage system on
 Linux. It is aimed at developing an iSCSI target satisfying enterprise
 requirements.
 .
 This package contains the userland part; you require the kernel module
 for proper operation.
Homepage: http://iscsitarget.sourceforge.net/
root@graylog2:~# dpkg -s iscsitarget-dkms
Package: iscsitarget-dkms
Status: install ok installed
Priority: optional
Section: net
Installed-Size: 299
Maintainer: Debian iSCSI Maintainers <pkg-iscsi-maintainers@lists.alioth.debian.org>
Architecture: all
Source: iscsitarget
Version: 1.4.20.2-10.1
Depends: dkms (>= 1.95), make
Recommends: linux-headers
Conflicts: iscsitarget-source
Description: iSCSI Enterprise Target kernel module source - dkms version
 iSCSI Enterprise Target is for building an iSCSI storage system on
 Linux. It is aimed at developing an iSCSI target satisfying enterprise
 requirements.
 .
 This package provides the source code for the iscsitarget kernel module.
 The iscsitarget package is also required in order to make use of this
 module. Kernel source or headers are required to compile this module.
 .
 This package contains the source to be built with dkms.
Homepage: http://iscsitarget.sourceforge.net/

As my first post says, ietd throws this error when I try to add a disk via the Proxmox GUI:
Code:
Mar 13 09:39:18 graylog2 kernel: [2504456.932896]  zd80: unknown partition table
Mar 13 09:39:19 graylog2 ietd: unable to create logical unit 0 in target 0: 2
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

Silly me. The Target was commented out in ietd.conf. Sorry for the noise.
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

Spoke too soon. I stopped the VM and I can't start it anymore:

Code:
Mar 31 16:20:33 kvm47 pvedaemon[32088]: Could not find lu_name for zvol vm-127-disk-1 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 195.
Mar 31 16:20:33 kvm47 pvedaemon[29710]: <rduran@acu> end task UPID:kvm47:00007D58:0968E252:53396BA1:qmstart:127:rduran@acu: Could not find lu_name for zvol vm-127-disk-1 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 195.

It seems the lu_name lookup fails. Here are zfs list and the ietd volumes:

Code:
root@graylog2:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
elastics                 791G  8.15T   466G  /elastics
elastics/backup          287G  8.15T   287G  /backup
elastics/logs           19.1G  8.15T  19.1G  /logs
elastics/mrtg           10.3M  8.15T  10.3M  /mrtg
elastics/vm-127-disk-1  19.1G  8.17T    72K  -
Code:
root@graylog2:~# cat /proc/net/iet/volume
tid:1 name:iqn.2001-04.tr.edu.artvin:elastics
        lun:0 state:0 iotype:blockio iomode:wt blocks:37748736 blocksize:512 path:/dev/elastics/vm-127-disk-1
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

You did do the following exactly as written?

Platform notes
  • On all storage nodes the following should be added to sshd_config:
    • LookupClientHostnames no
    • VerifyReverseMapping no
    • GSSAPIAuthentication no
  • After libpve-storage-perl-3.0-18 the following procedure must be used. For all storage platforms, the distribution of root's ssh key is maintained through Proxmox's cluster-wide file system, which means you have to create the folder /etc/pve/priv/zfs. In this folder you place the ssh key to use for each ZFS storage; the key name follows the scheme <portal>_id_rsa. Portal is the value entered in the GUI wizard's portal field, so if a ZFS storage is referenced via the IP 192.168.1.1, that IP goes in the portal field and the key will be named 192.168.1.1_id_rsa. Creating the key is simple. As root, do the following:
    • mkdir /etc/pve/priv/zfs
    • ssh-keygen -f /etc/pve/priv/zfs/192.168.1.1_id_rsa
    • ssh-copy-id -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1
    • test that it works: ssh -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1. If you are logged in without errors, you are ready to use your storage.
    • The key creation is only needed once per portal, so if the same portal provides several targets used by several storages in Proxmox, you only create one key.
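The bullet steps above boil down to a few commands. The sketch below only prints them for review (a dry run), using the wiki's example portal IP 192.168.1.1 as a placeholder; substitute your storage server's address:

```shell
#!/bin/sh
# Dry run of the per-portal ssh key setup: prints the commands instead of
# executing them, so nothing here needs root. The portal IP is a placeholder.
portal=192.168.1.1
keydir=/etc/pve/priv/zfs                 # on pmxcfs, so shared cluster-wide
key="$keydir/${portal}_id_rsa"           # naming scheme: <portal>_id_rsa

echo "mkdir -p $keydir"
echo "ssh-keygen -f $key -N ''"          # empty passphrase: unattended logins
echo "ssh-copy-id -i $key root@$portal"
echo "ssh -i $key root@$portal true"     # must log in without errors
```

Since the key name is derived from the portal field, one key per portal is enough no matter how many targets that portal serves.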
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

I have made a clean install of Debian Wheezy with zfsonlinux to be absolutely sure of a clean installation. I was not able to replicate your problems. If, however, I put a wrong target name or wrong pool name into storage.cfg, I get errors similar to yours, so you should study your storage.cfg and make absolutely certain that the target and pool names match your storage.
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

You did do the following exactly as written?

Platform notes


  • On all storage nodes the following should be added to sshd_config:
    • LookupClientHostnames no
    • VerifyReverseMapping no
    • GSSAPIAuthentication no

Debian wheezy throws these errors:
Code:
/etc/ssh/sshd_config: line 89: Bad configuration option: LookupClientHostnames
/etc/ssh/sshd_config line 90: Deprecated option VerifyReverseMapping

Without those two options it works. And I have set up the ssh keys. As I said, creating a disk works, so there should not be any ssh misconfiguration, right?

Here is my storage.cfg:

Code:
zfs: zfsonlinux
        blocksize 4k
        target iqn.2001-04.tr.edu.artvin:elastics
        pool elastics
        iscsiprovider iet
        portal xxx.xxx.xxx.xxx
        content images

I really don't get how the plugin works. Does it use "ietadm" commands via ssh, or does it use "/etc/iet/ietd.conf"? I added "Target iqn.2001-04.tr.edu.artvin:elastics" at the end of "/etc/iet/ietd.conf". Then I restarted the iSCSI service, so the target was created with tid:1:
Code:
root@graylog2:~# cat /proc/net/iet/volume
tid:1 name:iqn.2001-04.tr.edu.artvin:elastics

Then I add a hard disk via the Proxmox GUI on this ZFS storage, and the LUN and the ZFS zvol are created successfully:
Code:
root@graylog2:~# cat /proc/net/iet/volume
tid:1 name:iqn.2001-04.tr.edu.artvin:elastics
        lun:0 state:0 iotype:blockio iomode:wt blocks:35651584 blocksize:512 path:/dev/elastics/vm-127-disk-1

root@graylog2:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
elastics/vm-127-disk-1  18.1G  8.17T    72K  -


But the problem is that when I start the VM it errors with "Could not find lu_name for zvol vm-127-disk-1". And when I check "/etc/iet/ietd.conf" after creating the first disk, the file has changed: the line I added ("Target iqn.2001-04.tr.edu.artvin:elastics") was deleted, and an extra line was added which is commented out:
Code:
#QueuedCommands         32              # Number of queued commands     Lun 0 Path=/dev/elastics/vm-127-disk-1,Type=blockio     Lun 0 Path=/dev/elastics/vm-127-disk-1,Type=blockio

Why is my ietd config erased? With this config change, if I restart the iSCSI service, all the target and LUN configuration is lost.

I have made a clean install of Debian Wheezy with zfsonlinux to be absolutely sure of a clean installation. I was not able to replicate your problems. If, however, I put a wrong target name or wrong pool name into storage.cfg, I get errors similar to yours, so you should study your storage.cfg and make absolutely certain that the target and pool names match your storage.

Can you share your storage.cfg and ietd.conf with a VM disk created?
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

You have manually changed ietd.conf at some point, because you have deleted the target reference. This is why the ZFSPlugin is not able to parse your config, in which case it cannot find your LUNs.
To fix the problem, replace this line:
Code:
#QueuedCommands         32              # Number of queued commands     Lun 0 Path=/dev/elastics/vm-127-disk-1,Type=blockio     Lun 0 Path=/dev/elastics/vm-127-disk-1,Type=blockio
with these lines:
Code:
#QueuedCommands         32              # Number of queued commands
Target iqn.2001-04.tr.edu.artvin:elastics
    Lun 0 Path=/dev/elastics/vm-127-disk-1,Type=blockio
After changing the file:
service iscsitarget restart

The plugin uses ietadm commands to make changes in real time without needing to restart ietd. /etc/iet/ietd.conf is used to persist the changes made in real time with ietadm, since changes made with ietadm do not survive a restart of ietd (or a reboot, for that matter).
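To make that concrete, the runtime changes correspond roughly to ietadm invocations like the ones sketched below. This is an illustration of the mechanism, not the plugin's literal code; the tid, target, and zvol path are taken from the /proc/net/iet/volume output earlier in the thread, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: the kind of ietadm calls issued at runtime (printed, not executed).
# Values mirror the /proc/net/iet/volume output above; not the plugin's code.
tid=1
target=iqn.2001-04.tr.edu.artvin:elastics
zvol=/dev/elastics/vm-127-disk-1

add_target="ietadm --op new --tid=$tid --params Name=$target"
add_lun="ietadm --op new --tid=$tid --lun=0 --params Path=$zvol,Type=blockio"

echo "$add_target"                # create the target in the running ietd
echo "$add_lun"                   # attach the zvol as LUN 0, blockio mode
echo "cat /proc/net/iet/volume"   # verify the live state
```

Anything created this way vanishes when ietd restarts, which is exactly why the plugin also rewrites /etc/iet/ietd.conf.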
 
Re: PVE 3.2 ZFS plugin with Zfs-on-Linux

Thank you, mir, for your help. That did the trick.
 
