Proxmox VE 3.3 and FreeNAS 9.2.7 ZFS via iSCSI

MisterIX

Oct 13, 2014
Dear forum guests,

I'm testing my Proxmox server against a FreeNAS ZFS volume via iSCSI. What worked fine so far: create a dataset in FreeNAS, link a file extent into an iSCSI target, mount that target in Proxmox, and create an LVM on top of it. That runs fine, but I cannot use the snapshot function in Proxmox, because the disks are always created as raw images.

Now I tried to follow the configuration from this page: http://pve.proxmox.com/wiki/Storage:_ZFS .

The manual proposes the following:
- on freenas add some entries to /etc/ssh/sshd_config (done)
- on proxmox ve server make directory /etc/pve/priv/zfs and create a key with ssh-keygen (done)
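For anyone following along, the key-creation step looks roughly like this (a sketch; the IP is the example portal address from this thread, and the key must be named <portal-IP>_id_rsa with no passphrase so the plugin can use it non-interactively):

```shell
# Run on the Proxmox node. 192.168.1.1 stands in for the storage portal IP.
mkdir -p /etc/pve/priv/zfs
# Generate a passphrase-less RSA keypair named after the portal IP
ssh-keygen -t rsa -N '' -f /etc/pve/priv/zfs/192.168.1.1_id_rsa
```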

The following step caused some problems:
ssh-copy-id -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1

The first thing I had to learn is that FreeNAS has a special setting for password authentication via SSH:

[Screenshot: Freenas_ssh.jpg]


OK, once that was set I ran ssh-copy-id -i (with the correct IP) and received a "Connection closed by 192.168.1.1".

Now when I do


ssh -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1

from my Proxmox server I receive:

Last login: Fri Oct 17 11:21:05 2014 from 192.168.1.220
FreeBSD 9.2-RELEASE-p10 (FREENAS.amd64) #0 r262572+4fb5adc: Wed Aug 6 17:07:16 PDT 2014

FreeNAS (c) 2009-2014, The FreeNAS Development Team
All rights reserved.
FreeNAS is released under the modified BSD license.

For more information, documentation, help or support, go here:
http://freenas.org
Welcome to FreeNAS
[root@freenas] ~#

So, that looked like it worked. But when I mount the ZFS in Proxmox via the ZFS plugin,
I cannot access the ZFS storage. I used the following entries in the plugin:

ID: an arbitrary name, e.g. ZFS-Dataset
Portal: the IP of my storage server, e.g. 192.168.1.1
Pool: the name of my first ZFS volume, e.g. ZFSRAID
Block Size: 4k
Target: e.g. iqn.vmpool.mydomain.local

Target Group, Host group: left empty

Nodes: All
Enable: checked
iSCSI Provider: istgt

Thin provision and Write cache are checked. The ZFS-Dataset then shows up under storage,
but when I check my ZFS-Dataset on the server node I see the following:

[Screenshot: Proxmox_failure.jpg]

I hope it's readable; the graphic seems a little small in the editor. Well, that is the situation, and I wonder
if someone has had similar difficulties. Any help will be greatly appreciated.
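For reference, the plugin settings listed above should end up in /etc/pve/storage.cfg as an entry roughly like the one below (a sketch only; the option names are my reading of the Storage: ZFS wiki page for PVE 3.x, not copied from a working setup):

```
zfs: ZFS-Dataset
        portal 192.168.1.1
        target iqn.vmpool.mydomain.local
        pool ZFSRAID
        blocksize 4k
        iscsiprovider istgt
        content images
        sparse
```

If the GUI-created entry looks substantially different from this, that in itself might be a hint about what is going wrong.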

With kind regards, MisterIX.






 
First, be aware of this: when creating volumes from the command line, those volumes will not show up in FreeNAS, since the FreeNAS GUI works with an internal database. So anything not created within the FreeNAS GUI is unknown to FreeNAS.

If you log in to FreeNAS, what does the following show:
zpool status
zfs list
 
Hello mir,

I did not create a volume from the command line. I used the ZFS Volume Manager to create a RAID-Z2 volume. After that I created a dataset that is meant to hold the extent for the iSCSI target. Everything via the GUI.

zpool status shows:

pool: MYRaidZ2
state: ONLINE
scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        MYRaidZ2                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/50d8265c-4e1f-11e4-8590-a0369f4c3d34  ONLINE       0     0     0
            gptid/50ed841b-4e1f-11e4-8590-a0369f4c3d34  ONLINE       0     0     0
            gptid/51029f8c-4e1f-11e4-8590-a0369f4c3d34  ONLINE       0     0     0
            gptid/5117611e-4e1f-11e4-8590-a0369f4c3d34  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/512c8c54-4e1f-11e4-8590-a0369f4c3d34  ONLINE       0     0     0
            gptid/5142379c-4e1f-11e4-8590-a0369f4c3d34  ONLINE       0     0     0
            gptid/51597057-4e1f-11e4-8590-a0369f4c3d34  ONLINE       0     0     0
            gptid/516f8f51-4e1f-11e4-8590-a0369f4c3d34  ONLINE       0     0     0

errors: No known data errors

zfs list returns:
NAME                      USED  AVAIL  REFER  MOUNTPOINT
MYRaidZ2                 5.37G  10.4T   232K  /mnt/MYRaidZ2
MYRaidZ2/.system         1.85M  10.4T   256K  /mnt/MYRaidZ2/.system
MYRaidZ2/.system/cores    209K  10.4T   209K  /mnt/MYRaidZ2/.system/cores
MYRaidZ2/.system/rrd      209K  10.4T   209K  /mnt/MYRaidZ2/.system/rrd
MYRaidZ2/.system/samba4   552K  10.4T   552K  /mnt/MYRaidZ2/.system/samba4
MYRaidZ2/.system/syslog   668K  10.4T   668K  /mnt/MYRaidZ2/.system/syslog
MYRaidZ2/Dataset1        5.37G  10.4T  5.37G  /mnt/MYRaidZ2/Dataset1

With kind regards, MisterIX.
 
Hi mir,

thank you very much for this hint. I'll keep up with the topic during next week.

Have a nice weekend! MisterIX
 
Hello mir,

it took me a while to set up the Proxmox host correctly, with all the RAID functionality I wanted. In the end I reinstalled on a hardware RAID controller. Now I'm again at the point where I want to wed my two servers via iSCSI and ZFS. I hadn't touched my FreeNAS in the meantime, and now I changed the extent to a zvol and created a new target/extent based on the newly created device.

Now when I again try to copy the new SSH key:

ssh-copy-id -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1

I receive the message: Ambiguous output redirect.

root@v-server:~# ssh-copy-id -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1

The authenticity of host '192.168.1.1 (192.168.1.1)' can't be established.
ECDSA key fingerprint is 58:f9:71:c6:6a:4d:33:4e:81:55:b0:b7:fb:54:0a:89.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.1' (ECDSA) to the list of known hosts.
root@192.168.1.1's password:******
Ambiguous output redirect.

I couldn't find a real solution for this problem yet. Last time it seems I overcame it by accident. I thought it might have to do with the fact that I had copied an older version of 192.168.1.1_id_rsa.pub to the root of my FreeNAS, but it never seems to have arrived there. Although there is a folder /.ssh in my FreeNAS root, it doesn't contain an "authorized_keys" file.

Have you maybe got an idea what is going wrong at this point of the configuration?

With kind regards, MisterIX.
 
Hello mir and other interested readers,

Ok, so I DID find a workaround for copying the IP_id_rsa.pub into the /.ssh/authorized_keys file on my freenas. 2nd BIG thing I had to learn on freenas is that the whole filesystem is mounted read only.
Following course of action was working for me:

On FreeNAS run: mount -uw /
This puts the filesystem into writable mode.

Instead of using ssh-copy-id, use:
cat /etc/pve/priv/zfs/192.168.1.1_id_rsa.pub | ssh root@192.168.1.1 'umask 077; cat >>.ssh/authorized_keys'

Finally, put FreeNAS back into read-only mode: mount -ur /
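To summarize the workaround as one sequence (a sketch; the mkdir -p and the strict umask are my additions, since sshd normally refuses keys when ~/.ssh or authorized_keys are missing or too widely writable):

```
# On FreeNAS: make the root filesystem writable
mount -uw /

# On the Proxmox node: append the public key to root's authorized_keys on FreeNAS
cat /etc/pve/priv/zfs/192.168.1.1_id_rsa.pub \
  | ssh root@192.168.1.1 'umask 077; mkdir -p .ssh; cat >> .ssh/authorized_keys'

# On FreeNAS: back to read-only
mount -ur /
```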

In the next step I will test if I'm now able to mount a ZFS storage via iSCSI with the ZFS plugin.

With kind regards, MisterIX.
 
Hello mir and to everyone interested,

I was now successful in mounting the zvol via iSCSI and the ZFS plugin. I followed the recommendations on http://pve.proxmox.com/wiki/Storage:_ZFS and chose istgt as iSCSI provider. Unfortunately, I could not create a VM, though. After trying to add a test VM I received the following error:

TASK ERROR: create failed - Modification of a read-only value attempted at /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm line 288.

Strangely, the VM disk was created and is visible both in Proxmox under storage content and in FreeNAS under storage consumption.


Again I'm a little helpless. Has anyone seen this error before? Again, my setup for the ZFS plugin:

[Screenshot: ZFS_Plugin.jpg]


Thank you again and with kind regards, MisterIX.
 
Are you sure about the error output? The line reference makes no sense in relation to the mentioned file.

Could you paste the content of the file at that line?

And also the contents of this file on the Freenas: /usr/local/etc/istgt/istgt.conf
 
Could you try the following on every host and see if that solves your problem?

1) Create file /tmp/Fix_read_only_bug.patch with the following content
Code:
--- /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm    2014-09-10 14:21:47.000000000 +0200
+++ /tmp/Istgt.pm    2014-10-22 18:11:31.024687820 +0200
@@ -285,13 +285,14 @@
             next if (($_ =~ /^\s*#/) || ($_ =~ /^\s*$/));
             if ($_ =~ /^\s*(\w+)\s+(.+)\s*/) {
                 my $arg1 = $1;
-                $2 =~ s/^\s+|\s+$|"\s*//g;
-                if ($2 =~ /^Storage\s*(.+)/i) {
+                my $arg2 = $2;
+        $arg2 =~ s/^\s+|\s+$|"\s*//g;
+                if ($arg2 =~ /^Storage\s*(.+)/i) {
                     $SETTINGS->{$lun}->{$arg1}->{storage} = $1;
-                } elsif ($2 =~ /^Option\s*(.+)/i) {
+                } elsif ($arg2 =~ /^Option\s*(.+)/i) {
                     push @{$SETTINGS->{$lun}->{$arg1}->{options}}, $1;
                 } else {
-                    $SETTINGS->{$lun}->{$arg1} = $2;
+                    $SETTINGS->{$lun}->{$arg1} = $arg2;
                 }
             } else {
                 die "$line: parse error [$_]";
2) sudo cp /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm.old
3) sudo patch /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm /tmp/Fix_read_only_bug.patch
4) sudo /usr/sbin/service pvedaemon restart
5) sudo /usr/sbin/service pveproxy restart
6) sudo /usr/sbin/service pvestatd restart

If this does not help:
1) sudo cp /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm.old /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm
2) sudo /usr/sbin/service pvedaemon restart
3) sudo /usr/sbin/service pveproxy restart
4) sudo /usr/sbin/service pvestatd restart
 
Dear mir,

thank you very much for your answer. I must say I'm a bit impressed with your patch code. It looks a little like a piece of art that one could beam onto a large house wall with a projector, and no one would think... :p

Anyway, I tried to apply the patch but got an error message in return:

root@v-server:/tmp# patch /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm /tmp/Fix_read_only_bug.patch

patching file /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm
Hunk #1 FAILED at 285.
1 out of 1 hunk FAILED -- saving rejects to file /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm.rej

The Istgt.pm.rej contains the code of your patch; just the first two lines differ:

--- Istgt.pm 2014-09-10 14:21:47.000000000 +0200
+++ Istgt.pm 2014-10-22 18:11:31.024687820 +0200

I guess that is not so important.

I now applied the patch manually from line 285 in Istgt.pm and tried to create a virtual machine on the ZFS storage. It resulted in an error message:

TASK ERROR: create failed - iqn.vmpool.mydomain.local: Target not found at /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm line 344.

With kind regards,

MisterIX.
 
To be absolutely sure nothing wrong is in the file, try replacing your file with this one:
Code:
package PVE::Storage::LunCmd::Istgt;


# TODO
# Create initial target and LUN if target is missing ?
# Create and use list of free LUNs


use strict;
use warnings;
use PVE::Tools qw(run_command file_read_firstline trim dir_glob_regex dir_glob_foreach);
use Data::Dumper;


my @CONFIG_FILES = (
    '/usr/local/etc/istgt/istgt.conf',  # FreeBSD, FreeNAS
    '/var/etc/iscsi/istgt.conf'         # NAS4Free
);
my @DAEMONS = (
    '/usr/local/etc/rc.d/istgt',        # FreeBSD, FreeNAS
    '/var/etc/rc.d/istgt'               # NAS4Free
);


# A logical unit can max have 63 LUNs
# https://code.google.com/p/istgt/source/browse/src/istgt_lu.h#39
my $MAX_LUNS = 64;


my $CONFIG_FILE = undef;
my $DAEMON = undef;
my $SETTINGS = undef;
my $CONFIG = undef;
my $OLD_CONFIG = undef;


my @ssh_opts = ('-o', 'BatchMode=yes');
my @ssh_cmd = ('/usr/bin/ssh', @ssh_opts);
my @scp_cmd = ('/usr/bin/scp', @ssh_opts);
my $id_rsa_path = '/etc/pve/priv/zfs';


#Current SIGHUP reload limitations (http://www.peach.ne.jp/archives/istgt/):
#
#    The parameters other than PG, IG, and LU are not reloaded by SIGHUP.
#    LU connected by the initiator can't be reloaded by SIGHUP.
#    PG and IG mapped to LU can't be deleted by SIGHUP.
#    If you delete an active LU, all connections of the LU are closed by SIGHUP.
#    Updating IG is not affected until the next login.
#
# FreeBSD
# 1. Alt-F2 to change to native shell (zfsguru)
# 2. pw mod user root -w yes (change password for root to root)
# 3. vi /etc/ssh/sshd_config
# 4. uncomment PermitRootLogin yes
# 5. change PasswordAuthentication no to PasswordAuthentication yes
# 5. /etc/rc.d/sshd restart
# 6. On one of the proxmox nodes login as root and run: ssh-copy-id ip_freebsd_host
# 7. vi /etc/ssh/sshd_config
# 8. comment PermitRootLogin yes
# 9. change PasswordAuthentication yes to PasswordAuthentication no
# 10. /etc/rc.d/sshd restart
# 11. Reset passwd -> pw mod user root -w no
# 12. Alt-Ctrl-F1 to return to zfsguru shell (zfsguru)


sub get_base;
sub run_lun_command;


my $read_config = sub {
    my ($scfg, $timeout, $method) = @_;


    my $msg = '';
    my $err = undef;
    my $luncmd = 'cat';
    my $target;
    $timeout = 10 if !$timeout;


    my $output = sub {
    my $line = shift;
    $msg .= "$line\n";
    };


    my $errfunc = sub {
    my $line = shift;
    $err .= "$line";
    };


    $target = 'root@' . $scfg->{portal};


    my $daemon = 0;
    foreach my $config (@CONFIG_FILES) {
        $err = undef;
        my $cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $config];
        eval {
            run_command($cmd, outfunc => $output, errfunc => $errfunc, timeout => $timeout);
        };
        do {
            $err = undef;
            $DAEMON = $DAEMONS[$daemon];
            $CONFIG_FILE = $config;
            last;
        } unless $@;
        $daemon++;
    }
    die $err if ($err && $err !~ /No such file or directory/);
    die "No configuration found. Install istgt on $scfg->{portal}" if $msg eq '';


    return $msg;
};


my $get_config = sub {
    my ($scfg) = @_;
    my @conf = undef;


    my $config = $read_config->($scfg, undef, 'get_config');
    die "Missing config file" unless $config;


    $OLD_CONFIG = $config;


    return $config;
};


my $parse_size = sub {
    my ($text) = @_;


    return 0 if !$text;


    if ($text =~ m/^(\d+(\.\d+)?)([TGMK]B)?$/) {
    my ($size, $reminder, $unit) = ($1, $2, $3);
    return $size if !$unit;
    if ($unit eq 'KB') {
        $size *= 1024;
    } elsif ($unit eq 'MB') {
        $size *= 1024*1024;
    } elsif ($unit eq 'GB') {
        $size *= 1024*1024*1024;
    } elsif ($unit eq 'TB') {
        $size *= 1024*1024*1024*1024;
    }
        if ($reminder) {
            $size = ceil($size);
        }
        return $size;
    } elsif ($text =~ /^auto$/i) {
        return 'AUTO';
    } else {
        return 0;
    }
};


my $size_with_unit = sub {
    my ($size, $n) = (shift, 0);


    return '0KB' if !$size;


    return $size if $size eq 'AUTO';


    if ($size =~ m/^\d+$/) {
        ++$n and $size /= 1024 until $size < 1024;
        if ($size =~ /\./) {
            return sprintf "%.2f%s", $size, ( qw[bytes KB MB GB TB] )[ $n ];
        } else {
            return sprintf "%d%s", $size, ( qw[bytes KB MB GB TB] )[ $n ];
        }
    }
    die "$size: Not a number";
};


my $lun_dumper = sub {
    my ($lun) = @_;
    my $config = '';


    $config .= "\n[$lun]\n";
    $config .=  'TargetName ' . $SETTINGS->{$lun}->{TargetName} . "\n";
    $config .=  'Mapping ' . $SETTINGS->{$lun}->{Mapping} . "\n";
    $config .=  'AuthGroup ' . $SETTINGS->{$lun}->{AuthGroup} . "\n";
    $config .=  'UnitType ' . $SETTINGS->{$lun}->{UnitType} . "\n";
    $config .=  'QueueDepth ' . $SETTINGS->{$lun}->{QueueDepth} . "\n";


    foreach my $conf (@{$SETTINGS->{$lun}->{luns}}) {
        $config .=  "$conf->{lun} Storage " . $conf->{Storage};
        $config .= ' ' . $size_with_unit->($conf->{Size}) . "\n";
        foreach ($conf->{options}) {
            if ($_) {
                $config .=  "$conf->{lun} Option " . $_ . "\n";
            }
        }
    }
    $config .= "\n";


    return $config;
};


my $get_lu_name = sub {
    my ($target) = @_;
    my $used = ();
    my $i;


    if (! exists $SETTINGS->{$target}->{used}) {
        for ($i = 0; $i < $MAX_LUNS; $i++) {
            $used->{$i} = 0;
        }
        foreach my $lun (@{$SETTINGS->{$target}->{luns}}) {
            $lun->{lun} =~ /^LUN(\d+)$/;
            $used->{$1} = 1;
        }
        $SETTINGS->{$target}->{used} = $used;
    }


    $used = $SETTINGS->{$target}->{used};
    for ($i = 0; $i < $MAX_LUNS; $i++) {
        last unless $used->{$i};
    }
    $SETTINGS->{$target}->{used}->{$i} = 1;


    return "LUN$i";
};


my $init_lu_name = sub {
    my ($target) = @_;
    my $used = ();


    if (! exists($SETTINGS->{$target}->{used})) {
        for (my $i = 0; $i < $MAX_LUNS; $i++) {
            $used->{$i} = 0;
        }
        $SETTINGS->{$target}->{used} = $used;
    }
    foreach my $lun (@{$SETTINGS->{$target}->{luns}}) {
        $lun->{lun} =~ /^LUN(\d+)$/;
        $SETTINGS->{$target}->{used}->{$1} = 1;
    }
};


my $free_lu_name = sub {
    my ($target, $lu_name) = @_;


    $lu_name =~ /^LUN(\d+)$/;
    $SETTINGS->{$target}->{used}->{$1} = 0;
};


my $make_lun = sub {
    my ($scfg, $path) = @_;


    my $target = $SETTINGS->{current};
    die 'Maximum number of LUNs per target is 63' if scalar @{$SETTINGS->{$target}->{luns}} >= $MAX_LUNS;


    my @options = ();
    my $lun = $get_lu_name->($target);
    if ($scfg->{nowritecache}) {
        push @options, "WriteCache Disable";     
    }
    my $conf = {
        lun => $lun,
        Storage => $path,
        Size => 'AUTO',
        options => @options,
    };
    push @{$SETTINGS->{$target}->{luns}}, $conf;


    return $conf->{lun};
};


my $parser = sub {
    my ($scfg) = @_;


    my $lun = undef;
    my $line = 0;


    my $config = $get_config->($scfg);
    my @cfgfile = split "\n", $config;


    foreach (@cfgfile) {
        $line++;
        if ($_ =~ /^\s*\[(PortalGroup\d+)\]\s*/) {
            $lun = undef;
            $SETTINGS->{$1} = ();
        } elsif ($_ =~ /^\s*\[(InitiatorGroup\d+)\]\s*/) {
            $lun = undef;
            $SETTINGS->{$1} = ();
        } elsif ($_ =~ /^\s*PidFile\s+"?([\w\/\.]+)"?\s*/) {
            $lun = undef;
            $SETTINGS->{pidfile} = $1;
        } elsif ($_ =~ /^\s*NodeBase\s+"?([\w\-\.]+)"?\s*/) {
            $lun = undef;
            $SETTINGS->{nodebase} = $1;
        } elsif ($_ =~ /^\s*\[(LogicalUnit\d+)\]\s*/) {
            $lun = $1;
            $SETTINGS->{$lun} = ();
            $SETTINGS->{targets}++;
        } elsif ($lun) {
            next if (($_ =~ /^\s*#/) || ($_ =~ /^\s*$/));
            if ($_ =~ /^\s*(\w+)\s+(.+)\s*/) {
                my $arg1 = $1;
                my $arg2 = $2;
		        $arg2 =~ s/^\s+|\s+$|"\s*//g;
                if ($arg2 =~ /^Storage\s*(.+)/i) {
                    $SETTINGS->{$lun}->{$arg1}->{storage} = $1;
                } elsif ($arg2 =~ /^Option\s*(.+)/i) {
                    push @{$SETTINGS->{$lun}->{$arg1}->{options}}, $1;
                } else {
                    $SETTINGS->{$lun}->{$arg1} = $arg2;
                }
            } else {
                die "$line: parse error [$_]";
            }
        }
        $CONFIG .= "$_\n" unless $lun;
    }


    $CONFIG =~ s/\n$//;
    die "$scfg->{target}: Target not found" unless $SETTINGS->{targets};
    my $max = $SETTINGS->{targets};
    my $base = get_base;


    for (my $i = 1; $i <= $max; $i++) {
        my $target = $SETTINGS->{nodebase}.':'.$SETTINGS->{"LogicalUnit$i"}->{TargetName};
        if ($target eq $scfg->{target}) {
            my $lu = ();
            while ((my $key, my $val) = each(%{$SETTINGS->{"LogicalUnit$i"}})) {
                if ($key =~ /^LUN\d+/) {
                    $val->{storage} =~ /^([\w\/\-]+)\s+(\w+)/;
                    my $storage = $1;
                    my $size = $parse_size->($2);
                    my $conf = undef;
                    my @options = ();
                    if ($val->{options}) {
                        @options = @{$val->{options}};
                    }
                    if ($storage =~ /^$base\/$scfg->{pool}\/([\w\-]+)$/) {
                        $conf = {
                            lun => $key,
                            Storage => $storage,
                            Size => $size,
                            options => @options,
                        }
                    }
                    push @$lu, $conf if $conf;
                    delete $SETTINGS->{"LogicalUnit$i"}->{$key};
                }
            }
            $SETTINGS->{"LogicalUnit$i"}->{luns} = $lu;
            $SETTINGS->{current} = "LogicalUnit$i";
            $init_lu_name->("LogicalUnit$i");
        } else {
            $CONFIG .= $lun_dumper->("LogicalUnit$i");
            delete $SETTINGS->{"LogicalUnit$i"};
            $SETTINGS->{targets}--;
        }
    }
    die "$scfg->{target}: Target not found" unless $SETTINGS->{targets} > 0;
};


my $list_lun = sub {
    my ($scfg, $timeout, $method, @params) = @_;
    my $name = undef;


    my $object = $params[0];
    for my $key (keys %$SETTINGS)  {
        next unless $key =~ /^LogicalUnit\d+$/;
        foreach my $lun (@{$SETTINGS->{$key}->{luns}}) {
            if ($lun->{Storage} =~ /^$object$/) {
                return $lun->{Storage};
            }
        }
    }


    return $name;
};


my $create_lun = sub {
    my ($scfg, $timeout, $method, @params) = @_;
    my $res = ();
    my $file = "/tmp/config$$";


    if ($list_lun->($scfg, $timeout, $method, @params)) {
        die "$params[0]: LUN exists";
    }
    my $lun = $params[0];
    $lun = $make_lun->($scfg, $lun);
    my $config = $lun_dumper->($SETTINGS->{current});
    open(my $fh, '>', $file) or die "Could not open file '$file' $!";


    print $fh $CONFIG;
    print $fh $config;
    close $fh;
    @params = ($CONFIG_FILE);
    $res = {
        cmd => 'scp',
        method => $file,
        params => \@params,
        msg => $lun,
        post_exe => sub {
            unlink $file;
        },
    };


    return $res;
};


my $delete_lun = sub {
    my ($scfg, $timeout, $method, @params) = @_;
    my $res = ();
    my $file = "/tmp/config$$";


    my $target = $SETTINGS->{current};
    my $luns = ();


    foreach my $conf (@{$SETTINGS->{$target}->{luns}}) {
        if ($conf->{Storage} =~ /^$params[0]$/) {
            $free_lu_name->($target, $conf->{lun});
        } else {
            push @$luns, $conf;
        }
    }
    $SETTINGS->{$target}->{luns} = $luns;


    my $config = $lun_dumper->($SETTINGS->{current});
    open(my $fh, '>', $file) or die "Could not open file '$file' $!";


    print $fh $CONFIG;
    print $fh $config;
    close $fh;
    @params = ($CONFIG_FILE);
    $res = {
        cmd => 'scp',
        method => $file,
        params => \@params,
        post_exe => sub {
            unlink $file;
            run_lun_command($scfg, undef, 'add_view', 'restart');
        },
    };


    return $res;
};


my $import_lun = sub {
    my ($scfg, $timeout, $method, @params) = @_;


    my $res = $create_lun->($scfg, $timeout, $method, @params);


    return $res;
};


my $add_view = sub {
    my ($scfg, $timeout, $method, @params) = @_;
    my $cmdmap;


    if (@params && $params[0] eq 'restart') {
        @params = ('onerestart', '>&', '/dev/null');
        $cmdmap = {
            cmd => 'ssh',
            method => $DAEMON,
            params => \@params,
        };
    } else {
        @params = ('-HUP', '`cat '. "$SETTINGS->{pidfile}`");
        $cmdmap = {
            cmd => 'ssh',
            method => 'kill',
            params => \@params,
        };
    }


    return $cmdmap;
};


my $modify_lun = sub {
    my ($scfg, $timeout, $method, @params) = @_;


    # Current SIGHUP reload limitations
    # LU connected by the initiator can't be reloaded by SIGHUP.
    # Until above limitation persists modifying a LUN will require
    # a restart of the daemon breaking all current connections
    #die 'Modify a connected LUN is not currently supported by istgt';
    @params = ('restart', @params);


    return $add_view->($scfg, $timeout, $method, @params);
};


my $list_view = sub {
    my ($scfg, $timeout, $method, @params) = @_;
    my $lun = undef;


    my $object = $params[0];
    for my $key (keys %$SETTINGS)  {
        next unless $key =~ /^LogicalUnit\d+$/;
        foreach my $lun (@{$SETTINGS->{$key}->{luns}}) {
            if ($lun->{Storage} =~ /^$object$/) {
                if ($lun->{lun} =~ /^LUN(\d+)/) {
                    return $1;
                }
                die "$lun->{Storage}: Missing LUN";
            }
        }
    }


    return $lun;
};


my $get_lun_cmd_map = sub {
    my ($method) = @_;


    my $cmdmap = {
        create_lu   => { cmd => $create_lun },
        delete_lu   => { cmd => $delete_lun },
        import_lu   => { cmd => $import_lun },
        modify_lu   => { cmd => $modify_lun },
        add_view    => { cmd => $add_view },
        list_view   => { cmd => $list_view },
        list_lu     => { cmd => $list_lun },
    };


    die "unknown command '$method'" unless exists $cmdmap->{$method};


    return $cmdmap->{$method};
};


sub run_lun_command {
    my ($scfg, $timeout, $method, @params) = @_;


    my $msg = '';
    my $luncmd;
    my $target;
    my $cmd;
    my $res;
    $timeout = 10 if !$timeout;
    my $is_add_view = 0;


    my $output = sub {
    my $line = shift;
    $msg .= "$line\n";
    };


    $target = 'root@' . $scfg->{portal};


    $parser->($scfg) unless $SETTINGS;
    my $cmdmap = $get_lun_cmd_map->($method);
    if ($method eq 'add_view') {
        $is_add_view = 1 ;
        $timeout = 15;
    }
    if (ref $cmdmap->{cmd} eq 'CODE') {
        $res = $cmdmap->{cmd}->($scfg, $timeout, $method, @params);
        if (ref $res) {
            $method = $res->{method};
            @params = @{$res->{params}};
            if ($res->{cmd} eq 'scp') {
                $cmd = [@scp_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $method, "$target:$params[0]"];
            } else {
                $cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $method, @params];
            }
        } else {
            return $res;
        }
    } else {
        $luncmd = $cmdmap->{cmd};
        $method = $cmdmap->{method};
        $cmd = [@ssh_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $target, $luncmd, $method, @params];
    }


    eval {
        run_command($cmd, outfunc => $output, timeout => $timeout);
    };
    if ($@ && $is_add_view) {
        my $err = $@;
        if ($OLD_CONFIG) {
            my $err1 = undef;
            my $file = "/tmp/config$$";
            open(my $fh, '>', $file) or die "Could not open file '$file' $!";
            print $fh $OLD_CONFIG;
            close $fh;
            $cmd = [@scp_cmd, '-i', "$id_rsa_path/$scfg->{portal}_id_rsa", $file, $CONFIG_FILE];
            eval {
                run_command($cmd, outfunc => $output, timeout => $timeout);
            };
            $err1 = $@ if $@;
            unlink $file;
            die "$err\n$err1" if $err1;
            eval {
                run_lun_command($scfg, undef, 'add_view', 'restart');
            };
            die "$err\n$@" if ($@);
        }
        die $err;
    } elsif ($@) {
        die $@;
    } elsif ($is_add_view) {
        $OLD_CONFIG = undef;
    }


    if ($res->{post_exe} && ref $res->{post_exe} eq 'CODE') {
        $res->{post_exe}->();
    }


    if ($res->{msg}) {
        $msg = $res->{msg};
    }


    return $msg;
}


sub get_base {
    return '/dev/zvol';
}


1;
 
Hi mir,

I also corrected my target on the FreeNAS to conform exactly to the specification: iqn.2014-10.local.domain.my:storage, because I saw a warning message in my FreeNAS console. I tested with and without the new Istgt.pm, but the errors seem to be consistent.

With the new Istgt.pm I set up the ZFS plugin in two different ways, with two different results, by changing the entry for "Pool:". Once I chose MyRaidZ2 (the highest level), because in the past I had seen "Parent is no directory" errors when I entered the correct zvol here, where the iSCSI target actually points. Then I tried adding the zvol name here.

With the new Istgt.pm I received two different errors, depending on whether I added the zvol name or not:

Without adding the zvol name I received: TASK ERROR: create failed - iqn.2014-10.local.my.domain:storage: Target not found at /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm line 392.

With the zvol name I received: cannot create 'WEDORaidZ2/STORAGE/vm-100-disk-1': parent is not a filesystem
TASK ERROR: create failed - command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1 zfs create -s -b 4k -V 524288000k MyRaidZ2/STORAGE/vm-100-disk-1' failed: exit code 1

The content of /usr/local/etc/istgt/istgt.conf is attached as a text file.

Thank you again, MisterIX.

[Attachment: istgt.conf.txt]
 
I found out what is wrong. Apart from the changes to the code, there is also an error in your istgt.conf file.
1) You have: NodeBase "iqn.mydisk1.mydomain.local" Should be: NodeBase "iqn.2014-10.local.my.domain"
2) You have: TargetName "iqn.2014-10.local.my.domain:storage" Should be: TargetName "storage"

The target is composed of NodeBase + ':' + TargetName.

If you make the corrections above to your istgt.conf file it should work.
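Put differently, the plugin reconstructs the full target name by concatenating the two values, so after the corrections the pieces line up like this (a small sketch; the istgt.conf lines are shown as comments):

```shell
# istgt.conf after the fix (illustrative):
#   NodeBase   "iqn.2014-10.local.my.domain"
#   TargetName "storage"
# The ZFS plugin joins them as NodeBase + ':' + TargetName; the result must
# equal the Target field configured in the Proxmox storage entry.
node_base='iqn.2014-10.local.my.domain'
target_name='storage'
echo "${node_base}:${target_name}"   # iqn.2014-10.local.my.domain:storage
```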
 
I get the following error:

command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/10.1.0.252_id_rsa root@10.1.0.252 zfs get -o value -Hp available,used proxvol1' failed: exit code 1
 
Hello Norman,

I didn't really get it to work, and given the time constraints I chose to use normal iSCSI connections for the pure data volumes and an NFS share for the virtual hard disks. I still had trouble with the NFS share, though. I had chosen a virtio disk in qcow2 format with cache set to write-through. That caused slightly different behaviour of my Ubuntu Linux machine in memory management, so I recommend local storage or iSCSI. It took me weeks to find out what caused my server to crash regularly.

Kind regards, Jens.
 
My current setup has 2 nodes and a FreeNAS box.
All are connected over LACP with two NICs bonded.
Currently the ZFS plugin is not working for me, so I am running everything over NFS.
I have 3 KVM VMs running (two Win2003 servers and 1 Asterisk phone server) and two OpenVZ instances (one of which has Oracle running in it).

Of course, for the KVM machines I am not currently using all the features that could be used with ZFS storage (the snapshot feature of the ZFS filesystem).
And maybe I am also missing a speed increase, as I think KVM would run faster over iSCSI.

Is there some way we can debug the module together to get it to work with FreeNAS/FreeBSD?
 
