[SOLVED] ZFS over iscsi auth issues

May 28, 2020
South Africa
Hi all,

Running a PVE7 cluster with 9 nodes.

Am in the process of setting up block storage on the cluster to a Pure flash array.

I have the iSCSI connectivity in place and working.

My issue is that Proxmox is trying to connect to the Pure appliance as the 'root' user, which is not allowed by the appliance manufacturer:

root@proxmox-host-08:/# /usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/100.64.10.47_id_rsa root@100.64.10.47
root@100.64.10.47: Permission denied (publickey,password).
root@proxmox-host-08:/#


Using a user created on the Pure appliance:

root@proxmox-host-08:/# /usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/100.64.10.47_id_rsa block@100.64.10.47
Last login: Thu Jul 22 11:20:21 2021 from 10.212.134.18

Thu Jul 22 15:46:24 2021
Welcome block. This is Purity Version 6.1.6 on FlashArray pure-poc
http://www.purestorage.com/
block@pure-poc> ^C


Does anyone know how to get the system to connect natively using a custom user instead of root?
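For reference, a ZFS over iSCSI entry in /etc/pve/storage.cfg looks roughly like this (the pool name and target IQN below are placeholders, not values from my setup); as far as I can tell the plugin exposes no option for the SSH user, which is why it always connects as root:

```
zfs: pure-zfs
        pool rpool
        portal 100.64.10.47
        target iqn.2001-03.example:target0
        iscsiprovider LIO
        content images
        sparse 1
```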

Thanks in advance!
 
Error from syslog:

Jul 22 15:38:57 proxmox-host-08 pvestatd[5283]: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/100.64.10.47_id_rsa root@100.64.10.47 zfs get -o value -Hp available,used rpool' failed: exit code 255
 

bbgeek17

Famous Member
Nov 20, 2020
Blockbridge
www.blockbridge.com
ZFS over iSCSI storage is only available for Linux/BSD hosts.

It's literally:
- log in as a privileged (usually root) user to the Linux host that has the ZFS pool
- create a slice (a zvol)
- export it via a supported iSCSI daemon
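The steps above can be sketched as the commands the plugin effectively runs over SSH on the remote ZFS host (the zvol name, size, and target IQN are illustrative only):

```shell
# create a zvol ("slice") backing the VM disk
zfs create -V 32G rpool/vm-100-disk-0

# export it via the LIO iSCSI target (one of the supported daemons)
targetcli /backstores/block create vm-100-disk-0 /dev/zvol/rpool/vm-100-disk-0
targetcli /iscsi/iqn.2001-03.example:target0/tpg1/luns create /backstores/block/vm-100-disk-0
```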

Your Pure storage does not fit any of the above; you are limited to LVM over iSCSI, or direct LUNs.
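As a sketch of the LVM over iSCSI alternative in /etc/pve/storage.cfg (the target IQN and volume group name are placeholders for your own values; the VG is assumed to already exist on the iSCSI LUN):

```
iscsi: pure-iscsi
        portal 100.64.10.47
        target iqn.2010-06.com.purestorage:flasharray.example
        content none

lvm: pure-lvm
        vgname pure-vg
        shared 1
        content images,rootdir
```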
 

lowerym

Member
Feb 17, 2021
bbgeek17,
Would you mind sharing your multipath.conf for the Pure storage? I am configuring one myself and multipath is not my strong suit.

Thanks
 
May 28, 2020
32
10
13
42
South Africa
Sure thing:

joe /etc/multipath.conf

defaults {
    user_friendly_names no
    find_multipaths "yes"
}

devices {
    device {
        vendor "PURE"
        product "FlashArray"
        path_grouping_policy group_by_prio
        hardware_handler "1 alua"
        prio alua
        failback "immediate"
        fast_io_fail_tmo 10
    }
}
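After editing /etc/multipath.conf, the new configuration can be applied and verified with the standard multipath-tools commands (the output will depend on your array and paths):

```shell
systemctl reload multipathd                # re-read multipath.conf
multipath -ll                              # list multipath devices and path states
multipathd show config | grep -A8 PURE     # confirm the PURE device section was picked up
```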
 
