iSCSI storage setup

Jul 3, 2020
Hi,

After working all day on this matter and resorting to the forum for a solution, I actually made progress while writing this forum post. I guess it's still valuable for people running into the same problems :). That's why I moved the remaining relevant paragraphs to the top and hid the original posting behind a spoiler element.

What I still don't get is ZFS over iSCSI. At which stage do I provide the filesystem?

My setup: backupserver1 has an encrypted LVM. The operating system uses about 40 GiB, so there are several TiB unused in the volume group. I created a logical volume which I used as a physical volume for another volume group, iscsi-targets. In this VG I created a logical volume that serves as a target for this specific Proxmox VE server to use as backup storage.

Do I have to put the ZFS filesystem on the logical volume on the iSCSI server or on the iSCSI client?

Thanks in advance!

I feel a bit stupid at the moment. I want to add a backup storage to PVE. After some research I decided on iSCSI or ZFS over iSCSI but I first wanted to get iSCSI working. And that's the crux.

I think I successfully configured the target; at least, iscsiadm -m discovery -t st -p <ip> and pvesm scan iscsi <ip> show my target.
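For reference, these were the two discovery checks (with a placeholder IP):
Bash:
iscsiadm -m discovery -t st -p 10.20.30.40
pvesm scan iscsi 10.20.30.40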

But it doesn't connect, and I guess it's all about these lines in the portal's journal:
Aug 25 14:32:03 backupserver1 kernel: rx_data returned 0, expecting 48.
Aug 25 14:32:03 backupserver1 kernel: iSCSI Login negotiation failed.
Aug 25 14:32:03 backupserver1 kernel: rx_data returned 0, expecting 48.
Aug 25 14:32:03 backupserver1 kernel: iSCSI Login negotiation failed.
Aug 25 14:32:03 backupserver1 kernel: iSCSI Initiator Node: iqn.1993-08.org.debian:01:7cff9a66f22 is not authorized to access iSCSI target portal group: 1.
Aug 25 14:32:03 backupserver1 kernel: iSCSI Login negotiation failed.
Of course, the iSCSI initiator name is not what I configured on the iSCSI target. But I didn't find anything about this parameter in PVE's web UI, nor in pvesm. I changed the entry in /etc/iscsi/initiatorname.iscsi, but restarting with systemctl restart open-iscsi.service does not work. It exits with code 21, which the man page describes as
ISCSI_ERR_NO_OBJS_FOUND - no records/targets/sessions/portals found to execute operation on.

But in /etc/iscsi/nodes I can find the node definition I set up using the web UI. The existence of /etc/iscsi/iscsid.conf made me suspect there was another service running, and that was indeed the case: iscsid.service. Restarting it resulted in the portal's journal showing:

Aug 25 14:44:13 backupserver1 kernel: rx_data returned 0, expecting 48.
Aug 25 14:44:13 backupserver1 kernel: iSCSI Login negotiation failed.
Aug 25 14:44:13 backupserver1 kernel: rx_data returned 0, expecting 48.
Aug 25 14:44:13 backupserver1 kernel: iSCSI Login negotiation failed.
Aug 25 14:44:13 backupserver1 kernel: iSCSI/iqn.2011-01.com.<example>.<servername>:<password>: Unsupported SCSI Opcode 0xa3, sending CHECK_CONDITION.

I remembered there was a ":01" between the username and the password in the default name. Removing this yielded a better result: the message about the unsupported SCSI opcode 0xa3 disappeared.

But I still get these every 10 seconds:
Aug 25 14:52:23 backupserver1 kernel: rx_data returned 0, expecting 48.
Aug 25 14:52:23 backupserver1 kernel: iSCSI Login negotiation failed.
Also, I now have another device on my PVE node, and pvesm status reports my iSCSI storage as active. So why do I still get these error messages? https://forum.proxmox.com/threads/iscsi-login-negotiation-failed.41187/ seems to describe my problem: the errors apparently come from the periodic storage status check opening a TCP connection without a proper login. Is there no better way? Testing with nmap -p 3260 <ip> also shows the host up and the port open. Or, PVE devs, please check health using a valid login that doesn't spam the syslog.
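For the record, the quick checks I ran (placeholder IP again):
Bash:
pvesm status
nmap -p 3260 10.20.30.40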
 
Now that I have a working iSCSI target and PVE storage, I think I overcomplicated this. If I am right, I should simply have exported the LV I used as a PV for the nested VG; PVE could then manage the VG in there directly.
 
Still working on it. As far as I understand, the documentation is lacking in this regard. It's late now, but I'll put some stuff together for other poor admins. Tomorrow.
 
How to configure ZFS over iSCSI using LIO and targetcli:

Requirements:
  • Initiator/client needs SSH access to target/server
  • ZFS pool and filesystem on target/server
As far as I understand it, PVE uses SSH access as a control channel to create ZFS datasets and shares them as iSCSI LUNs.

First, generate an SSH key on the client:
Bash:
TARGET_IP=10.20.30.40
mkdir -p /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/${TARGET_IP}_id_rsa

It is important that the target IP address is part of the filename as PVE uses this to decide which key to use for ssh access. Copy this newly generated public ssh key to your target server:
Bash:
ssh-copy-id -i /etc/pve/priv/zfs/${TARGET_IP}_id_rsa.pub root@$TARGET_IP
Test it:
Bash:
ssh -i /etc/pve/priv/zfs/${TARGET_IP}_id_rsa root@$TARGET_IP

I used these instructions as a reference. The image at the bottom was really helpful later in the process.

On the target server side, have a block device available. I have a volume group, so I could just create another logical volume and put a ZFS pool on it:
Bash:
# create an LV to back the ZFS pool
lvcreate -n <name of LV> -L 12T <name of VG>
# create the pool on that LV
zpool create -f <name of ZFS pool> /dev/<name of VG>/<name of LV>
# create the zvol that will be exported (-s: sparse, since 13000G exceeds the 12T pool)
zfs create -s -o compression=off -o dedup=off -o volblocksize=32K -V 13000G <name of ZFS pool>/<name of ZFS dataset>
zfs set sync=disabled <name of ZFS pool>/<name of ZFS dataset>

I found these instructions on a page that describes how to use ZFS over iSCSI for ESXi, but I guess PVE can profit from it, too. The instructions on creating the iSCSI target are a bit short, but the Debian wiki saves the day, as I used Debian Buster as the server OS. There are many tutorials for creating iSCSI targets using the package targetcli, but nowadays the distributions seem to provide targetcli-fb by default (take this with a grain of salt). There are some slight differences between the two, mainly in the location of some settings in the config tree.
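On Debian Buster that simply means installing the -fb variant (a sketch, assuming a plain Buster install):
Bash:
apt install targetcli-fb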

Before starting targetcli, find out the device path of the ZFS dataset just created:
Bash:
ls -l /dev/zvol/<name of ZFS pool>/<name of ZFS dataset>
The instructions strangely use this symlink path instead of the device it points to. But whatever! It works as far as I can tell.
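If you prefer the real device node, the symlink can be resolved like this (sketch; the zdX name will differ on your system):
Bash:
readlink -f /dev/zvol/<name of ZFS pool>/<name of ZFS dataset>
# prints something like /dev/zd0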

Then start targetcli, an interactive shell for configuring LIO targets. The commands from the Debian wiki page linked above have to be entered there. For the sake of completeness, I will repeat them here:
Bash:
cd backstores/block
create <blockdevicename> <path of ZFS dataset>
cd /iscsi
create iqn.<yyyy-mm>.<tld>.<domain>.<servername>:<target name>
cd <IQN>/tpg1/luns
create /backstores/block/<blockdevicename>
cd ../acls
create iqn.<yyyy-mm>.<tld>.<domain>.<clientname>:<password>
cd ../portals
delete 0.0.0.0 3260
create <TARGET_IP> 3260
exit
Debian Buster provides targetcli-fb which is why some paths are different and some commands are missing.
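To double-check the result, you can dump the configuration tree non-interactively (a sketch, assuming targetcli-fb):
Bash:
targetcli ls
# expect backstores/block/<blockdevicename>, your target IQN with tpg1,
# the LUN, the initiator ACL and the portal on <TARGET_IP>:3260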

One comment regarding the iSCSI Qualified Name (IQN): it consists of a few parts that might be of interest. An IQN always starts with "iqn", followed by the year and month (yyyy-mm) in which your organization took control of the domain. Then comes the domain in reverse notation, followed by the server name, all separated by dots. Lastly, the target's name is appended, separated by a colon. In the listing above there are two IQNs: one for the target and one for the initiator. I just assumed that you will change the initiator name to something matching your company's naming guidelines, but the system usually generates a default name using iscsi-iname, e.g. iqn.2005-03.org.open-iscsi:3243ee34dd8. Go with the default if it doesn't matter to you. If you want to change it, edit /etc/iscsi/initiatorname.iscsi and run systemctl restart iscsid.service (or whatever your init system uses).
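A minimal sketch of that change (the new initiator IQN is just an example; pick one matching your own naming scheme and remember that the ACL on the target has to match it):
Bash:
# /etc/iscsi/initiatorname.iscsi contains a single line like:
#   InitiatorName=iqn.2005-03.org.open-iscsi:3243ee34dd8
sed -i 's/^InitiatorName=.*/InitiatorName=iqn.2011-01.com.example.client1:backup/' /etc/iscsi/initiatorname.iscsi
systemctl restart iscsid.service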

Finally, it's time to tell PVE of the ZFS over iSCSI target. I already mentioned the image in the first linked page, so you might take a look at it. Go to datacenter > Storage and Add ZFS over iSCSI.
  1. Start by selecting the proper iSCSI provider as this decides which fields to fill. PVE devs: You should put this at the top or at least following the ID form field.
  2. Fill the ID field. That's a name of your choice. It just has to adhere to PVE's naming conventions.
  3. For the IP enter the iSCSI server's IP address used to connect via SSH. There might be some problems if SSH access and iSCSI access use different IPs but I leave that to you to figure out as you seem to like challenges.
  4. For the pool, enter the name of the dataset. Yes, it says pool but means "pool/dataset". Whatever.
  5. For the blocksize I used the 32k I set as volblocksize when I created the dataset. Maybe someone else can tell us more about this setting.
  6. Then enter the target name you set when creating the iSCSI target.
  7. The target portal group is tpg1 by default; if yours is different, it's the name just below the iSCSI target IQN in the config tree.
  8. Enable thin provisioning.
  9. Enable the storage.
  10. Select which nodes should have access to this storage.
Only 10 steps. Wow.
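For reference, the resulting entry in /etc/pve/storage.cfg should look roughly like this (a sketch from memory; the ID, pool, IQN and IP are placeholders):
Code:
zfs: backup-zfs-iscsi
        blocksize 32k
        iscsiprovider LIO
        pool <name of ZFS pool>/<name of ZFS dataset>
        portal 10.20.30.40
        target iqn.<yyyy-mm>.<tld>.<domain>.<servername>:<target name>
        lio_tpg tpg1
        sparse 1
        content images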

Maybe these instructions will help someone looking to configure ZFS over iSCSI using a simple Linux box (LIO) and targetcli.
 
Hi,

Anyway, congratulations on your post. I really like people who share their work with others.

Now some observations:

- ZFS over iSCSI is dangerous without iSCSI header and data checks, especially if your goal is to use it for backups (a restore of a backup must not fail in any case)
- ZFS over iSCSI with sync=disabled is also bad
- I tested this scenario some years ago, and it is not very reliable (try disabling your network port or resetting your client/server and see what happens)

More reliable is to use at least 2 different iSCSI hosts and create a ZFS mirror from them. Even better in terms of space usage is to have 3 iSCSI servers and use a raidz1.
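If you want to follow that advice with the setup from the post above, a minimal sketch: re-enable synchronous writes on the zvol and turn on the iSCSI digests on the initiator side (the exact iscsid.conf option names may vary with your open-iscsi version):
Bash:
# on the target: undo "sync=disabled" so ZFS honours synchronous writes again
zfs set sync=standard <name of ZFS pool>/<name of ZFS dataset>

# on the initiator: enable header/data digests in /etc/iscsi/iscsid.conf, then restart:
#   node.conn[0].iscsi.HeaderDigest = CRC32C
#   node.conn[0].iscsi.DataDigest = CRC32C
systemctl restart iscsid.service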


Good luck / Bafta!
 
