Proxmox 7.1, iSCSI, CHAP, HPE MSA and problems

sterua32

Member
Nov 18, 2022
Hello everyone!

I'm running into a problem setting up iSCSI with CHAP authentication on my Proxmox server.

I have an HPE MSA 1060 disk array where I set up a record for CHAP authentication:
initiator name: the IQN found on the Proxmox server in /etc/iscsi/initiatorname.iscsi
password: test1234test
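For reference, the initiator IQN that the MSA's CHAP record must reference can be read straight from that file:
Code:
# show the initiator IQN registered on the MSA
cat /etc/iscsi/initiatorname.iscsi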

On the Proxmox server, in /etc/iscsi/iscsid.conf, I set:
Code:
# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
node.session.auth.authmethod = CHAP

# To configure which CHAP algorithms to enable set
# node.session.auth.chap_algs to a comma separated list.
# The algorithms should be listed with most preferred first.
# Valid values are MD5, SHA1, SHA256, and SHA3-256.
# The default is MD5.
#node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5

# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = username
node.session.auth.password = test1234test

# To set a CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#node.session.auth.username_in = username_in
#node.session.auth.password_in = password_in

# To enable CHAP authentication for a discovery session to the target
# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
discovery.sendtargets.auth.authmethod = CHAP

# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
discovery.sendtargets.auth.username = username
discovery.sendtargets.auth.password = test1234test
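(Note: iscsid.conf is only read when the daemon starts, so after editing it the services need a restart; this assumes Debian's standard open-iscsi service names:)
Code:
# re-read iscsid.conf
systemctl restart iscsid open-iscsi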

When I run lsscsi, it does not find my HPE MSA.
When I run iscsiadm -m node --portal "192.168.0.1" --login, it returns:
Code:
Logging in to [iface: default, target: iqn.xxx.hpe:storage.msa1060.xxxxxxx, portal: 192.168.0.1,3260]
iscsiadm: Could not login to [iface: default, target: iqn.xxxxx.hpe:storage.msa1060.xxxxxxxx, portal: 192.168.0.1,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals
(I changed the IP and the IQN for this post.)
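(For reference, the node records used by --login are created at discovery time, so after changing the CHAP settings I re-run discovery against the portal; the portal IP is the placeholder used above:)
Code:
iscsiadm -m discovery -t sendtargets -p 192.168.0.1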

So, I don't know how to make it work...
Can someone help me?

When I unset CHAP on my HPE MSA and comment out all the CHAP lines in /etc/iscsi/iscsid.conf, lsscsi finds the volumes from my HPE MSA and the iscsiadm command returns no error.
So there is something I missed for CHAP, but I don't know what...
 
The error message is likely correct and your authentication material is wrong. iscsiadm is part of the standard Linux open-iscsi package and is not modified by Proxmox in any way. Hence the issue you are having is strictly between HPE and Linux.

Are you sure that the username is "username"? Have you properly permissioned the HPE side to allow the PVE IQN to log in? What does the HPE log say?
Have you consulted the HPE document that guides you through the steps of connecting a Linux host to their storage?
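If the array-side log is hard to get at, the initiator side logs the failed logins as well (standard systemd tooling, nothing Proxmox-specific):
Code:
# initiator-side view of the login failures
journalctl -u iscsid --no-pager | grep -i -e login -e chap
dmesg | grep -i iscsi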


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I have already read this document:
https://www.hpe.com/psnow/doc/a00105312en_us
I know it is not specifically related to Proxmox, but I'm posting here because I think there are more system administrators here than on a general Linux forum...
As for the username, I can't find any information on it... Searching around, I found people saying this username is not restrictive and you can set whatever you want; what matters is that the password is right and that the CHAP record on the HPE storage is set with the correct IQN of the initiator (the Proxmox server).
I haven't found the HPE logs yet, but I'm trying. I posted on the HPE forum too, but all I got was one response from a bot asking if I needed help...
Has anyone here connected an HPE storage array to a Linux host or a Proxmox server?
In any case, thanks for your future answers, and thanks @bbgeek17 for yours!
 
I got there, finally; all that's missing is the multipath test.

For those who have the same problem: to get the logs, go to the array's web administration interface, then Support, enter the information, click "collect logs", wait several minutes, and it downloads a .zip file with lots of logs.

I made a script to look for "chap failed" entries:
Code:
# print each file name, then any CHAP failure lines it contains
for i in $(find . -type f -print)
do
        echo "$i"
        grep -i "chap failed" "$i"
done
This returned the matching errors along with the affected files.
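(The same search can be done in one go with a recursive grep, assuming GNU grep:)
Code:
grep -ri "chap failed" .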

I could see that my various username attempts all corresponded to [0]##########Chap failed - No DB Entry for username

On the other hand, I noticed that when I changed the username in the iscsid.conf file, the username used during the tests did not change; it always stayed the same in the logs...

I searched, and found: you have to manually modify the files under /etc/iscsi/nodes/iqn...../@ip_port_iscsi/default and /etc/iscsi/send_targets/@ip_portal/iqn....@ip_port/st_config, because existing records keep the values they were discovered with.

So, the username needed is in fact the IQN of the initiator (the IQN given in /etc/iscsi/initiatorname.iscsi).
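Rather than editing the record files by hand, the stored values can also be updated with iscsiadm; a sketch, with the target IQN and portal as placeholders:
Code:
# write the initiator IQN into the stored node record as the CHAP username
IQN=$(sed -n 's/^InitiatorName=//p' /etc/iscsi/initiatorname.iscsi)
iscsiadm -m node -T iqn.xxx.hpe:storage.msa1060.xxxxxxx -p 192.168.0.1 \
    --op update -n node.session.auth.username -v "$IQN"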

And then a systemctl restart open-iscsi.service or an iscsiadm -m node --portal "@ip_portal" --login works!

You can check with lsscsi that the disks shared by the array show up (you have to install lsscsi before you can use it, e.g. apt install lsscsi).
 
Hello everyone.
I managed to enable CHAP authentication between the array and the Proxmox servers, and I can clearly see the volumes mapped on the servers.
However, I can't create an LVM on them...
As soon as I do a vgcreate vg_name /dev/mapper/volume_hpe
and then a pvs, I get errors:
Code:
WARNING: Metadata location on /dev/mapper/volume_hpe at 4608 begins with invalid VG name.
WARNING: bad metadata text on /dev/mapper/volume_hpe in mda1
WARNING: scanning /dev/mapper/volume_hpe mda1 failed to read metadata summary.
WARNING: repair VG metadata on /dev/mapper/volume_hpe with vgck --updatemetadata.
WARNING: scan failed to get metadata summary from /dev/mapper/volume_hpe PVID oWtXNViRKrjasEzsvD1RQJ94fhLV65CD
WARNING: PV /dev/mapper/volume_hpe is marked in use but no VG was found using it.
WARNING: PV /dev/mapper/volume_hpe might need repairing.
Also, by default vgcreate does not offer me the path /dev/mapper/volume_hpe, only /dev/sda (the local disk of the servers, which already holds the Proxmox OS data).
What's weird is that after I do the pvcreate, when I do the vgcreate vg_name, the auto-completion doesn't even offer the volume initialized by pvcreate...
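From what I've read, those warnings usually mean the LUN still carries stale LVM metadata from an earlier attempt; one way to start clean, assuming the data on the LUN is disposable and the /dev/mapper path above is correct:
Code:
# DESTROYS everything on the LUN: clear old signatures, then recreate PV and VG
wipefs -a /dev/mapper/volume_hpe
pvcreate /dev/mapper/volume_hpe
vgcreate vg_name /dev/mapper/volume_hpe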
A former colleague told me: "once you have set up open-iscsi on Proxmox, connect the LUN through the web interface". But I read on some forums that this was only needed for older versions of Proxmox, not for 7.x; starting with 7.x (or even 6.x) you have to set it up in the shell. Still, I don't understand what I'm doing wrong...
If you have any tips for me, that would be great ^^
Thanks in advance.
 
So I tried a few things.
I deleted the volume from my HPE array and created a new one.
In the web GUI, under Datacenter > Storage, I added an iSCSI storage: name iscsi_test, portal: @ip_portal, target: the target (the IQN of the HPE volume), checked "Enable" and unchecked "Use LUNs directly".
After that I tried, from the web GUI, to create an LVM: base storage: the iSCSI storage set up before, base volume: the LUN, checked Enable and checked Shared, and an error occurred saying the volume could not be created.
So after that I tried in the shell: pvesm add lvm lvm_test --vgname vg_lvm_test --base iscsi_test_28_11:0.0.1.scsi-3600c0ff000668360f3f3846301000000 --shared yes --content images. It works, but I can't use the storage...
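For comparison, a variant that skips --base: create the PV/VG on the mapped LUN manually, then register the VG as shared LVM storage (a sketch; device path and names are examples):
Code:
pvcreate /dev/mapper/volume_hpe
vgcreate vg_lvm_test /dev/mapper/volume_hpe
pvesm add lvm lvm_test --vgname vg_lvm_test --shared 1 --content images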
 

