pve4b2 iscsiadm exit 15 wondering why

stefws

Renowned Member
Jan 29, 2015
I'm trying to build a PVE 4 beta 2 test cluster with iSCSI + LVM, and I see these messages regularly in my logs, even before creating any images in the VG, and I'm wondering why:

Sep 29 03:56:13 n1 pvedaemon[29114]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15
Sep 29 03:56:46 n1 pvedaemon[39090]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15
Sep 29 03:57:20 n1 pvedaemon[39090]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15
Sep 29 03:57:53 n1 pvedaemon[29114]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15
Sep 29 03:58:27 n1 pvedaemon[27368]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15

root@n1:~# iscsiadm -m session

tcp: [1] 10.2.0.2:3260,2 iqn.1986-03.com.hp:storage.msa1040.151725e557 (non-flash)
tcp: [2] 10.2.0.1:3260,1 iqn.1986-03.com.hp:storage.msa1040.151725e557 (non-flash)
tcp: [3] 10.2.1.1:3260,3 iqn.1986-03.com.hp:storage.msa1040.151725e557 (non-flash)
tcp: [4] 10.2.1.2:3260,4 iqn.1986-03.com.hp:storage.msa1040.151725e557 (non-flash)

root@n1:~# iscsiadm -m node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login

iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals
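
For reference, checking the exit status of that manual login shows where the "15" comes from. My reading of the open-iscsi error codes is that exit code 15 is ISCSI_ERR_SESS_EXISTS ("session exists"), which would match the "already present" output above; treat the snippet below as a sketch:

TARGET=iqn.1986-03.com.hp:storage.msa1040.151725e557

iscsiadm -m node --targetname "$TARGET" --login
echo "login exit code: $?"   # expected: 15 when every portal already has a session
iscsiadm -m session -P 1     # show the sessions that are already established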


Also wondering what people do for best current practice (BCP) on LUN size and count vs. sessions per link (from iscsid.conf):
...
# For multipath configurations, you may want more than one session to be
# created on each iface record. If node.session.nr_sessions is greater
# than 1, performing a 'login' for that node will ensure that the
# appropriate number of sessions is created.
node.session.nr_sessions = 1
...

I'm thinking of doing a reasonable number of 100 GB LUNs (50-200), much like I might have with physical spindles, thus having more IO queues to spread parallel IOPS across (similar to the BCP for ZFS on Solaris for Oracle RDBMS). Every LUN/device is then multipathed over 4 iSCSI sessions/paths across 10 Gb/s links (one session per physical link). Shouldn't this be just as good as multipathed 8 Gb/s FC HBAs? But if I increase nr_sessions I just get more sessions per link, and I don't know whether that's overkill when it means multiple paths over the same physical link... any thoughts on this?
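
For completeness, here's the kind of minimal multipath.conf I have in mind for that layout. The device section values are assumptions for an MSA 1040-style active/active array (the vendor/product strings and ALUA settings should be verified against HP's recommendations), so take it as a sketch rather than a known-good config:

# /etc/multipath.conf - minimal sketch, values are assumptions, verify for your array
defaults {
        user_friendly_names  yes
}
devices {
        device {
                vendor               "HP"
                product              "MSA 1040 SAN"
                path_grouping_policy group_by_prio      # group the 4 iSCSI paths by ALUA priority
                prio                 alua
                path_selector        "round-robin 0"    # spread IO across paths in the active group
                failback             immediate
                no_path_retry        18
        }
}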
 
Also, I find this rule good to comment out:

root@n1:~# cat /lib/udev/rules.d/60-multipath.rules
#
# udev rules for multipathing.
# The persistent symlinks are created with the kpartx rules
#


# Coalesce multipath devices before multipathd is running (initramfs, early
# boot)
#ACTION=="add|change", SUBSYSTEM=="block", RUN+="/sbin/multipath -v0 /dev/$name"

as per
https://access.redhat.com/documenta...Linux/7-Beta/html/DM_Multipath/many_luns.html
otherwise you might get hit by heavy CPU load whenever a path changes
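
If you want that change to survive multipath-tools package updates, one way is to override the packaged rule from /etc instead of editing it in place. This relies on the standard udev precedence where a same-named file in /etc/udev/rules.d/ wins over /lib/udev/rules.d/; the commands below are just a sketch:

cp /lib/udev/rules.d/60-multipath.rules /etc/udev/rules.d/60-multipath.rules
# comment out the early-boot coalescing rule if it is still active in your copy
sed -i 's|^ACTION=="add|#ACTION=="add|' /etc/udev/rules.d/60-multipath.rules
udevadm control --reload-rules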
 
Any hints on why PVE seems to keep attempting iSCSI logins when the nodes already appear to have persistent sessions established?

It seems almost every action touching an iSCSI target, like creating, moving or cloning VM disk images, also attempts logins that fail. The actions still complete successfully, but they leave a lot of noise in the logs and task outputs, which gives cause for concern.
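
Naively I'd have expected the storage layer to guard the login with something like the check below, i.e. only attempt a login when no session for the target exists yet (just a sketch of the idea, not a claim about what PVE actually runs internally):

TARGET=iqn.1986-03.com.hp:storage.msa1040.151725e557
if ! iscsiadm -m session 2>/dev/null | grep -q "$TARGET"; then
    iscsiadm -m node --targetname "$TARGET" --login
fi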
 
Hello,

I just installed the Proxmox VE 4 stable release and I've got the exact same issue with a Dell 3600i NAS.
When I try to restore a VM from a Proxmox v3 backup, it hangs with kernel errors.
The same configuration is OK with Proxmox v3.

Any ideas ?
 
Hi,

thanks for your workaround.
In my case I don't use multipath; I only have a single 10 GbE link between the nodes and the iSCSI NAS.
I don't see any delays, just error messages like:
Sep 29 03:58:27 n1 pvedaemon[27368]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15


I'd prefer to use the GUI features provided by Proxmox and stay with iSCSI + LVM and wait for a solution.
I don't understand why we're hitting this issue.

It seems to be an iscsiadm issue.
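
One thing that might still be worth comparing, even on a single link, is the list of configured node records against the sessions actually established, since the login command tries every portal record it knows about for the target (sketch, adjust the target/portal names to yours):

iscsiadm -m node       # portal records iscsiadm will try to log into
iscsiadm -m session    # sessions that are already established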
 
