I'm trying to build a PVE 4 beta 2 test cluster with iSCSI + LVM and I see these messages regularly in my logs, even before creating any images in the VG, and I'm wondering why:
Sep 29 03:56:13 n1 pvedaemon[29114]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15
Sep 29 03:56:46 n1 pvedaemon[39090]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15
Sep 29 03:57:20 n1 pvedaemon[39090]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15
Sep 29 03:57:53 n1 pvedaemon[29114]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15
Sep 29 03:58:27 n1 pvedaemon[27368]: command '/usr/bin/iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login' failed: exit code 15
root@n1:~# iscsiadm -m session
tcp: [1] 10.2.0.2:3260,2 iqn.1986-03.com.hp:storage.msa1040.151725e557 (non-flash)
tcp: [2] 10.2.0.1:3260,1 iqn.1986-03.com.hp:storage.msa1040.151725e557 (non-flash)
tcp: [3] 10.2.1.1:3260,3 iqn.1986-03.com.hp:storage.msa1040.151725e557 (non-flash)
tcp: [4] 10.2.1.2:3260,4 iqn.1986-03.com.hp:storage.msa1040.151725e557 (non-flash)
root@n1:~# iscsiadm -m node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: default: 1 session requested, but 1 already present.
iscsiadm: Could not log into all portals
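A quick way to confirm that the pvedaemon failure is just this "already logged in" case (my assumption, not checked against the source, is that exit code 15 is open-iscsi's ISCSI_ERR_SESS_EXISTS) would be something like:

# assuming exit code 15 maps to ISCSI_ERR_SESS_EXISTS, i.e. sessions already present
iscsiadm -m node --targetname iqn.1986-03.com.hp:storage.msa1040.151725e557 --login; echo "exit code: $?"
# prints "exit code: 15" here, matching the pvedaemon log entries above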
I'm also wondering what people consider best current practice (BCP) for LUN size and count vs. sessions per link (excerpt from iscsid.conf; a rough sketch of how I'd change it follows the excerpt):
...
# For multipath configurations, you may want more than one session to be
# created on each iface record. If node.session.nr_sessions is greater
# than 1, performing a 'login' for that node will ensure that the
# appropriate number of sessions is created.
node.session.nr_sessions = 1
...
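If I did want more than one session per link, my understanding is I'd bump it per node/portal record rather than globally, roughly like this (untested sketch; the portal and the value 2 are just examples):

# sketch: request 2 sessions for one portal record of this target
iscsiadm -m node -T iqn.1986-03.com.hp:storage.msa1040.151725e557 -p 10.2.0.1:3260 \
  --op update -n node.session.nr_sessions -v 2
# then log in again so the additional session actually gets created
iscsiadm -m node -T iqn.1986-03.com.hp:storage.msa1040.151725e557 -p 10.2.0.1:3260 --login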
I'm thinking of creating a fair number of 100GB LUNs (50-200), much as I might have done with physical spindles, so there are more IO queues to spread parallel IOPS across (similar to the old best practice for ZFS on Solaris under Oracle RDBMS). Every LUN/device would then be multipathed over 4x iSCSI sessions/paths on 10Gb links (one session per physical link). Shouldn't that be just as good as multipathed 8Gb FC HBAs? If I increase nr_sessions I just get more sessions per link, and I'm not sure whether that's overkill when there are already multiple paths over the same physical links... any thoughts on this?
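For completeness, the multipath policy I have in mind is roughly this sketch (plain dm-multipath defaults, not tuned or verified against the MSA 1040; the idea is just to round-robin across all four sessions):

# /etc/multipath.conf -- rough sketch only, not validated for this array
defaults {
    user_friendly_names yes
    # put all paths in one group so IO is spread over all 4 sessions
    path_grouping_policy multibus
    path_selector "round-robin 0"
}

The thinking behind multibus is to keep all four 10Gb links busy rather than leaving three of them as failover-only paths.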