Hello,
And thank you for your help.
I have installed a PVE cluster with 3 nodes and a Dell array.
The 2 controllers on the array have 4 ports each (see attachment). I connected them to 2 switches, in 2 non-routed VLANs dedicated to iSCSI (70 and 77, as you can see in the attachment).
I have 2 big LUNs on the array. I want to do LVM over iSCSI, so I created my first volume on the first LUN and mapped it to the 3 PVE hosts in the array GUI.
But I can't get multipath to work.
I can see everything fine from the 3 PVE hosts:
Code:
root@pve2:~# iscsiadm -m session -P 1
Target: iqn.1988-11.com.dell:01.array.bc305b5e705f (non-flash)
Current Portal: 192.168.70.3:3260,2
Persistent Portal: 192.168.70.3:3260,2
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:e258a2db6cc
Iface IPaddress: 192.168.70.101
Iface HWaddress: default
Iface Netdev: default
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 192.168.70.2:3260,5
Persistent Portal: 192.168.70.2:3260,5
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:e258a2db6cc
Iface IPaddress: 192.168.70.101
Iface HWaddress: default
Iface Netdev: default
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 192.168.77.4:3260,8
Persistent Portal: 192.168.77.4:3260,8
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:e258a2db6cc
Iface IPaddress: 192.168.77.101
Iface HWaddress: default
Iface Netdev: default
SID: 3
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 192.168.77.3:3260,4
Persistent Portal: 192.168.77.3:3260,4
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:e258a2db6cc
Iface IPaddress: 192.168.77.101
Iface HWaddress: default
Iface Netdev: default
SID: 4
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 192.168.77.2:3260,7
Persistent Portal: 192.168.77.2:3260,7
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:e258a2db6cc
Iface IPaddress: 192.168.77.101
Iface HWaddress: default
Iface Netdev: default
SID: 5
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 192.168.70.1:3260,1
Persistent Portal: 192.168.70.1:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:e258a2db6cc
Iface IPaddress: [default]
Iface HWaddress: default
Iface Netdev: default
SID: 6
iSCSI Connection State: TRANSPORT WAIT
iSCSI Session State: FAILED
Internal iscsid Session State: REOPEN
Current Portal: 192.168.77.1:3260,3
Persistent Portal: 192.168.77.1:3260,3
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:e258a2db6cc
Iface IPaddress: 192.168.77.101
Iface HWaddress: default
Iface Netdev: default
SID: 7
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 192.168.70.4:3260,6
Persistent Portal: 192.168.70.4:3260,6
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:e258a2db6cc
Iface IPaddress: 192.168.70.101
Iface HWaddress: default
Iface Netdev: default
SID: 8
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
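For reference, here is a quick way to see which sdX device each session ends up as (the sdb/sdc/sdd names in the logs below come from these sessions):
Code:
# show the attached scsi disk for every session/portal
iscsiadm -m session -P 3 | grep -E 'Target:|Current Portal:|Attached scsi disk'
# list SCSI block devices with their transport (should say iscsi for the array LUNs)
lsblk -S -o NAME,TRAN,VENDOR,MODEL,SIZE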
But each time I map the volume, even without having configured multipath, my PVE hosts go into a hanging state and then come back after a while, but it's not stable.
I get these kinds of errors:
Code:
Mar 1 15:23:43 pve2 kernel: [ 5701.374972] sd 11:0:0:1: [sdb] tag#614 FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK cmd_age=212s
Mar 1 15:23:43 pve2 kernel: [ 5701.374974] sd 11:0:0:1: [sdb] tag#614 CDB: Read(10) 28 00 00 00 00 48 00 00 30 00
Mar 1 15:23:43 pve2 kernel: [ 5701.374974] blk_update_request: I/O error, dev sdb, sector 72 op 0x0:(READ) flags 0x80700 phys_seg 6 prio class 0
Mar 1 15:23:43 pve2 multipathd[13212]: sdb: failed to get udev uid: No data available
Mar 1 15:23:43 pve2 multipathd[13212]: sdb: spurious uevent, path not found
Mar 1 15:23:43 pve2 iscsid: Kernel reported iSCSI connection 1:0 error (1022 - ISCSI_ERR_NOP_TIMEDOUT: A NOP has timed out) state (3)
Mar 1 15:23:47 pve2 iscsid: connection1:0 is operational after recovery (1 attempts)
Mar 1 15:23:49 pve2 pmxcfs[1609]: [status] notice: received log
Mar 1 15:23:51 pve2 kernel: [ 5709.560567] connection3:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4296316928, last ping 4296318208, now 4296319488
Mar 1 15:23:51 pve2 kernel: [ 5709.562445] connection3:0: detected conn error (1022)
Mar 1 15:23:52 pve2 iscsid: Kernel reported iSCSI connection 3:0 error (1022 - ISCSI_ERR_NOP_TIMEDOUT: A NOP has timed out) state (3)
Mar 1 15:23:55 pve2 iscsid: connection3:0 is operational after recovery (1 attempts)
Mar 1 15:23:56 pve2 systemd-udevd[787]: sdc: Worker [47717] processing SEQNUM=8722 is taking a long time
Mar 1 15:24:05 pve2 kernel: [ 5722.872553] connection6:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4296320256, last ping 4296321536, now 4296322816
Mar 1 15:24:05 pve2 kernel: [ 5722.875011] connection6:0: detected conn error (1022)
Mar 1 15:24:05 pve2 iscsid: Kernel reported iSCSI connection 6:0 error (1022 - ISCSI_ERR_NOP_TIMEDOUT: A NOP has timed out) state (3)
Mar 1 15:24:08 pve2 iscsid: connection6:0 is operational after recovery (1 attempts)
Mar 1 15:24:12 pve2 kernel: [ 5729.784549] connection4:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4296321984, last ping 4296323264, now 4296324544
Mar 1 15:24:12 pve2 kernel: [ 5729.787074] connection4:0: detected conn error (1022)
Mar 1 15:24:12 pve2 iscsid: Kernel reported iSCSI connection 4:0 error (1022 - ISCSI_ERR_NOP_TIMEDOUT: A NOP has timed out) state (3)
Mar 1 15:24:16 pve2 iscsid: connection4:0 is operational after recovery (1 attempts)
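In case it matters, I have not touched the open-iscsi timeouts; /etc/iscsi/iscsid.conf still has the stock values (which line up with the 5 s ping timeout in the log above):
Code:
# /etc/iscsi/iscsid.conf (Debian defaults, unchanged here)
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.session.timeo.replacement_timeout = 120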
So then I try multipath:
Code:
"/etc/multipath.conf" 44L, 891B
##Default System Values
defaults {
path_selector "round-robin 0"
uid_attribute ID_SERIAL
fast_io_fail_tmo 5
dev_loss_tmo 10
user_friendly_names yes
no_path_retry fail
find_multipaths no
max_fds 8192
polling_interval 5
}
blacklist {
wwid .*
}
blacklist_exceptions {
wwid "600C0FF00065EB293359FF6301000000"
# wwid 36f4ee0801eff310029e21c965f327cc4
}
devices {
device {
vendor "DellEMC"
product "ME4"
path_grouping_policy "group_by_prio"
path_checker "tur"
hardware_handler "1 alua"
prio "alua"
failback immediate
rr_weight "uniform"
path_selector "service-time 0"
}
}
multipaths {
multipath {
wwid "600C0FF00065EB293359FF6301000000"
alias phototeque
}
}
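After editing the file I restart multipathd so it picks up the config (I'm assuming the standard systemd unit name here):
Code:
# reload the multipath configuration (standard service name on PVE/Debian)
systemctl restart multipathd
# force a reload of the multipath device maps
multipath -r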
But it's not working: it says my WWID is blacklisted even though it's in the exceptions!
Code:
Mar 01 15:26:23 | sdd: path_checker = tur (setting: storage device autodetected)
Mar 01 15:26:23 | sdd: checker timeout = 30 s (setting: kernel sysfs)
Mar 01 15:26:23 | sdd: tur state = up
Mar 01 15:26:23 | sdd: uid_attribute = ID_SERIAL (setting: multipath.conf defaults/devices section)
Mar 01 15:26:23 | sdd: uid = 3600c0ff00065eb293359ff6301000000 (udev)
Mar 01 15:26:23 | sdd: wwid 3600c0ff00065eb293359ff6301000000 blacklisted
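For reference, this is how I'm double-checking the WWID that udev reports for one of the paths (sdd, as in the output above; the verbose run is where those blacklist lines come from):
Code:
# WWID as udev/scsi_id reports it for one path
/lib/udev/scsi_id -g -u -d /dev/sdd
# verbose multipath evaluation, filtered to the wwid/blacklist decisions
multipath -v3 2>&1 | grep -iE 'wwid|blacklist'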
So I have 2 questions:
- What can be wrong? The multipath configuration?
- My array has only one IQN, so I only put that one, with a single IP address out of the 8 controller ports, in storage.cfg, like this:
Code:
iscsi: ME4024
        portal 192.168.70.1
        target iqn.1988-11.com.dell:01.array.bc305b5e705f
        content none
Should I put each of the 8 controller ports in storage.cfg even if the array has only one IQN?
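For context, once multipath works the goal is a shared LVM storage on top of the mapper device, roughly like this (the VG name is just a placeholder):
Code:
# on one node, create the PV/VG on the multipath device (placeholder VG name)
pvcreate /dev/mapper/phototeque
vgcreate vg_phototeque /dev/mapper/phototeque
# then in /etc/pve/storage.cfg
lvm: phototeque-lvm
        vgname vg_phototeque
        content images,rootdir
        shared 1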
Thanks for your insight.