iSCSI (multipath) disks in RAID

sand-max

Well-Known Member
Apr 13, 2016
Hello guys!
I have two HP MSA 2040 SANs in different locations, and my idea is to use two targets (one from each storage).
Here is what I have done so far:
I created a target on each storage and connected the initiator on the PVE node to both. Each target has four paths, so multipath is used:

root@pvenode3:~# multipath -ll
MSA2040-1-iSCSI-mpath (3600c0ff0001e66d4c172b45c01000000) dm-31 HP,MSA 2040 SAN
size=14G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 9:0:0:4 sdf 8:80 active ready running
| `- 8:0:0:4 sdg 8:96 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 10:0:0:4 sdh 8:112 active ready running
  `- 11:0:0:4 sdi 8:128 active ready running

MSA2040-2-iSCSI-mpath (3600c0ff0001e216b097bb45c01000000) dm-1 HP,MSA 2040 SAN
size=14G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 14:0:0:4 sdl 8:176 active ready running
| `- 15:0:0:4 sdm 8:192 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 12:0:0:4 sdj 8:144 active ready running
  `- 13:0:0:4 sdk 8:160 active ready running
root@pvenode3:~#
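The friendly names in that output come from aliases in /etc/multipath.conf. For reference, a minimal sketch of the relevant section (standard multipath-tools syntax, using the WWIDs shown above) would look like this:

```
multipaths {
    multipath {
        wwid  3600c0ff0001e66d4c172b45c01000000
        alias MSA2040-1-iSCSI-mpath
    }
    multipath {
        wwid  3600c0ff0001e216b097bb45c01000000
        alias MSA2040-2-iSCSI-mpath
    }
}
```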


Next, I added the disks dm-31 and dm-1 to a software RAID 1 array:

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/dm-1 /dev/dm-31
root@pvenode3:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 dm-30[1] dm-1[0]
9756672 blocks super 1.2 [2/2] [UU]

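One detail worth keeping in mind: /dev/dm-N numbering is assigned in discovery order and is not guaranteed to be stable across reboots, so the persistent /dev/mapper aliases are the safer way to address the multipath devices. A sketch of the same create command using those aliases (plain mdadm syntax, untested here):

```shell
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 \
    /dev/mapper/MSA2040-1-iSCSI-mpath /dev/mapper/MSA2040-2-iSCSI-mpath
```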

Then I created a physical volume, a volume group, and a logical volume on top of it.
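For completeness, the LVM layer on top of md0 was created roughly like this (the VG and LV names here are hypothetical placeholders, not the real ones):

```shell
# put the whole RAID 1 device under LVM
pvcreate /dev/md0
vgcreate vg_san /dev/md0
lvcreate -n lv_data -l 100%FREE vg_san
```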

But after rebooting the server it no longer works: the array gets assembled from the raw iSCSI disks (sdX) instead of the multipath devices (/dev/dmX):

root@pvenode3:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdj[1] sdf[0]
14639104 blocks super 1.2 [2/2] [UU]

unused devices: <none>


root@pvenode3:~# multipath -ll
MSA2040-2-iSCSI-mpath (3600c0ff0001e216b097bb45c01000000) dm-1 HP,MSA 2040 SAN
size=14G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  `- 14:0:0:4 sdk 8:160 active ready running
root@pvenode3:~#


I tried changing the startup order in rc3.d so that mdadm starts after iscsid and multipath-tools, but without success:

lrwxrwxrwx 1 root root 16 Feb 28 14:01 S05iscsid -> ../init.d/iscsid
lrwxrwxrwx 1 root root 20 Feb 28 14:01 S06open-iscsi -> ../init.d/open-iscsi
lrwxrwxrwx 1 root root 25 Mar 1 12:27 S07multipath-tools -> ../init.d/multipath-tools
lrwxrwxrwx 1 root root 15 Apr 15 12:57 S08mdadm -> ../init.d/mdadm


For now I have found the following workaround:

Remove the first disk from md0, then log out of and back into the first target (so the /dev/dmX device reappears in multipath) and add /dev/dmX back to md0.
Then repeat the same steps for the second disk and target.
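One more idea I am going to try (untested so far): restrict mdadm's device scan to the device-mapper nodes via the DEVICE keyword in /etc/mdadm/mdadm.conf, so the array can only be assembled from the multipath devices, and rebuild the initramfs so the early-boot assembly honours the same restriction:

```
# /etc/mdadm/mdadm.conf
DEVICE /dev/mapper/*
ARRAY /dev/md0 metadata=1.2 UUID=...   # UUID as printed by `mdadm --detail --scan`
```

After editing the file, run `update-initramfs -u` so the copy of mdadm.conf inside the initramfs is updated as well.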

Does anybody have any ideas?
 
