iSCSI and multipath

sand-max

Renowned Member
Apr 13, 2016
Virtual Environment 5.4-3

Hi guys!
I have an iSCSI target on a Debian Linux server, created with tgt and available via 2 interfaces.
I also have two PVE nodes in a cluster. I added the iSCSI storage through the GUI without problems, but I need to add the target with 2 portals.
As far as I know, I need to log in to the target through each portal and then configure multipath.
I have done the following on both nodes:

root@pvenode-01:~# iscsiadm -m node -T iqn.2018-02.sandmax.com:pve -p 192.168.1.41 -l
Logging in to [iface: default, target: iqn.2018-02.sandmax.com:pve, portal: 192.168.1.41,3260] (multiple)
Login to [iface: default, target: iqn.2018-02.sandmax.com:pve, portal: 192.168.1.41,3260] successful.

root@pvenode-01:~# iscsiadm -m node -T iqn.2018-02.sandmax.com:pve -p 192.168.1.42 -l
Logging in to [iface: default, target: iqn.2018-02.sandmax.com:pve, portal: 192.168.1.42,3260] (multiple)
Login to [iface: default, target: iqn.2018-02.sandmax.com:pve, portal: 192.168.1.42,3260] successful.
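
For reference, the portals are normally discovered beforehand with a sendtargets discovery, roughly like this (either portal IP should work as the discovery address):

root@pvenode-01:~# iscsiadm -m discovery -t sendtargets -p 192.168.1.41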

But I don't see the devices as /dev/sdX:

[ 302.914540] scsi host3: iSCSI Initiator over TCP/IP
[ 302.923324] scsi 3:0:0:0: RAID IET Controller 0001 PQ: 0 ANSI: 5
[ 302.924347] scsi 3:0:0:0: Attached scsi generic sg2 type 12
[ 307.349116] scsi host4: iSCSI Initiator over TCP/IP
[ 307.358470] scsi 4:0:0:0: RAID IET Controller 0001 PQ: 0 ANSI: 5
[ 307.359391] scsi 4:0:0:0: Attached scsi generic sg3 type 12

root@pvenode-01:~# lsscsi
[1:0:0:0] cd/dvd VBOX CD-ROM 1.0 /dev/sr0
[2:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sda
[3:0:0:0] storage IET Controller 0001 -
[4:0:0:0] storage IET Controller 0001 -
 
Show your multipath file.

I have not created multipath.conf yet, because I need to know the WWIDs of the disks. I could get them like this:
/lib/udev/scsi_id -g -u -d /dev/sda
But I don't have any /dev/sdX, as I wrote above.
This is the output of the multipath wwids file:
root@pvenode-01:~# cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
root@pvenode-01:~#
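
Once I know the WWID, I expect multipath.conf will only need something small, roughly along these lines (the WWID and alias below are placeholders, and the local disk is blacklisted):

defaults {
    user_friendly_names no
}
blacklist {
    devnode "^sda$"
}
multipaths {
    multipath {
        wwid <wwid-from-scsi_id>
        alias iscsi-lun0
    }
}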


And btw, I tried to create an initiator on a plain Debian server (not Proxmox). That looks successful:

root@iscsi-client1:/home/sandmax# lsscsi
[0:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sda
[2:0:0:0] cd/dvd VBOX CD-ROM 1.0 /dev/sr0
[3:0:0:0] storage IET Controller 0001 -
[3:0:0:1] disk IET VIRTUAL-DISK 0001 /dev/sdc
[4:0:0:0] storage IET Controller 0001 -
[4:0:0:1] disk IET VIRTUAL-DISK 0001 /dev/sdb
root@iscsi-client1:/home/sandmax#

And I had not created a multipath.conf there; it worked automatically:

root@iscsi-client1:/home/sandmax# cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/360000000000000000e00000000010001/
root@iscsi-client1:/home/sandmax# multipath -ll
360000000000000000e00000000010001 dm-0 IET,VIRTUAL-DISK
size=8.0G features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 4:0:0:1 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 3:0:0:1 sdc 8:32 active ready running
root@iscsi-client1:/home/sandmax#
 
Did you add all the necessary ACLs on the target side? (You need to permit each initiator's IQN in targetcli.)
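
With targetcli that would be roughly the following (the initiator IQN is the one from each node's /etc/iscsi/initiatorname.iscsi, shown here as a placeholder):

targetcli /iscsi/iqn.2018-02.sandmax.com:pve/tpg1/acls create iqn.1993-08.org.debian:01:<initiator-id>

Since you are using tgt, the equivalent would be an initiator-name (or initiator-address) entry per node in targets.conf.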
 
My mistake, there was a problem with creating the backing store.

Now I have 2 paths to one target on each node:

root@pvenode-02:~# multipath -ll
360000000000000000e00000000010001 dm-2 IET,VIRTUAL-DISK
size=8.0G features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 3:0:0:1 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 4:0:0:1 sdc 8:32 active ready running
root@pvenode-02:~#

I mounted the disk /dev/dm-2 on each node:
/dev/mapper/360000000000000000e00000000010001 7.9G 37M 7.4G 1% /mnt

Then I added a Directory storage in the GUI and created an LXC container on node-01, but the VM disk does not appear on node-02:
root@pvenode-01:~# ls -lah /mnt/images/101/
total 430M
-rw-r----- 1 root root 2.0G Apr 15 16:19 vm-101-disk-0.raw

root@pvenode-02:~# ls -lah /mnt/images/101
ls: cannot access '/mnt/images/101': No such file or directory

How can I use one storage for 2 nodes?
 
Hm - what kind of filesystem did you put on the iSCSI LUNs? You need either a cluster filesystem like OCFS2 or GFS2, or you can use PVE's LVM on top of iSCSI storage - see https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_storage
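
For example, roughly like this (the device name is the one from your multipath -ll output; the volume group and storage names are just examples):

pvcreate /dev/mapper/360000000000000000e00000000010001
vgcreate vg_iscsi /dev/mapper/360000000000000000e00000000010001

and then an entry along these lines in /etc/pve/storage.cfg (or add it as LVM storage in the GUI, marked as shared):

lvm: iscsi-lvm
        vgname vg_iscsi
        content images,rootdir
        shared 1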

Also, it seems like both of your multipath paths are on the same network (192.168.1.41, 192.168.1.42). Please keep in mind that this usually does not work as expected, since the PVE node only takes the first configured route for this network (and that is the first one configured in your interfaces file), i.e. if the interface where the route is configured goes down, the second link will not help you.
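
You can check which interface the node would actually use towards a portal with something like:

ip route get 192.168.1.41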

Hope this helps!
 

Thanks!
I created LVM storage (it seems simpler) and it works!

About the interfaces: I am planning to use 2 interfaces in a bond on the initiator.
 


Hi Ivanov,

I have a question about this: "since the PVE node only takes the first configured route for this network".

Does this only apply if the iSCSI storage is on a different network?

In our setup we have 4x PVE nodes and 1x storage node, with 4 IPs on the storage node. The PVE nodes and the storage node share the same storage network, i.e. the same /24 subnet. How does multipath work in such a situation? What happens if interface 1 of the storage node fails?
 
Hm - you should and need to test this in your environment!
However, from memory, if the storage loses an interface and the initiator (the PVE node) recognizes this (i.e. no answers from that interface's IP), multipath should pick the other IP of that storage.
IIRC the problematic part is when one of the interfaces on the PVE node goes down without PVE noticing it (e.g. a cable breaks or is unplugged) - if the routing table does not change, it would still try to reach the storage from the first interface (which is unplugged).

As said - try it in your environment - that's the only way you'll get some confidence that it works in production.

Else, if you're on the same network segment, you could consider using a bond interface (either 802.3ad or active-backup) - that usually works quite well in one layer-2 network.
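
A minimal active-backup bond in /etc/network/interfaces would look roughly like this (interface names and address are just examples):

auto bond0
iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100
        bond-primary eno1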

I hope this helps!
 
