[SOLVED] multipath on iSCSI

aasami

Renowned Member
Mar 9, 2016
Hello all!
I would like to ask for help with the configuration of multipath on an iSCSI disk in Proxmox 6.2.
I have configured multipath on the server:
Code:
[hp12 ~]# iscsiadm -m session
tcp: [1] 10.1.100.112:3260,22 iqn.2000-05.com.3pardata:20220002ac005aab (non-flash)
tcp: [4] 10.1.100.113:3260,121 iqn.2000-05.com.3pardata:21210002ac005aab (non-flash)
tcp: [5] 10.1.100.114:3260,122 iqn.2000-05.com.3pardata:21220002ac005aab (non-flash)
tcp: [6] 10.1.100.111:3260,21 iqn.2000-05.com.3pardata:20210002ac005aab (non-flash)
[hp12 ~]#
[hp12 ~]# multipath -ll
hp3par (360002ac0000000000000001e00005aab) dm-2 3PARdata,VV
size=3.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:0:0 sdb 8:16 active ready running
  |- 4:0:0:0 sdd 8:48 active ready running
  |- 2:0:1:0 sdc 8:32 active ready running
  `- 4:0:1:0 sde 8:64 active ready running
[hp12 ~]#
[hp12 ~]# dmsetup ls --target multipath
hp3par       (253, 2)
[hp12 ~]#
[hp12 ~]# dmsetup info hp3par
Name:              hp3par
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 2
Number of targets: 1
UUID: mpath-360002ac0000000000000001e00005aab

[hp12 ~]#
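For context, a minimal /etc/multipath.conf that produces an aliased map like the one above looks roughly like this (a sketch along the lines of the wiki example; the values in the defaults block are assumptions, not a verbatim copy of my file):
Code:
defaults {
    polling_interval        2
    path_selector           "round-robin 0"
    path_grouping_policy    multibus
    uid_attribute           ID_SERIAL
    failback                immediate
    no_path_retry           queue
    user_friendly_names     no
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "360002ac0000000000000001e00005aab"
}
multipaths {
    multipath {
        # WWID taken from "multipath -ll" above; the alias gives the friendly name hp3par
        wwid  "360002ac0000000000000001e00005aab"
        alias hp3par
    }
}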
Now the question is: do I need to add several iSCSI storages in the Proxmox GUI, each with a different "Portal" IP (10.1.100.111, 10.1.100.112, 10.1.100.113, 10.1.100.114), to get the desired redundancy?
Or am I completely wrong and need to do something else to configure iSCSI storage with redundant multipath?
The official documentation does not cover multipath; I have mostly followed this wiki article.
Thank you for any advice beforehand.
 
In my understanding you would not tell Proxmox to create (another) iSCSI resource. From the top-level Proxmox GUI that multipath device should look and feel like a local disk. The device mapper supplies a device /dev/mapper/hp3par which is equivalent to a local disk. The redundancy of multipath works below that layer; there is no need for PVE to know about it.
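If you want to double-check that, the mapper device should behave exactly like a local disk from the command line - something along these lines (I cannot test it here):
Code:
# the mapper device shows up like any ordinary local disk
lsblk /dev/mapper/hp3par
# and reports its raw size like one, too
blockdev --getsize64 /dev/mapper/hp3par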

This is a raw block device, so you still need to put something on it. Personally I prefer ZFS, but I am not sure whether that is recommended here - ZFS has no access to the actual hardware in this setup...

Just my understanding. I do NOT use a comparable configuration (but would like to...).
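If you do end up trying ZFS on it, the pool would simply be created on the mapper device and then registered in PVE - roughly like this (untested on my side; pool and storage names are only examples):
Code:
# create a pool directly on the multipath device ("hp3par-pool" is an arbitrary name)
zpool create -o ashift=12 hp3par-pool /dev/mapper/hp3par
# register it as a ZFS storage in Proxmox ("hp3par-zfs" is also just an example)
pvesm add zfspool hp3par-zfs --pool hp3par-pool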
 
Thank you @UdoB! That is exactly what I was hoping to hear. I have removed the iSCSI storage in the GUI and created a new LVM storage on top of the existing volume group (VG).
Code:
# pvdisplay /dev/mapper/hp3par
  --- Physical volume ---
  PV Name               /dev/mapper/hp3par
  VG Name               vg01
  PV Size               3,00 TiB / not usable 0  
  Allocatable           yes
  PE Size               4,00 MiB
  Total PE              786428
  Free PE               773372
  Allocated PE          13056
  PV UUID               fHkxtx-WXak-txKc-CS0O-uxsk-jpbc-Axg39a
The VG uses the multipathed PV /dev/mapper/hp3par, which is exactly what I wanted.
Thank you again, my friend.
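For anyone finding this thread later, the CLI equivalent of what I did on top of the multipath device is roughly the following (the storage name "hp3par-lvm" is just an example; the last step can also be done via Datacenter -> Storage -> Add -> LVM):
Code:
# create the physical volume and the volume group directly on the mapper device
pvcreate /dev/mapper/hp3par
vgcreate vg01 /dev/mapper/hp3par
# register the VG as an LVM storage in Proxmox
pvesm add lvm hp3par-lvm --vgname vg01 --content images,rootdir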
 
