iSCSI initiator multipath issue: cannot create LVM on top of LUN, PV, or VG

caslanx

New Member
Jun 4, 2025
Hello experts,

I recently undertook an infrastructure rebuilding project. I'm stuck at not being able to add/mount the previously created LUNs/data pools to PVE.
I have 2x Dell PowerEdge R650 and 1x PowerVault ME5024.
I haven't formed the cluster yet; from the first host I'm trying to add the PowerVault as storage.
There is a misconfiguration, or I'm missing a crucial step, but I cannot wrap my head around it. I was hoping the experts here could point me in the right direction.

I could wipe and format the LUNs and re-create them, but the previous team might have left some data that I don't want to discard. Also, VMware was used previously.

The ME5024 is connected directly to the host via an iSCSI SAS cable.
I upgraded PVE to version 9.
open-iscsi is installed.

I read through the multipath and iSCSI instructions at https://pve.proxmox.com/wiki/Multipath#Introduction and previous posts, but I cannot seem to make it work for my case.

The nc command returns connection refused for both IP addresses:
Code:
nc 10.57.70.200 3260
(UNKNOWN) [10.57.70.200] 3260 (iscsi-target) : Connection refused

iscsiadm discovery also returns connection refused:

Code:
iscsiadm -d 3 -m discovery -t st -p 10.57.70.200
iscsiadm: ip 10.57.70.200, port -1, tgpt -1
iscsiadm: Max file limits 1024 524288
iscsiadm: starting sendtargets discovery, address 10.57.70.200:3260,
iscsiadm: connecting to 10.57.70.200:3260
iscsiadm: cannot make connection to 10.57.70.200: Connection refused
iscsiadm: connecting to 10.57.70.200:3260
iscsiadm: cannot make connection to 10.57.70.200: Connection refused
iscsiadm: connecting to 10.57.70.200:3260
iscsiadm: cannot make connection to 10.57.70.200: Connection refused
iscsiadm: connecting to 10.57.70.200:3260
iscsiadm: cannot make connection to 10.57.70.200: Connection refused
iscsiadm: connecting to 10.57.70.200:3260
iscsiadm: cannot make connection to 10.57.70.200: Connection refused
iscsiadm: connecting to 10.57.70.200:3260
iscsiadm: cannot make connection to 10.57.70.200: Connection refused
iscsiadm: connection login retries (reopen_max) 5 exceeded
iscsiadm: Could not perform SendTargets discovery: iSCSI PDU timed out

Multipath devices are visible and seem to be in active, ready, running state.

[screenshots: multipath status]

Datacenter > Storage

[screenshot]

Datacenter > Storage > Add > LVM

[screenshots]

The PVE initiator name is different, but the Dell storage won't allow me to change the initiator name. The article below confirms that I'm using the correct naming convention, but the storage still won't accept the initiator name of the PVE host.

https://www.dell.com/support/kbdoc/...-name-must-use-standard-iqn-format-convention
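For completeness, this is where the initiator name comes from on the PVE side, as far as I understand (the IQN shown in the comment is just a placeholder, not my real one):

Code:
# Show the IQN that open-iscsi presents to targets
cat /etc/iscsi/initiatorname.iscsi
# Example content: InitiatorName=iqn.1993-08.org.debian:01:abcdef123456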



[screenshots]

Thank you in advance!
 
Hi @caslanx , welcome to the forum.

The ME5024 is connected directly to the host via an iSCSI SAS cable.
This is the primary source of your issues. There is no such thing as an iSCSI SAS cable. iSCSI is a storage protocol designed to run over TCP/IP; SAS is a point-to-point serial protocol for direct-attached storage, not something that runs over Ethernet or shares cabling with iSCSI.

The nc command returns connection refused for both IP addresses
A good indication that there is no iSCSI running or enabled on your storage device. It may be capable of iSCSI, but it is not configured at the moment.

Multipath devices are visible and seem to be in active, ready, running state.
Excellent, this means the host is successfully connected via SAS cables and the multipath daemon picked up the duplicate paths.
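If you want to confirm that from the CLI, something along these lines should do (lsscsi may need to be installed first; device names will differ on your system):

Code:
# Show SCSI devices with their transport; the ME5024 LUNs should list a sas: transport
lsscsi --transport
# Show the multipath maps and the per-path state (active/ready/running)
multipath -ll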


Now remove the iSCSI storage pool in PVE, use the "pvcreate" and "vgcreate" commands, then layer the LVM storage pool on top of your newly minted volume group.
Right now you are at this point in the article: https://kb.blockbridge.com/technote...multipath-device-as-an-lvm-physical-volume-pv
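As a rough sketch, using str01_pool_a_sssd as the mpath device name (double-check it with "multipath -ll"; the VG and storage-pool names below are just examples, adjust everything to your setup):

Code:
# Remove the non-working iSCSI storage definition (use your actual storage ID)
pvesm remove <iscsi-storage-id>
# Create the PV and VG on the multipath device, never on an underlying sdX path
pvcreate /dev/mapper/str01_pool_a_sssd
vgcreate vg_pool_a /dev/mapper/str01_pool_a_sssd
# Register the VG as an LVM storage pool in PVE
pvesm add lvm pool-a-lvm --vgname vg_pool_a --content images,rootdir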

Hope this helps!



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks for the warm welcome and your comments, @bbgeek17.

I read through this KB article to form the "pvcreate" and "vgcreate" commands, but a "?" came up in a couple of different tries. As these pools were previously used in a VMware environment, they appear to carry over signatures. I assumed "wipe it" means formatting the disk, so I proceeded with No; that seemed to create the storage, but no content, size, or status information was present.
I know my case is pretty particular.

As the KB article's common pitfalls section states, the metadata should match for the content to be visible. How can I view/find the metadata of these pools? As far as I can see, the lsblk, dev, and multipath commands are not yielding anything about the metadata.

[screenshot]
The pvcreate command would complain either about the device already being partitioned, or not being found, or being a multipath component.

[screenshots: pvcreate errors]
Now, with your guidance, I'm moving more confidently, but I can't seem to choose the correct physical volume, or the metadata is not matching.
[screenshot]

Any advice on how to proceed?

Thank you very much!
 
The pvcreate command would complain either about the device already being partitioned, or not being found, or being a multipath component
Hi @caslanx, please note you must use the multipath device (mpath) and never the underlying sdX device.
In your case it seems to be str01_pool_a_sssd.

But the device already contains a VMFS structure, and you do need to wipe the "mpath" device if you want to re-use it.
You may be able to mount VMFS in Linux for some limited use; please search "how to mount vmfs in linux" in Google or a search engine of your choice.
Note, this is only for the purposes of recovering data, not daily use.
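Once you have recovered whatever you need (or decided there is nothing to keep), a minimal sequence to reclaim the device would look something like this (destructive and irreversible, so double-check the device name first):

Code:
# List the signatures currently on the mpath device (read-only, makes no changes)
wipefs --no-act /dev/mapper/str01_pool_a_sssd
# Destroy the old VMFS/partition signatures (irreversible!)
wipefs --all /dev/mapper/str01_pool_a_sssd
# Then proceed with pvcreate/vgcreate as discussed above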



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 