[PVE 8.x] Dell ME5024 iSCSI LUN detected but multipath -ll shows nothing (paths grouped as undef)

hud1797

New Member
Aug 12, 2025
Environment
  • Proxmox VE: 8.4.0 (running kernel: 6.8.12-9-pve)
  • Host: Single PVE host (will be shared later across cluster)
  • iSCSI Target: Dell ME5024 (ME5 series), 1 volume/LUN (~28 TB)
  • Switching: Dual switches, dual-fabric iSCSI (separate subnets)
  • Jumbo MTU: 9000 end-to-end

PVE iSCSI network interfaces

auto eno12419np2
iface eno12419np2 inet static
address 192.168.10.11/24
mtu 9000

auto eno12429np3
iface eno12429np3 inet static
address 192.168.20.11/24
mtu 9000

ME5024 iSCSI ports
  • A0 = 192.168.10.13
  • A2 = 192.168.20.13
  • B0 = 192.168.10.14
  • B2 = 192.168.20.14
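Since the whole setup relies on MTU 9000 end-to-end, jumbo frames toward each portal can be sanity-checked with a don't-fragment ping (8972 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 9000):

# must succeed on both fabrics; "message too long" means something in the path is not at MTU 9000
ping -M do -s 8972 -c 3 -I eno12419np2 192.168.10.13
ping -M do -s 8972 -c 3 -I eno12429np3 192.168.20.13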
Sessions are up

tcp: [1] 192.168.20.13:3260,5 iqn.1988-11.com.dell:01.array.bc305b5e7cc1 (non-flash)
tcp: [2] 192.168.20.14:3260,6 iqn.1988-11.com.dell:01.array.bc305b5e7cc1 (non-flash)
tcp: [3] 192.168.10.13:3260,1 iqn.1988-11.com.dell:01.array.bc305b5e7cc1 (non-flash)
tcp: [4] 192.168.10.14:3260,2 iqn.1988-11.com.dell:01.array.bc305b5e7cc1 (non-flash)

Initiator name

/etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:57a347da1346

LUN visibility (all 4 paths present)

sda 447.1G ATA INTEL SSDSC2BB48
sdb 3.3T DELL PERC H755 Front
sdc 223.5G ATA DELLBOSS VD
sdd 27.7T DellEMC ME5
sde 27.7T DellEMC ME5
sdf 27.7T DellEMC ME5
sdg 27.7T DellEMC ME5


All paths share the same WWID

/lib/udev/scsi_id -g -u -d /dev/sd[d-g]
3600c0ff0006648c9d144de6801000000 (same for sdd/sde/sdf/sdg)

Problem

  • multipathd is running and kernel modules are loaded (dm_multipath, dm_round_robin, scsi_dh_alua).
  • But multipath -ll shows no output (no maps).
  • multipath -d /dev/sdd (and others) shows the device is recognized and grouped, but stuck at undef and the map is not activated:
30258.267962 | parse_vpd_pg83: invalid device designator at offset 32: 01000020
30258.267975 | sdd: uid = 3600c0ff0006648c9d144de6801000000 (sysfs)
: me5024_lun1 (3600c0ff0006648c9d144de6801000000) undef DellEMC,ME5
size=28T features='1 queue_if_no_path' hwhandler='1 alua' wp=undef
`-+- policy='service-time 0' prio=10 status=undef
`- 17:0:0:1 sdd 8:48 undef ready running
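
One thing worth ruling out at this point is multipathd itself filtering the device (blacklist or find_multipaths); the merged runtime config and the daemon's view of the paths can be dumped with:

multipathd show config | grep -A10 -i blacklist
multipathd show paths
multipathd show maps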

/etc/multipath.conf
--------------------------------
defaults {
    user_friendly_names yes
    find_multipaths no
    polling_interval 5
}

blacklist {
    devnode "^sda"   # OS disk
    devnode "^sdb"   # PERC controller
    devnode "^sdc"   # Dell BOSS card
}

blacklist_exceptions {
    wwid "3600c0ff0006648c9d144de6801000000"
}

devices {
    device {
        vendor "DellEMC"
        product "ME5"
        path_grouping_policy "group_by_prio"
        path_checker "tur"
        prio "alua"
        hardware_handler "1 alua"
        failback immediate
        rr_weight "uniform"
        path_selector "service-time 0"
        no_path_retry 30
        fast_io_fail_tmo 25
    }
}

multipaths {
    multipath {
        wwid 3600c0ff0006648c9d144de6801000000
        alias me5024_lun1
    }
}
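
For reference, config changes are picked up with a reload of the daemon, e.g.:

# re-read /etc/multipath.conf without restarting multipathd
multipathd reconfigure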

Please help. Thank you.
 
Hi @hud1797, welcome to the forum.

There was an issue with one of the critical packages being broken upstream (sg3_utils) in Trixie/PVE9. The way it was broken could have resulted in what you see. However, the issue was addressed in PVE9. As far as I know the problematic code did not affect Bookworm/PVE8.
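
It may also be worth confirming what is actually installed on the host, e.g.:

pveversion
dpkg -l | grep -E 'multipath-tools|sg3-utils'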

Some troubleshooting commands to run:
multipath -v3 -ll
/lib/udev/scsi_id -g -u -d /dev/sdd
/lib/udev/scsi_id -g -u -d /dev/sde
ls -l /dev/disk/by-id/ | grep 3600c0ff
udevadm info -q all -n /dev/sdd # look for ID_VENDOR, ID_MODEL, DM_MULTIPATH_DEVICE_PATH


multipath -F
multipath -v3
journalctl -f -u multipathd

# in another shell:
multipath -r



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hello Proxmox Community.

I’d like to share my progress on integrating DellEMC ME5024 storage with Proxmox VE using iSCSI and multipath.

Steps Completed

Created iSCSI Interfaces for Dedicated NICs

After running iscsiadm -m session -P 3 | grep -E "Target|Iface|Current Portal", I get:

Target: iqn.1988-11.com.dell:01.array.bc305b5e7cc1 (non-flash)
Current Portal: 192.168.20.13:3260,5
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:12ef5780a17f
Iface IPaddress: 192.168.20.11
Iface HWaddress: default
Iface Netdev: default
Target Reset Timeout: 30

Current Portal: 192.168.20.14:3260,6
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:12ef5780a17f
Iface IPaddress: 192.168.20.11
Iface HWaddress: default
Iface Netdev: default
Target Reset Timeout: 30

Current Portal: 192.168.10.14:3260,2
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:12ef5780a17f
Iface IPaddress: 192.168.10.11
Iface HWaddress: default
Iface Netdev: default
Target Reset Timeout: 30

Current Portal: 192.168.10.13:3260,1
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:12ef5780a17f
Iface IPaddress: 192.168.10.11
Iface HWaddress: default
Iface Netdev: default
Target Reset Timeout: 30

Every session is using Iface Name: default / Iface Netdev: default. That means open-iscsi is not pinning each portal to its NIC, so Linux may send traffic out either interface; when it picks the wrong NIC, the path throws errors (DID_TRANSPORT_DISRUPTED).
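
Whether those transport errors are actually being hit should be visible in the kernel log (exact wording varies, but iSCSI connection drops usually show up as "detected conn error"):

dmesg -T | grep -iE 'iscsi|conn error'
journalctl -k -b | grep -i iscsi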

So I created two iface configs and rebound the sessions:

# For 192.168.10.x network
iscsiadm -m iface -I iface10 -o new
iscsiadm -m iface -I iface10 -o update -n iface.net_ifacename -v eno12419np2

# For 192.168.20.x network
iscsiadm -m iface -I iface20 -o new
iscsiadm -m iface -I iface20 -o update -n iface.net_ifacename -v eno12429np3

And check them:

iscsiadm -m iface

You should see iface10 (bound to eno12419np2) and iface20 (bound to eno12429np3).
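
The output format is iface_name transport,hwaddress,ipaddress,net_ifacename,initiatorname, so it should look roughly like this (unset fields print as <empty>):

default tcp,<empty>,<empty>,<empty>,<empty>
iface10 tcp,<empty>,<empty>,eno12419np2,<empty>
iface20 tcp,<empty>,<empty>,eno12429np3,<empty>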

Log out existing sessions

iscsiadm -m node -U all

Then re-discover and re-login with the correct binding:

# 10.x ports
iscsiadm -m discovery -t sendtargets -p 192.168.10.13 -I iface10
iscsiadm -m discovery -t sendtargets -p 192.168.10.14 -I iface10
iscsiadm -m node -p 192.168.10.13:3260 -I iface10 --login
iscsiadm -m node -p 192.168.10.14:3260 -I iface10 --login

# 20.x ports
iscsiadm -m discovery -t sendtargets -p 192.168.20.13 -I iface20
iscsiadm -m discovery -t sendtargets -p 192.168.20.14 -I iface20
iscsiadm -m node -p 192.168.20.13:3260 -I iface20 --login
iscsiadm -m node -p 192.168.20.14:3260 -I iface20 --login

Then restart multipath

systemctl restart multipathd
multipath -F
multipath -r
multipath -ll

Verified iSCSI Sessions

iscsiadm -m session -P 3 | grep -E "Current Portal|Iface Netdev"

Output shows each portal bound to the correct NIC:
  • 192.168.10.x → eno12419np2
  • 192.168.20.x → eno12429np3
Configured Multipath
After restarting multipathd and reloading, the device now appears properly:
me5024_lun1 (3600c0ff0006648c9b5fadf6801000000) dm-15 DellEMC,ME5
size=28T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 15:0:0:1 sdd 8:48 active ready running
| `- 18:0:0:1 sdf 8:80 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 17:0:0:1 sde 8:64 active ready running
  `- 19:0:0:1 sdg 8:96 active ready running

✅ Current Status

  • iSCSI targets are successfully integrated.
  • Multipath is active and showing all four paths correctly with ALUA.

Oh, and I actually forgot to set MTU 9000 on the switch ports that connect to the storage. That could also have been part of why I was stuck. lol
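
For anyone following along, the usual next step to actually use the LUN in Proxmox is a shared LVM volume group on top of the multipath alias (the VG and storage names below are just examples):

pvcreate /dev/mapper/me5024_lun1
vgcreate vg_me5024 /dev/mapper/me5024_lun1
pvesm add lvm me5024-lvm --vgname vg_me5024 --content images,rootdir --shared 1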




Hopefully this helps others working with Proxmox + DellEMC ME5 iSCSI multipath setups.
 