Quick HOWTO on setting up iSCSI Multipath

uptonguy75

New Member
Nov 15, 2024
Hi Everyone,

I originally included this HOWTO guide as a reply to someone else's post, but I'm posting it in its own thread as it may help others who struggle to get proper iSCSI MPIO working on a Proxmox cluster. Coming from an enterprise VMware ESXi background, I wanted my shared storage set up the same way in Proxmox (albeit without LVM snapshots, but Veeam fills this gap).

Note that this configuration is done entirely from the shell, not through the web UI. I found that the GUI iSCSI storage setup does not work for MPIO, as it only creates a single PVE NIC-to-iSCSI IP link, which isn't helpful when the SAN has redundant IPs. The CLI process creates an MPIO device that covers all PVE NIC-to-iSCSI IP links. The MPIO alias is then added to the PVE storage list (like a Datastore in ESXi). I also included a quick & dirty MPIO check script for cron that checks MPIO status every minute and sends an email alert should anything about MPIO change.
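For reference, here's a minimal sketch of the kind of MPIO check script I mean (the full version is in the attached guide); the state file path, recipient address and mail setup below are placeholders you'd adapt to your environment:

#!/bin/bash
# mpio-watch.sh -- sketch: alert when the multipath topology changes.
# Run from cron every minute, e.g.:
#   * * * * * root /usr/local/bin/mpio-watch.sh
# Assumes a working mail(1) setup; paths and recipient are placeholders.

STATE=/var/tmp/mpio-watch.state       # last known "multipath -ll" output
ALERT_TO="admin@example.com"          # placeholder recipient
HOST=$(hostname -s)

CURRENT=$(multipath -ll 2>&1)

# First run: just record the current state.
if [ ! -f "$STATE" ]; then
    printf '%s\n' "$CURRENT" > "$STATE"
    exit 0
fi

# Compare against the last known state and alert on any difference
# (path added/removed, path state change, map gone, etc.).
if ! printf '%s\n' "$CURRENT" | diff -u "$STATE" - > /tmp/mpio-watch.diff; then
    mail -s "MPIO change detected on $HOST" "$ALERT_TO" < /tmp/mpio-watch.diff
    printf '%s\n' "$CURRENT" > "$STATE"
fi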

Best,
UG
 


Thank you. Could you please elaborate what your guide does differently than the official documentation?

https://pve.proxmox.com/wiki/Multipath

Hi LnxBil,

In the "Multipath" guide, this didn't work for me:

"Then, configure your iSCSI storage on the GUI ("Datacenter->Storage->Add->iSCSI"). In the "Add: iSCSI" dialog, enter the IP of an arbitrary portal. Usually, the iSCSI target advertises all available portals back to the iSCSI initiator, and in the default configuration, Proxmox VE will try to connect to all advertised portals."
Adding the iSCSI entry via the GUI only created a single new iSCSI interface called "default", which was tied to just one of my PVE iSCSI NICs. Since I have two PVE iSCSI NICs that need to be part of the multipath, it worked out better for me to individually scan the iSCSI portals via each of the two NICs:

iscsiadm -m discovery -I <nic_if1> --type sendtargets --portal <portal IP>:3260 --op=new --op=delete
iscsiadm -m discovery -I <nic_if2> --type sendtargets --portal <portal IP>:3260 --op=new --op=delete

This way, two separate interface entries were added to the iSCSI configuration and 8 paths were established (2 PVE NICs x 4 SAN NICs), as shown by multipath -ll.
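For completeness, the whole sequence looks roughly like this (the interface names and portal IP below are placeholders, not the actual values from my setup):

# Create one Open-iSCSI iface per PVE iSCSI NIC and bind it to the NIC:
iscsiadm -m iface -I iface-ens1f0 --op=new
iscsiadm -m iface -I iface-ens1f0 --op=update -n iface.net_ifacename -v ens1f0
iscsiadm -m iface -I iface-ens1f1 --op=new
iscsiadm -m iface -I iface-ens1f1 --op=update -n iface.net_ifacename -v ens1f1

# Discover the target through each iface, then log in to all discovered portals:
iscsiadm -m discovery -I iface-ens1f0 --type sendtargets --portal 10.10.42.10:3260 --op=new --op=delete
iscsiadm -m discovery -I iface-ens1f1 --type sendtargets --portal 10.10.42.10:3260 --op=new --op=delete
iscsiadm -m node --login

# Verify: with 2 PVE NICs and 4 SAN portals, multipath -ll should show 8 paths per LUN.
multipath -ll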

Once the multipath mapper device is added to PVE (pvesm add lvm <Datastore ID> --vgname <Datastore ID>), the original GUI-added non-multipath iSCSI entry just clutters the Datacenter->Storage list, since it is no longer needed once the multipath-backed LVM storage is listed.
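For context, turning the multipath device into a shared Datastore boils down to something like the following; the alias "san01" (set in multipath.conf) and the storage IDs are placeholders:

# Create the LVM physical volume and volume group on the multipath mapper device:
pvcreate /dev/mapper/san01
vgcreate san01 /dev/mapper/san01

# Register the VG as shared LVM storage in PVE (run on one node; the storage definition is cluster-wide):
pvesm add lvm san01 --vgname san01 --shared 1 --content images,rootdir

# Optionally remove the leftover GUI-added plain iSCSI entry (placeholder ID) once the LVM Datastore works:
pvesm remove san01-iscsi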

Later this week, I'll be setting up a new node so I can better document where the process fell apart for me and will re-post then.
 
As promised, when I added a new node this week, I took a closer look at the "Storage: iSCSI" & "Storage: Multipath" wiki pages. I significantly overhauled my iSCSI MPIO guide and re-uploaded it.

The "Storage: iSCSI" & "Storage: Multipath" pages don't address configuring multiple iSCSI interfaces per node. Without this, iSCSI connections will only use a single NIC (called "default" in the iSCSI configs). Therefore the MPIO config will only use half the available paths in a dual iSCSI NIC setup.

Coming from an enterprise VMware environment, I wrote the guide to simplify the process of getting shared iSCSI LVM storage with MPIO working in a Proxmox cluster. The guide covers:
  • Configuring dual iSCSI NICs
  • Configuring iSCSI with dual NICs
  • Adding a new shared iSCSI LUN with MPIO
  • Creating a new LVM Datastore on shared iSCSI LUN
  • Adding an existing shared iSCSI LVM Datastore
  • Removing an iSCSI LVM Datastore
  • Configuring watchdogs to generate email alerts:
    • Monitoring changes to MPIO status
    • Monitoring changes to Datastore availability status
  • Extra helpful scripts (sketched below):
    • Show only iSCSI connections for specific target
    • Show WWID mapped to specific block device
I hope that it will be useful to others who are looking to do the same thing.
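The two helper scripts are essentially one-liners along these lines (the target IQN and the device name are placeholders):

# Show only the iSCSI sessions belonging to one specific target:
iscsiadm -m session | grep iqn.2001-05.com.example:target01

# Show the WWID of a specific block device, to match it against the multipath maps:
/lib/udev/scsi_id -g -u -d /dev/sdX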
 


Hi, thank you for sharing your setup steps! I'd like to better understand in which sense the current Multipath wiki article [1] doesn't cover your setup. I see your guide uses the Open-iSCSI ifaces feature [2], where Open-iSCSI manages the network interfaces that connect to the SAN, whereas the wiki article doesn't -- there, the PVE host's network stack handles connectivity.

Do I understand correctly that the main reason for using the ifaces feature is that your SAN has two IPs in the same subnet (10.10.42.x/24 in your guide), and thus multipathing isn't really possible when using the host's network stack? If yes -- does the SAN also support configuring the two IPs in two disjoint subnets? If so, it should be possible to assign the PVE node two IPs, one in each subnet, and let the PVE host's network stack handle the networking (as in the wiki article).
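For illustration, such a layout could look roughly like this in /etc/network/interfaces (the interface names, addresses and subnets are made-up examples):

auto ens1f0
iface ens1f0 inet static
        address 10.10.42.11/24    # first iSCSI path, SAN portal in 10.10.42.0/24

auto ens1f1
iface ens1f1 inet static
        address 10.10.43.11/24    # second iSCSI path, SAN portal in 10.10.43.0/24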

[1] https://pve.proxmox.com/wiki/Multipath
[2] https://github.com/open-iscsi/open-iscsi/blob/df0f2bf9cba81333b9d171bfd0635eda522fcb5b/README#L586
 
Question: how would you go about using multipath for storage type iscsi (direct map)?
With direct map, do you mean the QEMU integration that is also used for, e.g., ZFS-over-iSCSI? AFAIK, QEMU did not have support for this in the past; I don't know the current status or whether it has been implemented by now.
 
