Hi Everyone,
I originally included this HOWTO guide as a reply to someone else's post, but I'm posting it in its own thread since it may help others who struggle to get proper iSCSI MPIO working on a Proxmox cluster. Coming from an enterprise VMware ESXi background, I wanted my shared storage set up the same way in Proxmox (albeit without LVM snapshots, but Veeam fills that gap). Note that this configuration is done entirely from the shell, not through the web UI: I found that the GUI iSCSI storage setup does not work for MPIO, as it only creates a single PVE NIC-to-iSCSI IP link, which isn't helpful when the SAN has redundant IPs. The CLI process instead creates an MPIO device that covers all PVE NIC-to-iSCSI IP links. The MPIO alias is then added to the PVE storage list (like a datastore in ESXi). I've also included a quick & dirty MPIO check script to add to cron; it checks MPIO status every minute and sends an email alert should anything about MPIO change.
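The cron-based check described above could be sketched roughly like this (a minimal sketch, not the exact script from this post: the state file path, the recipient address, and the use of `multipath -ll` output plus the `mail` command are my assumptions):

```shell
#!/bin/sh
# Quick & dirty MPIO watcher sketch.
# Assumptions: multipath-tools and a working `mail` command are installed;
# adjust STATE_FILE and RECIPIENT for your environment.
# Run from cron every minute, e.g.:
#   * * * * * /usr/local/bin/mpio-check.sh

STATE_FILE=/var/tmp/mpio_state        # last known `multipath -ll` output
RECIPIENT=admin@example.com           # placeholder alert address

CURRENT=$(multipath -ll 2>&1)

# Alert only when the multipath topology differs from the saved state.
if [ -f "$STATE_FILE" ] && [ "$CURRENT" != "$(cat "$STATE_FILE")" ]; then
    printf '%s\n' "$CURRENT" | mail -s "MPIO change on $(hostname)" "$RECIPIENT"
fi

# Save the current state for the next run.
printf '%s\n' "$CURRENT" > "$STATE_FILE"
```

The idea is simply to diff the current `multipath -ll` output against the previous run and mail the new output when anything changed, so a failed path shows up within a minute.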
Best,
UG