multipath does not work after reboot

jamacdon_hwy97.com

New Member
Apr 29, 2016
I have three hosts set up and have configured LVM storage on top of iSCSI.

When initially installed everything worked great and multipath was active. Under testing (pulling a cable), a failed path was identified as faulty almost immediately, and once the cable was plugged back in the path became active again.

The problem is that once a host is rebooted there are no multipath paths active. The iSCSI connections do exist, however we cannot get multipath to come online; multipath -ll shows nothing.
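
In case it helps, checking after a reboot looks roughly like this (the multipathd service name is an assumption on my part; on some Debian-based setups the service is called multipath-tools instead):

systemctl status multipathd   # check whether the daemon is running at all
iscsiadm -m session           # the iSCSI sessions do exist
multipath -ll                 # returns no output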

Here is what we get on a good working host:
multipath -ll
14f504e46494c45524d43584130612d334c30302d6d397869 dm-3 ,
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- #:#:#:# sdb 8:16 active undef running
`- #:#:#:# sdc 8:32 active undef running
14f504e46494c45524f48367a5a512d59466d492d706c4174 dm-4 ,
size=512G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- #:#:#:# sdf 8:80 active undef running
`- #:#:#:# sdg 8:96 active undef running
14f504e46494c455279454d4764582d39376d6a2d31543031 dm-5 ,
size=1000G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- #:#:#:# sdd 8:48 active undef running
`- #:#:#:# sde 8:64 active undef running

iscsiadm -m session
tcp: [1] 192.168.247.11:3260,1 iqn.2006-01.com.openfiler:tsn.d47af54d13f3 (non-flash)
tcp: [2] 10.10.1.11:3260,1 iqn.2006-01.com.openfiler:tsn.d47af54d13f3 (non-flash)
tcp: [3] 192.168.247.11:3260,1 iqn.2006-01.com.openfiler:tsn.opftarget2 (non-flash)
tcp: [4] 10.10.1.11:3260,1 iqn.2006-01.com.openfiler:tsn.opftarget2 (non-flash)
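
Since the problem only shows up after a reboot, I am guessing the automatic login setting of the recorded nodes matters here; it can be inspected with something like this (paths are the Debian defaults, so they may differ):

grep -r "node.startup" /etc/iscsi/nodes/
grep "node.startup" /etc/iscsi/iscsid.conf   # the global default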

The only way we have found to get multipath working again is to delete the folders /etc/iscsi/nodes and /etc/iscsi/send_targets and then reboot.
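
Concretely, the recovery steps on an affected host look roughly like this (paths as on our install, adjust as needed):

rm -rf /etc/iscsi/nodes /etc/iscsi/send_targets
reboot
# after the reboot the targets are rediscovered and multipath -ll shows the maps again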

It also does not seem to matter whether we have a /etc/multipath.conf file in place or not. The one we have tried looks like this:

defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
    rr_min_io 100
    failback immediate
    no_path_retry queue
}

blacklist_exceptions {
    wwid "14f504e46494c455279454d4764582d39376d6a2d31543031"
    wwid "14f504e46494c45524d43584130612d334c30302d6d397869"
    wwid "14f504e46494c45524f48367a5a512d59466d492d706c4174"
}

multipaths {
    multipath {
        wwid "14f504e46494c455279454d4764582d39376d6a2d31543031"
    }
    multipath {
        wwid "14f504e46494c45524d43584130612d334c30302d6d397869"
    }
    multipath {
        wwid "14f504e46494c45524f48367a5a512d59466d492d706c4174"
    }
}
#EOF
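
For what it is worth, reloading the configuration after editing the file would be something along these lines (the service may be called multipath-tools instead of multipathd depending on the release):

systemctl restart multipathd   # or: systemctl restart multipath-tools
multipath -r                   # force a reload of the device maps
multipath -ll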

The iSCSI server is Openfiler, which we are using for our in-house testing. When multipath is loaded properly it works great. We need to have confidence in the multipath system before we can put this into production; at that point we would implement the system with a DELL MD3620i as the iSCSI target.

Any help to get this working consistently would be much appreciated. We have experienced the same thing with Proxmox VE 4.1 and 4.2.
