Hello, I have a problem with my Fibre Channel storage multipath setup.
I have a 3-node PVE cluster. I added the Fibre Channel storage (SAN) as shared storage (shared LVM) for live migration and high availability, and I set up 2 LUNs presented to all nodes. The LUNs are on different storage tiers, one SSD and one HDD.
The LUNs can be mounted on every node. However, multipath (MPIO) does not do round-robin load balancing, which I need for no-downtime failover across the Fibre Channel fabric and SAN switches. Right now it runs active/standby (one path group "active ready running", the other "active ghost running"), while for no-downtime failover both path groups should normally be "active ready running". How can I solve this? The configuration file is in the attachment and below.
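For reference, the current path grouping and the effective configuration (including any built-in device entry that may override my defaults) can be checked with the standard multipath-tools commands; I have not pasted the output here:

multipath -ll
multipathd show config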
WWID information:
HDD Tier: 36e0cc7a10010a94c0008fad000000002
SSD Tier: 36e0cc7a10010a94c1213fed600000004
My /etc/multipath.conf:
root@TKO01:~# cat /etc/multipath.conf
defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "36e0cc7a10010a94c0008fad000000002"
    wwid "36e0cc7a10010a94c1213fed600000004"
}
multipaths {
    multipath {
        wwid "36e0cc7a10010a94c0008fad000000002"
        alias mpath000
    }
    multipath {
        wwid "36e0cc7a10010a94c1213fed600000004"
        alias mpath001
    }
}
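From what I understand, the "ghost" state means those paths report ALUA standby from the array, so the defaults above may be getting overridden by multipath's built-in entry for this array. I wonder if a devices override would force a single path group. This is only a sketch, assuming the OceanStor reports vendor "HUAWEI" and product "XSG1"; the real strings should be verified with multipathd show config before using it:

devices {
    device {
        # assumed vendor/product strings for the OceanStor 5500 V5 - verify first
        vendor "HUAWEI"
        product "XSG1"
        # put every path into one group so round-robin spans all paths
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback immediate
        no_path_retry 15
    }
}

Whether multibus is even correct here depends on the LUNs being active-active on the array side; if each LUN has a single owning controller, the standby (ghost) paths are expected and forcing I/O down them would be wrong.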
Hardware details:
SAN storage model: Huawei OceanStor 5500 V5
HBA card: Emulex LPe16002B, 16Gb FC HBA, 2-port, SFP+
Thanks