Hi.
Recently we moved to a DC provider that gives us a Pure FlashArray and a Hitachi Open-V storage solution.
Both are FC!
After configuring them with LVM and also with ZFS, we found poor performance that does not match our expectations.
The first result is with LVM on the Hitachi Open-V:
This one is with the Pure FlashArray:
As you can see, there is a difference in IOPS and speed.
I really don't know what is going on.
This is the multipath configuration. I don't know if there is some optimization to do or not!
/etc/multipath.conf
defaults {
    user_friendly_names yes
    polling_interval 2
    path_selector "service-time 0"
    path_grouping_policy multibus
    path_checker readsector0
    getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
    rr_min_io 100
    failback immediate
    no_path_retry queue
}

blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "360060e8012bdd2005040bdd200000001"
}

multipaths {
    multipath {
        wwid "360060e8012bdd2005040bdd200000001"
        alias LUN0
    }
}
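One thing worth trying: instead of tuning everything in defaults, add a per-vendor devices section so each array gets its own settings. The sketch below is an assumption based on the kind of settings array vendors typically publish for Linux multipath (ALUA priority grouping, tur path checker, explicit fail/loss timeouts); the exact values and even the vendor/product strings should be verified against Pure's and Hitachi's current recommended-settings documentation for your multipath-tools version before use:

devices {
    device {
        # Hypothetical example; confirm against Pure's published Linux settings
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        path_grouping_policy group_by_prio
        prio alua
        hardware_handler "1 alua"
        path_checker tur
        failback immediate
        fast_io_fail_tmo 10
        dev_loss_tmo 60
    }
}

Also note that readsector0, getuid_callout, and rr_min_io are legacy directives; newer multipath-tools releases ignore or reject some of them, so checking multipathd show config output against this file would confirm what is actually in effect.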
Thanks for any help!