@rickygm,
I'll try your suggestion...
I'll be on-site again this week, so I may implement it on one of the four nodes; then, if it works, I'll replicate it on the others!
I'll keep you all updated on test results!!
regards,
francesco
@Maurizio Marini, @rickygm
I've tested the multipath configuration on one of the four nodes of the cluster, rebuilding it from scratch (completely rewriting the multipath.conf file as stated in the wiki article, all according to the v4.x instructions), without success!!
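For reference, the check cycle I'm running after each rewrite of the file is roughly this (nothing special, just the standard multipath-tools commands):

service multipath-tools restart   # reload the rewritten multipath.conf
multipath -v2                     # rebuild the maps with some verbosity
multipath -ll                     # list the resulting multipath topology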
@braddy33
I've read your notes...
guys,
currently no solution yet...
I'll try to look deeper into the notes proposed by braddy33 in the next 2-3 days, then I'll update this thread with our results...
Please note I've seen some updates on the wiki page related to multipath over iSCSI configuration...
guys,
we're still looking around...
I've found this:
http://thread.gmane.org/gmane.linux.pve.user/6175/focus=6177
It seems the multipath-tools package may be buggy?!?
Do you think this may be related?!?
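For comparison, a quick way to check which multipath-tools build each node is actually running (standard Debian commands, in case anyone wants to compare with theirs):

dpkg -l multipath-tools            # installed version
apt-cache policy multipath-tools   # candidate versions per repository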
waiting,
regards,
francesco
@Jamacdon_hwy97.com,
our storage is directly connected using fibre patch cables,
we don't have any switch between servers and storage...
for us, everything was running fine prior to the kernel updates...
we're still waiting for other suggestions,
our issue is still present...
regards,
francesco
@adamb
as you requested:
I've uploaded some files; there you can find the output of the requested commands:
node1 --> updated --> multipath not working as expected
node2 --> not updated yet --> multipath working fine!
here I've omitted the dry-run multipath output from the well-working...
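(just to be clear, by "dry-run multipath command" I mean something like the following, which only prints what multipath would do without actually creating the maps:)

multipath -d -v3    # -d = dry run, -v3 = verbose output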
Caspar,
to my knowledge,
Gen 5 HP ProLiant servers...
have no built-in support for UEFI...
(I'm not sure whether it was added in the latest BIOS versions)
I've successfully installed 3.x versions on these machines (DL380 G5!) in the past, but I've never tested them with v4.x.
I guess you've already applied...
Adam,
many thanks for your interest in our issue...
I'm currently off-site...
I think Matteo (he's the IT manager at the customer's site) will answer you soon with those two pieces of information (from node1, which was already updated, and from node2, not yet updated!)
I confirm that:
we're presenting to the...
guys,
we're still in trouble here!!!
Is anyone looking into a solution, or able to help us pinpoint the origin of this issue?!?
I'm searching the web without finding a quick solution;
it seems others are having similar issues in other environments...
Jandro,
I replaced the multipath.conf line as you specified and ran:
service multipath-tools restart
Here is a syslog dump:
Jun 08 16:54:54 piva-pve1 multipath-tools[902]: Stopping multipath daemon: multipathd.
Jun 08 16:54:54 piva-pve1 multipathd[9037]: --------shut down-------
Jun 08 16:54:54...
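After the restart I'm checking that the daemon and the maps come back with (nothing unusual, just for completeness):

systemctl status multipathd    # daemon state after the restart
multipath -ll                  # current multipath topology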
Hi Jandro,
thanks for your attention!
well,
yes,
I've checked it,
I haven't changed the file since the restart...
Here is the multipath.conf dump:
blacklist {
        wwid .*
}
blacklist_exceptions {
        wwid "3600c0ff000271f5d02ff105701000000"
        wwid "3600c0ff00027217d28ff105701000000"
}...
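For context, the rest of the file follows the structure suggested by the wiki article; roughly something like this (a sketch only, the defaults values and the alias names below are illustrative, not necessarily the exact ones we use):

defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        rr_min_io               100
        failback                immediate
        no_path_retry           queue
        user_friendly_names     no
}
multipaths {
        multipath {
                wwid "3600c0ff000271f5d02ff105701000000"
                alias msa-lun0    # alias names here are just examples
        }
        multipath {
                wwid "3600c0ff00027217d28ff105701000000"
                alias msa-lun1
        }
}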
hi everybody,
we have a 3-node cluster running on v4.x, using shared storage based on an HP MSA 2040 FC...
our issue started after the update of one node...
we upgraded the packages using the enterprise repository (on June 7th 2016!)
these upgrades touched the kernel and a lot of...
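To make the comparison between the updated and the not-yet-updated nodes easier, the version info we're collecting from each node is simply (standard commands):

pveversion -v   # full list of Proxmox/kernel package versions on the node
uname -r        # currently running kernel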
guys,
as you suggested...
we've added an APC 7902 device as additional fencing...
now all is running fine....
many thanks again for your help!
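In case it helps someone else, the fencing part of our cluster.conf ended up looking roughly like this (a sketch from memory; IP address, credentials, node name and port number are placeholders, not our real values):

<fencedevices>
        <fencedevice agent="fence_apc" name="apc" ipaddr="192.168.1.100" login="apc" passwd="secret"/>
</fencedevices>
...
<clusternode name="node1" nodeid="1" votes="1">
        <fence>
                <method name="power">
                        <device name="apc" port="1"/>
                </method>
        </fence>
</clusternode>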
regards,
francesco
ok,
many thanks,
I'll evaluate it with the customer...
if they have an old server able to run Proxmox, we'll upgrade to v4.x and keep the third (older) node as a "minor node" just to meet the HA and quorum requirements...
if this option isn't available, we'll install an APC managed power...
yes, we're using iLO,
it's the only fencing device we have in this configuration!
Could you suggest a known working APC device model to use for this config?!?
thanks
Francesco
guys,
I've installed a two-node + quorum disk cluster following the wiki articles.
Everything is built on two HP ProLiant DL360 Gen9 servers + a shared HP MSA 2040 SAS 6G storage box with multipath;
quorum is provided by an iSCSI target placed on a QNAP NAS...
everything seems to be working fine,
tests performed with...
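For anyone wanting to replicate the quorum part, it was set up along the lines of the wiki article, roughly like this (a rough sketch; the device path and the label are placeholders for our real ones):

mkqdisk -c /dev/disk/by-id/scsi-<qnap-iscsi-lun> -l proxmox_qdisk   # initialise the quorum disk on the iSCSI LUN

and the corresponding line in cluster.conf:

<quorumd allow_kill="0" interval="1" label="proxmox_qdisk" tko="10" votes="1"/>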
Adam,
I did some more testing...
without success...
Currently the units are not running in a production environment but are still in our lab for preconfiguration, so...
I've chosen to completely rebuild the units and redo all the configs (from OS to storage, multipath, fencing, cluster and so on...)
now...