Hi all,
today we did a bit of maintenance on a 6-node cluster that has been running for a year at a customer's datacenter!
We noticed that after upgrading a node to the latest kernel available for Proxmox 5.4 (v4.15.18-26)
we lost the connection to all LUNs located on the HP MSA 2040 (these are FC connected in full mesh mode...
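For anyone hitting the same thing: a rough sketch of the checks I'd run on the upgraded node (device names here are placeholders, not from our actual setup):

    # Check whether the FC HBAs still see the fabric after the reboot
    cat /sys/class/fc_host/host*/port_state

    # Rescan the SCSI buses (rescan-scsi-bus.sh ships with sg3-utils)
    rescan-scsi-bus.sh

    # Verify the MSA 2040 LUNs and their multipath paths are back
    lsscsi
    multipath -ll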
We've just added 2 nodes to an existing 4-node Proxmox 5.2 cluster.
These 2 nodes were wiped and freshly installed from the .iso....
After the cluster join,
the status is as shown in the attached image:
the last two nodes have lrm 'idle'.
Migrating a powered-off or non-HA-enabled VM works...
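As far as I know, lrm 'idle' is expected on nodes that don't yet run any HA-managed service; a quick sketch of how to check (nothing node-specific assumed):

    # Quorum and membership as seen by corosync
    pvecm status

    # HA view: CRM master, per-node lrm state, configured services
    ha-manager status

    # lrm only switches to 'active' once an HA-enabled service runs on the node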
Hi everybody,
we have a 3-node cluster running on v4.x using shared storage based on an HP MSA2040FC...
Our issue started after the update of 1 node....
We upgraded packages from the enterprise repository (on June 7th 2016!).
These upgrades touched the kernel and a lot of...
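A sketch, assuming standard Debian tooling, of how to see what that upgrade actually pulled in and which kernels are still bootable:

    # Kernel currently running
    uname -r

    # Installed PVE kernels (older ones stay selectable in the GRUB menu)
    dpkg -l 'pve-kernel-*'

    # Packages the enterprise-repo upgrade touched, with old and new versions
    grep ' upgrade ' /var/log/dpkg.log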
Guys,
I've installed a two-node + quorum disk cluster following the wiki articles.
Everything is built on two HP ProLiant DL360 Gen9 servers + a shared HP MSA 2040 SAS 6G storage box with multipath;
quorum is provided by an iSCSI target hosted on a QNAP NAS...
All seems to be working fine,
tests performed with...
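For completeness, the quorum-disk part on a Proxmox 3.x-era (cman) cluster looks roughly like this; the iSCSI device path and label below are placeholders:

    # Label the small iSCSI LUN exported by the QNAP as a quorum disk
    mkqdisk -c /dev/disk/by-id/scsi-QNAP_QUORUM_LUN -l pveqdisk

    # Verify both nodes see the label
    mkqdisk -L

The label then gets referenced from a <quorumd label="pveqdisk"/> entry in cluster.conf.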
Guys,
I'm completing the configuration using 2 HP DL380 Gen9 servers with SAS storage and multipath (thanks to adamb for some suggestions on multipathing over SAS!).
Now I'm testing fencing, with poor results....
My fencing settings for the nodes are as follows:
name="ipmi_fence_node2"...
Guys,
I'm going to reuse an HP packaged cluster based on 2x ProLiant DL380 G4 servers + SCSI MSA500 G2 storage to create a Proxmox 3.1 HA cluster.....
I'm unsure how to set up the shared storage (I suppose the nodes will correctly see the directly attached MSA500 with redundant SCSI...
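In case it helps: the usual approach for a shared SCSI LUN on Proxmox is LVM on top of the LUN, with the storage marked shared. A minimal sketch; the device path and names are placeholders, and the pvesm flags are from current docs, so they may differ slightly on 3.1:

    # On one node only: initialise the shared MSA500 LUN for LVM
    pvcreate /dev/sdb
    vgcreate msa500vg /dev/sdb

    # Register it cluster-wide as shared LVM storage (VM disks become plain LVs)
    pvesm add lvm msa500 --vgname msa500vg --content images --shared 1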