Sorry,
not tested yet...
but honestly, if you report this....
I'll avoid any kind of update!
we've already rescued a risky situation, moving with extreme caution across a minefield!
doing something defined as crazy (not officially supported!) with extreme caution, and finally reaching something stable...
well,
we've moved 1 unit per type to the new configuration!
(currently 3 nodes on v.5.4 / 3 nodes on v.6.1)
I can confirm,
QLogic HBAs are working fine with Proxmox v.6.1,
the new kernel seems to run fine; we're currently moving all VMs, one by one, from the old to the new environment,
just doing as I said before...
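For anyone following along, this is roughly what each single move looks like from the CLI, assuming the migration is done with qm migrate (the VM ID 100 and the target node name 'node4' are just placeholders):

# live-migrate one VM to a node that is already on 6.1
qm migrate 100 node4 --online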
guys,
moved some less relevant VMs from the old to the new installation (currently 1 node on v.6.1, 5 nodes on v.5.4)
all seems running fine,
no issues....
obviously, we're operating on the VMs with extreme caution,
transferring them from the less to the more relevant ones....
if no issues come up during the weekend...
@Stoiko Ivanov
going on with tests...
today I've moved all VMs away from one of the nodes (a ProLiant DL380 G6)
the node was removed from the current cluster, then fully cleaned up and freshly installed using the latest Proxmox 6.1 ISO....
with this kernel it seems the QLogic qla2xxx cards are correctly...
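Just as a sketch of the checks behind "correctly" (nothing official; lsscsi may need to be installed separately):

# did the qla2xxx driver initialize without errors?
dmesg | grep -i qla2xxx
# are the FC LUNs and multipath maps visible?
lsscsi
multipath -ll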
@Stoiko Ivanov
currently we've used all the storage space we have for this cluster,
we're not able to split the storage into more LUNs...
sure, I agree that,
by design, only one cluster may access the shared storage filesystem!
in the scenario I have in mind....
when the cluster structure is split...
guys,
still searching on Google for similar issues.....
it seems more than one Linux distro is experiencing issues with qla2xxx adapters
searching with the terms 'kernel qla2xxx lun issue' and limiting the search to the last month....
returns a lot of results...
before we take any new step....
two big...
Stoiko,
thanks for your answer,
as you suggested,
moving this production environment from 5.4 to the current 6.x is one of the alternatives we're evaluating right now...
the cluster is currently built on 6 nodes, all HPE servers, with MSA2040 FC storage and a full mesh connection using 2x Brocade 300...
Hi all,
did a bit of maintenance today on a 6-node cluster that has been running for a year at a customer's datacenter!
noticed that after upgrading a node to the latest available Proxmox 5.4 kernel (v.4.15.18-26)
we lost the connection to all LUNs located on the HP MSA 2040 (these are FC connected in full mesh mode...
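For reference, a quick way to see which kernel a node actually booted and whether the HBAs are still logged into the fabric (standard commands, the host numbers will differ per machine):

# kernel the node is running right now
uname -r
# FC HBA link state as seen by the kernel
cat /sys/class/fc_host/host*/port_state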
guys,
we've set up two more nodes....
these two are identical DL680 Gen8 units.
did a fresh install using the downloaded .iso image,
applied all updates (on these units we have an active subscription!)
then set the units up as cluster members and, after some "blank" runtime days,
moved some VMs to the new nodes...
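A minimal sketch of the join step, assuming it was done from the CLI with pvecm (the IP 192.168.1.10 is just a placeholder for an existing cluster member):

# on each freshly installed node, join the existing cluster
pvecm add 192.168.1.10
# then check quorum and membership
pvecm status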
@dcspack
did some more testing just this morning,
the only thing I did differently: I created a new HA group containing all 6 nodes
yesterday I was adding nodes 5 & 6 to the existing 'ha_pool' group
now 'live' migration is working fine in HA mode too...
really, I don't know if the action I've...
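In case it helps someone reproduce this, a sketch of the HA group change described above, with made-up group and node names:

# new HA group containing all six nodes
ha-manager groupadd ha_all6 --nodes "node1,node2,node3,node4,node5,node6"
# assign a VM to that group
# (use 'ha-manager set' instead if the VM is already an HA resource)
ha-manager add vm:100 --group ha_all6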
we've just added 2 nodes to an existing 4-node Proxmox 5.2 cluster
these 2 nodes have been cleaned up and freshly installed from .iso....
after cluster join,
the status looks as shown in the attached image:
the last two have LRM 'idle'
migrating a powered-off or non-HA-enabled VM is working...
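For anyone hitting the same 'idle' LRM state: as far as I understand it, 'idle' usually just means no HA resource is currently placed on that node. These are the standard commands I'd use to double-check the new members:

# cluster membership / quorum
pvecm status
# HA manager view: CRM master, per-node LRM state, resources
ha-manager status
# the HA services themselves on the new nodes
systemctl status pve-ha-lrm pve-ha-crm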
guys,
this thread may be important!!
so could one of you German mother-tongue forum members
translate it into English and ask the forum admins to move it to the 'global' forum?!?
i've spent many hours looking around for a solution!!
without finding it, because I was searching in English....
many...
@Alwin
here the results:
(BIOS release date is 02/22/2018!!)
# dmidecode 3.0
Getting SMBIOS data from sysfs.
SMBIOS 2.7 present.
133 structures occupying 4124 bytes.
Table at 0xDF7FE000.
Handle 0x0000, DMI type 0, 24 bytes
BIOS Information
Vendor: HP
Version: P62...
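Side note, in case it saves someone scrolling through the full table: dmidecode can print just those two fields directly, no reboot required:

dmidecode -s bios-version
dmidecode -s bios-release-date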
Currently unable to retrieve the BIOS version,
I really can't reboot any node,
hoping to give you feedback about this value on our 4x DL380 G6 ASAP!!
regards,
Francesco
due to our working needs,
we've rolled back to the previous kernel (4.13),
now all seems to operate correctly,
we've frozen updates to prevent 4.15 kernels from being installed on the nodes....
so far we've not experienced any system crash under heavy load
(like we did before the rollback!)
many...
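For reference, one way to freeze things like this on a node is to hold the 4.15 kernel package and keep GRUB pointed at the 4.13 one; the package name below is from memory, so check it first with dpkg:

# see which kernel packages are actually installed
dpkg -l 'pve-kernel*'
# put the 4.15 line on hold so updates don't pull it back in
apt-mark hold pve-kernel-4.15
# then keep the 4.13 kernel as the default boot entry
# (set GRUB_DEFAULT in /etc/default/grub and run update-grub)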
forum admins!!!
maybe this is related to:
https://forum.proxmox.com/threads/4-15-based-test-kernel-for-pve-5-x-available.42097/page-7
I'm going to crosslink these topics!!
so we can all look around!!
regards,
Francesco
forum admins!!!
maybe this is related to:
https://forum.proxmox.com/threads/4-15-17-kernel-panic.44714/#post-214424
I'm going to crosslink these topics!!
so we can all look around!!
regards,
Francesco