Search results

  1. Proxmox v.5.4 vs QLogic FC HBA

    Sorry, not tested yet... but honestly, if you're reporting this... I'll avoid any kind of update! We've already saved a risky situation by moving with extreme caution across a minefield! doing something described as crazy (not officially supported!) with extreme care, and finally reaching stable...
  2. Proxmox v.5.4 vs QLogic FC HBA

    Guys, cluster migration completed... now all nodes are running on v.6.1, no issues at all! Regards, Francesco
  3. Proxmox v.5.4 vs QLogic FC HBA

    Well, we've moved one unit of each type to the new configuration! (currently 3 nodes on v.5.4 / 3 nodes on v.6.1) I can confirm the QLogic HBAs are working fine with Proxmox v.6.1, and the new kernel seems to run fine. We're now moving all VMs, one by one, from the old to the new environment, just as I said before...
  4. Proxmox v.5.4 vs QLogic FC HBA

    Guys, we moved some less important VMs from the old to the new installation (currently 1 node on v.6.1, 5 nodes on v.5.4). All seems to be running fine, no issues... obviously we're operating on the VMs with extreme caution, transferring them from the least to the most important ones... if there are no issues during the weekend...
  5. Proxmox v.5.4 vs QLogic FC HBA

    @Stoiko Ivanov carrying on with the tests... today I've moved all VMs away from one of the nodes (a ProLiant DL380 G6); the node was removed from the current cluster, fully cleaned up, and freshly installed using the latest Proxmox 6.1 ISO... with this kernel it seems the QLogic qla2xxx cards are correctly...
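
    A minimal sketch of that drain-and-reinstall flow, assuming hypothetical VM IDs and node names (qm and pvecm are the standard Proxmox VE CLIs):

        # on the node being emptied: move each VM to another member
        # (VM ID 101 and target node pve2 are assumptions)
        qm migrate 101 pve2 --online

        # once empty, from another node: drop it from the cluster
        pvecm delnode pve1

        # after the fresh 6.1 install, re-join it to the cluster
        pvecm add <ip-of-an-existing-cluster-node>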
  6. Proxmox v.5.4 vs QLogic FC HBA

    @Stoiko Ivanov at the moment we've used all the storage space we have for this cluster, so we're not able to split the storage into more LUNs... sure, I agree that, by design, only one cluster may access the shared storage filesystem! In the scenario I have in mind... once the cluster structure has been split...
  7. Proxmox v.5.4 vs QLogic FC HBA

    Guys, still searching Google for similar issues... it seems more than one Linux distro is experiencing problems with qla2xxx adapters. Searching with the terms 'kernel qla2xxx lun issue' and limiting the search to the last month... turns up a lot of results... before we take any new step... two big...
  8. Proxmox v.5.4 vs QLogic FC HBA

    Stoiko, thanks for your answer. As you suggested, moving this production environment from 5.4 to the current 6.x is one of the alternatives we're evaluating... the cluster is currently built on 6 nodes, all HPE servers, with MSA 2040 FC storage and a full-mesh connection using 2x Brocade 300...
  9. Proxmox v.5.4 vs QLogic FC HBA

    Hi all, today we did a bit of maintenance on a 6-node cluster that has been running for a year at a customer's datacenter! We noticed that after upgrading a node to the latest kernel available for Proxmox 5.4 (v.4.15.18-26) we lost the connection to all LUNs located on the HP MSA 2040 (these are FC-connected in full-mesh mode...
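
    A hedged sketch of how that LUN loss might be confirmed from the shell (the host number is an assumption, and lsscsi may need to be installed separately):

        # check what the qla2xxx driver logged after the kernel upgrade
        dmesg | grep -i qla2xxx

        # list the SCSI devices currently visible (the MSA LUNs should appear here)
        lsscsi

        # force a rescan of one FC host adapter (host1 is an assumption)
        echo "- - -" > /sys/class/scsi_host/host1/scan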
  10. 4.15 based test kernel for PVE 5.x available

    Guys, we've set up two more nodes... these two are identical DL680 Gen8s. We did a fresh install using the downloaded .iso image, applied all updates (on these units we have an active subscription!), then set the units up as cluster members, and after some "blank" runtime days, moved some VMs to the new nodes...
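
    For reference, joining a freshly installed node to an existing cluster is done with pvecm; the IP below is an assumption:

        # on the new node, after install and updates
        pvecm add 192.0.2.10    # IP of an existing cluster member (assumption)

        # verify membership afterwards
        pvecm status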
  11. nodes added to proxmox 5.2 cluster appears 'idle'

    @dcspack did more testing just this morning. The only thing I did differently: I created a new HA group containing all 6 nodes. Yesterday I was adding nodes 5 & 6 to the existing 'ha_pool' group; now 'live' migration is working fine in HA mode too... really, I don't know if the action I've...
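
    A minimal sketch of creating such an HA group across all six nodes (the group and node names are assumptions; ha-manager is the real Proxmox VE CLI):

        # create an HA group spanning all six nodes
        ha-manager groupadd ha_all --nodes "node1,node2,node3,node4,node5,node6"

        # put a VM under HA management in that group (vm:100 is an assumption)
        ha-manager add vm:100 --group ha_all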
  12. nodes added to proxmox 5.2 cluster appears 'idle'

    We've just added 2 nodes to an existing 4-node Proxmox 5.2 cluster. These 2 nodes were cleaned up and freshly installed from the .iso... after the cluster join, the status shows as in the attached image: the last two have LRM 'idle'. Migrating a powered-off or non-HA-enabled VM is working...
  13. Proxmox 5.1 Wiki passthrough tape iscsi (solved)

    Guys, this thread may be important!! Could one of you German-mother-tongue forum members translate it into English and have the forum admins move it to the 'global' forum?!? I've spent many hours looking around for a solution!! without finding it, because I was searching in English... many...
  14. 4.15 based test kernel for PVE 5.x available

    @Alwin here are the results (BIOS release date is 02/22/2018!!):

        # dmidecode 3.0
        Getting SMBIOS data from sysfs.
        SMBIOS 2.7 present.
        133 structures occupying 4124 bytes.
        Table at 0xDF7FE000.

        Handle 0x0000, DMI type 0, 24 bytes
        BIOS Information
            Vendor: HP
            Version: P62...
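
    Instead of the full table dump, dmidecode can also be asked for just the BIOS fields:

        # print only the BIOS section of the DMI table
        dmidecode -t bios

        # or pull single values directly
        dmidecode -s bios-version
        dmidecode -s bios-release-date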
  15. 4.15 based test kernel for PVE 5.x available

    Currently unable to retrieve the BIOS version; I really can't reboot any node right now. Hoping to give you feedback about this value on our 4x DL380 G6 ASAP!! Regards, Francesco
  16. 4.15 based test kernel for PVE 5.x available

    Menno, we're working with HP DL380 G6 too... we solved it by rolling back to a 4.13.xx kernel. Many thanks again for your time. Regards, Francesco
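
    One way to pin the boot to an older installed kernel on these PVE 5.x nodes; the menu-entry title below is an assumption, the exact string is in /boot/grub/grub.cfg:

        # make GRUB remember an explicitly selected entry
        sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' /etc/default/grub
        update-grub

        # select the 4.13 entry (submenu>entry title is an assumption)
        grub-set-default 'Advanced options for Proxmox Virtual Environment GNU/Linux>Proxmox Virtual Environment GNU/Linux, with Linux 4.13.16-2-pve'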
  17. 4.15.17 kernel panic

    Guillaume, same here! Using the 4.13 kernel, no issues!! We're currently running on this environment!
  18. 4.15.17 kernel panic

    Due to our working needs, we've rolled back to the previous kernel (4.13). Now everything seems to operate correctly; we've frozen updates to prevent 4.15 kernels from being installed on the nodes... so far we've not experienced any system crash under heavy load (as happened before the rollback!) many...
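
    "Freezing" updates like that can be done by holding the kernel packages with apt; the package name below is an assumption, check dpkg -l 'pve-kernel*' for the exact ones:

        # list the installed kernel packages first
        dpkg -l 'pve-kernel*'

        # hold the 4.15 series so apt won't pull newer 4.15 builds
        apt-mark hold pve-kernel-4.15

        # confirm the hold
        apt-mark showhold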
  19. 4.15.17 kernel panic

    Forum admins!!! Maybe this is related to: https://forum.proxmox.com/threads/4-15-based-test-kernel-for-pve-5-x-available.42097/page-7 I'm going to cross-link these topics!! so we can all look around!! Regards, Francesco
  20. 4.15 based test kernel for PVE 5.x available

    Forum admins!!! Maybe this is related to: https://forum.proxmox.com/threads/4-15-17-kernel-panic.44714/#post-214424 I'm going to cross-link these topics!! so we can all look around!! Regards, Francesco
