Search results

  1. D

    Live migration issues 4.1 + Ceph

    Hi, I had the same problems (live migration) when I had nodes on different versions.
  2. D

    lxc cluster with HA

    Hi, are you running the same versions on all nodes? Can you post your output from pveversion -v? I had a similar problem on LXC too (a Perl error) and I solved it with an upgrade.
  3. D

    HP proliant reboots

    I have had bad experiences with HA and HP, to be honest. hpwdt is in blacklist.conf and I am using softdog on all machines, but only the HP ones reboot :S (every 16 days, more or less). I will test whether it happens again or not. Thanks for the help! :D
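
    The blacklist-plus-softdog setup described above can be sketched as a config fragment (file paths follow standard Debian modprobe conventions; verify against your own nodes before relying on it):

    ```
    # /etc/modprobe.d/blacklist.conf -- keep the HP iLO hardware watchdog
    # driver from loading, since it caused panics/reboots in this thread
    blacklist hpwdt

    # With hpwdt blacklisted, Proxmox's watchdog-mux falls back to the
    # kernel software watchdog. softdog can also be loaded explicitly
    # at boot by adding a line to /etc/modules:
    #   softdog
    ```

    A reboot (or manual `rmmod hpwdt` / `modprobe softdog`) is needed for the change to take effect.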
  4. D

    HP proliant reboots

    The NMI watchdog was enabled with the hpwdt module. Thanks
  5. D

    HP proliant reboots

    On Proxmox 4.0 we had a kernel panic with the hpwdt module. =S OK, we will test this; mmm, this is the main cause of our problem with the reboots. OK, thank you for your interest, spirit, we will check this. Thanks! :D
  6. D

    HP proliant reboots

    I am running HA; that's the problem :(
  7. D

    HP proliant reboots

    Yes, SMH is installed from here: http://downloads.linux.hp.com/SDR/project/mcp/ Could I disable watchdog-mux to solve the problem?
  8. D

    HP proliant reboots

    Yes
    proxmox-ve: 4.1-26 (running kernel: 4.2.6-1-pve)
    pve-manager: 4.1-1 (running version: 4.1-1/2f9650d4)
    pve-kernel-4.2.6-1-pve: 4.2.6-26
    lvm2: 2.02.116-pve2
    corosync-pve: 2.3.5-2
    libqb0: 0.17.2-1
    pve-cluster: 4.0-29
    qemu-server: 4.0-41
    pve-firmware: 1.1-7
    libpve-common-perl: 4.0-41...
  9. D

    HP proliant reboots

    Hi, thanks for the reply, dietmar. In the log: # client watchdog expired - disable watchdog updates. And then the machine rebooted automatically.
  10. D

    cluster config '/etc/pve/corosync.conf' already exists

    Mmm, I don't know :S I think you tried to create a cluster node but it has crashed now; I don't know exactly what your problem is.
  11. D

    HP proliant reboots

    Hi, we have a cluster with 4 nodes (2 HP, 2 other machines). They had been up for 16 days, but they were rebooted by the Proxmox watchdog (only the HP machines). Could we disable this feature? Or any idea how we can solve this? Thanks, and sorry for my English!
  12. D

    cluster config '/etc/pve/corosync.conf' already exists

    Your corosync.conf looks wrong (you should see all the nodes of your cluster in it). The procedure is: node1# pvecm create YOUR-CLUSTER-NAME, then node2# pvecm add IP-ADDRESS-CLUSTER. Can you explain your cluster configuration? (number of nodes, and whether the rest of them work or not) More info...
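
    The two-step procedure above, expanded into a session sketch (the node names, cluster name, and address are placeholders, not values from the thread):

    ```
    # On the first node: create the cluster
    root@node1:~# pvecm create YOUR-CLUSTER-NAME

    # On each additional node: join it, pointing at the first node's IP
    root@node2:~# pvecm add IP-ADDRESS-CLUSTER

    # Afterwards, verify membership; every node should be listed
    root@node1:~# pvecm status
    ```

    Once the join completes, /etc/pve/corosync.conf should contain an entry for every node.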
  13. D

    cluster config '/etc/pve/corosync.conf' already exists

    Hi, could you post the output of pveversion -v?
  14. D

    cluster config '/etc/pve/corosync.conf' already exists

    Try first: systemctl restart corosync, then systemctl restart pve-cluster. If this does not work, try: apt-get install --reinstall corosync-pve. Good luck! :D
  15. D

    cluster config '/etc/pve/corosync.conf' already exists

    See this; I think it's the same problem. https://forum.proxmox.com/threads/cluster-is-down.25257/#post-126535 Good luck! :D
  16. D

    ha-manager bug?

    Hi, solved. I deleted the last } and it's working fine again! I had not edited the file before. =S
  17. D

    ha-manager bug?

    Should we do apt-get dist-upgrade, or which package? Thanks!
  18. D

    ha-manager bug?

    proxmox-ve: 4.1-26 (running kernel: 4.2.6-1-pve)
    pve-manager: 4.1-1 (running version: 4.1-1/2f9650d4)
    pve-kernel-4.2.6-1-pve: 4.2.6-26
    lvm2: 2.02.116-pve2
    corosync-pve: 2.3.5-2
    libqb0: 0.17.2-1
    pve-cluster: 4.0-29
    qemu-server: 4.0-41
    pve-firmware: 1.1-7
    libpve-common-perl: 4.0-41...
  19. D

    ha-manager bug?

    Hi guys! When I run ha-manager status, the response is "garbage after JSON object, at character offset 529 (before "}")" https://gyazo.com/7a730418370b61c53451f121bbe87d83 Thanks!
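
    The error above comes from a JSON parser finding text after the closing brace of an otherwise valid object, which matches the stray trailing } that fixed the issue earlier in this list. A minimal Python illustration of the same failure mode (the Proxmox HA tooling itself uses Perl's JSON module, so the exact wording differs):

    ```python
    import json

    valid = '{"service": "vm:100", "state": "started"}'
    broken = valid + '}'  # a stray trailing brace, as in the thread

    json.loads(valid)  # parses fine

    try:
        json.loads(broken)
    except json.JSONDecodeError as e:
        # Python reports "Extra data" with the offending offset;
        # Perl's JSON module reports "garbage after JSON object"
        print(e.msg, "at character offset", e.pos)
    ```

    Deleting everything after the object's final closing brace, as the poster did, makes the file parse again.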