HA cluster

  1. P

    Migrate VMs on outage without reboot

We have a 3-node cluster that is working correctly with high availability. Last night one of our nodes had an outage, so the HA manager migrated all VMs to another node as expected. But I noticed that all of our VMs were rebooted after the migration, which caused some issues with the VMs, since some of...
  2. K

Proxmox Cluster, SAN Storage FC, Over-Commit Storage, How?

Hello, I'm looking for a solution among the PVE storage types. Setup: Proxmox cluster (HA) with SAN storage over Fibre Channel. If I set up the LVM storage type, the problem is that I can't over-commit space for VMs. If I set up the LVM-thin storage type, the problem is that VM migration doesn't work. What should I use for storage...
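    The trade-off the poster describes can be seen directly in the storage definitions: plain LVM on a shared FC LUN can be marked `shared` (so VMs can migrate between nodes) but only allocates thick volumes, while LVM-thin supports over-commit but is node-local. A minimal sketch of the two `/etc/pve/storage.cfg` entries, with hypothetical volume-group names:

    ```
    # Shared LVM on the FC SAN LUN: migration works, no thin provisioning
    lvm: san-lvm
            vgname vg_san
            shared 1
            content images

    # LVM-thin on a local disk: over-commit works, but storage is per-node
    lvmthin: local-thin
            vgname vg_local
            thinpool data
            content images
    ```

    The storage IDs and VG names above are assumptions for illustration; only the `shared 1` flag and the `lvm`/`lvmthin` type keywords carry the distinction.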
  3. P

    Proxmox HA with CEPH

Back when we were running PVE 4.3, I had received some advice not to run HA on a hyper-converged CEPH cluster because of the effects of fencing and HA on a CEPH cluster. Now that we are running 5.1 and CEPH is a first-class citizen, I wanted to know if this is still a...
  4. Y

    Relocation error - ZFS HA

Hello, I'm testing HA on a 4-node cluster, Proxmox 5 latest version, ZFS local storage + replication. When one node goes down, the VPS is migrated correctly to the second node, but when the failed node comes back online I get this error on the failback: task started by HA resource agent 2017-11-14...
  5. H

    Looking for advice: Three node HA cluster without shared storage? (Ceph?)

Hi all. Some background on the environment first: four single-CPU DELL PowerEdge R620s. Currently running two standalone ESXi nodes, each with local storage and no HA or replication, one physical file server / domain controller, and a physical backup server with a tape drive. Almost pure...
  6. TwiX

    Guest has not initialized the display (yet)

Hi, 4 HA VMs were not able to reboot properly this morning at 4:30 am (reboot via cron). These VMs are running CentOS 7 64-bit with qemu-guest-agent (hosted on different nodes). It seems that my 3-node cluster is up to date: proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve) pve-manager...
  7. M

    Step-by-step two node HA setup for Proxmox 5.0

Hi, I know that's not officially supported, but I have no budget for an additional third server to provide HA. I want to prevent split-brain problems by using bonded, cross-connected network interfaces (no switch between the servers). I think that's good enough to prevent split-brain...
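    For context on what a two-node setup involves: corosync does have a votequorum option that lets a two-node cluster keep quorum when one node dies. A minimal sketch of the relevant excerpt of `/etc/pve/corosync.conf` (the surrounding sections are omitted; this is illustration, not a supported Proxmox configuration):

    ```
    quorum {
      provider: corosync_votequorum
      two_node: 1
    }
    ```

    Note that `two_node: 1` implicitly enables `wait_for_all`, and that Proxmox HA still relies on watchdog fencing, which is why two-node HA remains officially unsupported regardless of the quorum setting.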
  8. grin

    Clear HA error status from the GUI

It would be neat to have an HA function in the GUI to actually clear the error status of a given CT/VM. HA has a bad tendency to put things into the error state when it hits a timeout, or generally to hold some grudge against me. It would do no harm: if it's still bad, it'll go into error again anyway...
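    As a workaround until such a GUI button exists, the error state can be cleared from the CLI by cycling the requested state of the resource. A sketch, assuming a hypothetical service ID `vm:100`:

    ```
    # Disabling an HA resource clears its "error" state...
    ha-manager set vm:100 --state disabled
    # ...then re-enable it so the HA stack manages it again
    ha-manager set vm:100 --state started
    # Verify the resource state
    ha-manager status
    ```

    As the poster says, this is harmless in the same sense: if the underlying problem persists, the resource simply goes back into error.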
  9. R

    PVE 5 HA cluster with iSCSI multipath shared storage

Hi, I'm creating an environment; here are the details: HP blade servers, 3-node HA cluster, SAN iSCSI multipath shared storage, 2 x 10 Gb NICs in a bond. My question is: is it enough to create the cluster on the bond (2 x 10 Gb NICs), or are there any recommendations to avoid bottleneck/latency issues? Thanks
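    For reference, a bond like the one described is typically defined in `/etc/network/interfaces` roughly as below (interface names and addresses are hypothetical). A common recommendation, though, is to keep corosync cluster traffic on its own dedicated link rather than sharing the bond with storage and VM traffic, since corosync is latency-sensitive:

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup

    auto vmbr0
    iface vmbr0 inet static
        address 10.0.0.11
        netmask 255.255.255.0
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
    ```

    The bond mode is an assumption; LACP (`bond-mode 802.3ad`) would need matching switch support, which this switchless-bond question leaves open.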
  10. Y

    HA with 4 nodes and 2 DRBD pools

Hi, I work with several PVE clusters set up as 3 nodes + 1 DRBD9 pool. But is it possible (and correct) to set up a cluster of 4 nodes: - PVE1/PVE2 nodes with one DRBD pool - PVE3/PVE4 nodes with another DRBD pool - two HA groups (PVE1+PVE2, PVE3+PVE4). Is there any problem with this four-...
  11. C

    Hardware watchdog (ipmi_watchdog) on Proxmox 5

Dear colleagues, I moved to Proxmox 5 in a dev environment and was wondering how to set up the hardware watchdog. On the same hardware running Proxmox 4, the kernel module ipmi_watchdog was loaded. Now I can only find the following modules: lsmod | grep ipmi ipmi_ssif 24576 0...
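    On Proxmox 5 the HA stack defaults to the softdog module unless told otherwise; the usual way to switch to the IPMI hardware watchdog is via the watchdog-mux defaults file. A hedged sketch (run as root; assumes the hardware actually exposes an IPMI watchdog):

    ```
    # Check what is currently loaded
    lsmod | grep -E 'ipmi|softdog'

    # Try loading the hardware watchdog module manually
    modprobe ipmi_watchdog

    # Tell the Proxmox watchdog multiplexer to use it on boot
    echo 'WATCHDOG_MODULE=ipmi_watchdog' >> /etc/default/pve-ha-manager
    reboot
    ```

    If `modprobe ipmi_watchdog` fails, the module may simply not be built for the running kernel or the BMC may not be detected, which would explain the shorter `lsmod` output compared to Proxmox 4.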
  12. A

    [SOLVED] VM not migrated automatically

Hello, I have a new cluster of 3 nodes running version 4.4, and I'm trying to get my VMs migrated automatically if a node goes down. I am using Ceph as storage. I simulated a kernel panic and my VMs didn't migrate. Is there something special I have to do to enable the automatic migration please...
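    The usual answer to this kind of question is that a VM only fails over if it has been registered as an HA resource; shared storage alone is not enough. A sketch with a hypothetical VMID and group name:

    ```
    # Register the VM with the HA manager
    ha-manager add vm:100

    # Optionally constrain it to a group of nodes
    ha-manager groupadd prod --nodes node1,node2,node3
    ha-manager set vm:100 --group prod

    # Check that the resource shows up as managed
    ha-manager status
    ```

    Fencing via a working watchdog on every node is also required before the HA manager will recover resources from a dead node.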