Back when we were running PVE 4.3, I received some advice not to run HA on a hyper-converged Ceph cluster because of the effects of fencing and HA on a Ceph cluster. Now that we are running 5.1 and Ceph is a first-class citizen, I wanted to know if this is still a...
Hello,
I'm testing HA on a 4-node Proxmox 5 cluster (latest version) with ZFS local storage + replication.
When one node goes down, the VM is migrated correctly to the second node, but when the failed node comes back online I get this error on the failback:
task started by HA resource agent
2017-11-14...
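For reference, one way to dig into a failed failback like this is to compare the replication and HA state and then re-request the started state; a minimal sketch, where vm:100 is a placeholder VMID, not taken from the post above:
# overview of all HA resources and their current state
ha-manager status
# state of the ZFS storage replication jobs
pvesr status
# if the resource is stuck in the error state, disable it and start it again
ha-manager set vm:100 --state disabled
ha-manager set vm:100 --state started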
Hi all.
Some background on the environment first:
Four single-CPU Dell PowerEdge R620s.
Currently running two standalone ESXi nodes, each with local storage and no HA or replication, plus one physical file server / domain controller and a physical backup server with a tape drive.
Almost pure...
Hi,
Four HA VMs were not able to reboot properly this morning at 4:30 (reboot via cron).
These VMs are running CentOS 7 64-bit with qemu-guest-agent (hosted on different nodes).
My 3-node cluster seems to be up to date:
proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager...
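A reasonable first step here is to pull the HA manager logs around the cron window and confirm the guest agent is reachable; a rough sketch, with the VMID and time window as placeholders:
# local resource manager log on the node that hosted a failing VM
journalctl -u pve-ha-lrm --since "04:00" --until "05:00"
# cluster resource manager log on the current HA master node
journalctl -u pve-ha-crm --since "04:00" --until "05:00"
# check that the qemu-guest-agent inside the guest answers
qm agent 100 ping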
Hi
I know that's not officially supported, but I have no budget for an additional third server to provide HA.
I want to prevent split-brain problems by using bonded, cross-connected network interfaces (no switch between the servers). I think that's good enough to prevent split-brain...
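For a setup like this, corosync's votequorum has a two_node mode; a minimal sketch of the quorum section in /etc/pve/corosync.conf, assuming the corosync 2.x shipped with PVE 5 (note that HA fencing with only two votes remains risky, since neither node can ever out-vote the other):
quorum {
  provider: corosync_votequorum
  # keep the cluster quorate with one of two nodes;
  # this implicitly enables wait_for_all on a cold start
  two_node: 1
}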
It would be neat to have an HA function in the GUI to actually clear the error status of a given CT/VM.
HA has a bad tendency to put things into the error state after a timeout, or to generally hold a grudge against me. It would do no harm: if the resource is still bad, it will just go into error again anyway...
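Until such a button exists, the same thing can be done from the CLI; a short sketch, with vm:100 standing in for the affected resource:
# moving a resource to 'disabled' clears the error state ...
ha-manager set vm:100 --state disabled
# ... and requesting 'started' puts it back under normal HA control
ha-manager set vm:100 --state started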
Hi,
I'm creating an environment; here are the details:
3-node HA cluster on HP blade servers
SAN iSCSI multipath shared storage
2x 10Gb NICs in a bond
My question is:
Is a single bond of the 2x 10Gb NICs enough for the cluster network, or are there any recommendations to avoid bottleneck/latency issues?
Thanks
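One common recommendation is to keep corosync on its own dedicated link (even 1Gb is fine) rather than sharing the storage bond, because cluster traffic is latency-sensitive rather than bandwidth-hungry. A rough /etc/network/interfaces sketch of an LACP bond for the storage/VM traffic, with interface names and addresses as placeholders (802.3ad also needs LACP configured on the switch side):
# /etc/network/interfaces fragment - names and addresses are examples only
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0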
Hi,
I work with several PVE clusters set up with 3 nodes + 1 DRBD9 pool.
But is it possible (and correct) to set up a cluster of 4 nodes:
- PVE1 / PVE2 nodes with 1 DRBD pool
- PVE3 / PVE4 nodes with another DRBD pool
- two HA groups (PVE1+PVE2 and PVE3+PVE4), as sketched below
Is there any problem in this HA four...
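The node-pair restriction itself can be expressed with restricted HA groups; a minimal sketch, where the group names and the VMID are placeholders:
# one restricted group per DRBD pool, so resources never leave their pair
ha-manager groupadd drbd-pool-a --nodes pve1,pve2 --restricted 1
ha-manager groupadd drbd-pool-b --nodes pve3,pve4 --restricted 1
# attach a VM to the group that matches its storage
ha-manager add vm:100 --group drbd-pool-a --state started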
Dear colleagues,
I moved to Proxmox 5 in a dev environment and was wondering how to set up the hardware watchdog.
On the same hardware running Proxmox 4, the ipmi_watchdog kernel module was loaded.
Now I can only find the following modules:
lsmod | grep ipmi
ipmi_ssif 24576 0...
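On PVE 5 the HA stack's watchdog multiplexer chooses its module via /etc/default/pve-ha-manager, and hardware watchdog modules are blocked by default in favour of softdog; a minimal sketch, assuming the IPMI watchdog is the one wanted here:
# /etc/default/pve-ha-manager
# tell watchdog-mux to use the hardware IPMI watchdog instead of the softdog default
WATCHDOG_MODULE=ipmi_watchdog
After changing this, rebooting the node is the safest way to have watchdog-mux pick it up; stopping or restarting watchdog services on a live HA node can trigger a reset.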
Hello,
I have a new 3-node cluster on version 4.4 and am trying to get my VMs migrated automatically if a node goes down.
I am using Ceph as storage.
I simulated a kernel panic and my VMs didn't migrate.
Is there something special I have to do to enable automatic migration, please...
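In case it helps anyone finding this later: a cluster plus Ceph alone is not enough; each VM also has to be added as an HA resource before the manager will recover it on another node. A minimal sketch, with VMID 100 as a placeholder:
# register the VM with the HA manager and request it started
ha-manager add vm:100 --state started
# verify the resource shows up and which node currently runs it
ha-manager status
Expect roughly two minutes after a node failure before the surviving nodes consider it fenced and restart the resource elsewhere.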