I am running a PVE cluster over WAN (different datacenters across the globe). It has worked flawlessly the whole time and suits my needs well (of course no shared storage, LM or HA, but still central management, easy offline migrations etc). Some time ago I upgraded to PVE 6.0 and was able to run the...
One of my projects uses a Proxmox unicast cluster at Hetzner. A few months ago Hetzner introduced vSwitches, and from my first impression they work very well. Meanwhile, all cluster nodes have a bridged interface on a shared vSwitch, which is also already used for all VMs. On these interfaces...
Is there a way to configure pvecm without using multicast?
Is there a way, when using pvecm to create a cluster, to force it to use the internal rather than the public IP address?
Our Proxmox nodes each have a public IP address, which is automatically used for cluster management.
I was able to run a...
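For what it's worth, on PVE 6.x corosync 3 uses unicast (kronosnet) by default, so no multicast is required, and pvecm lets you pin the cluster link to a specific address. A sketch, with the cluster name and 10.0.0.x addresses as placeholders for your internal network:

```
# Create the cluster bound to the internal address (PVE 6.x syntax):
pvecm create mycluster --link0 10.0.0.1

# When joining another node, pass that node's own internal address for link0:
pvecm add 10.0.0.1 --link0 10.0.0.2
```

That way the public interface never carries corosync traffic.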
I have a cluster running in multicast mode, and the provider stopped supporting multicast: it was turned off during a switch firmware upgrade, and they don't want to enable it again.
Is there a procedure to switch from multicast to unicast without breaking things, and with little or no...
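One common approach on corosync 2.x (PVE 4.x/5.x) is to set `transport: udpu` in the totem section of /etc/pve/corosync.conf (edit the pmxcfs copy, not /etc/corosync/corosync.conf directly, so it propagates to all nodes). A minimal sketch; the cluster name is a placeholder and the version value must be adapted:

```
totem {
  version: 2
  config_version: 4        # must be incremented past the current value
  cluster_name: mycluster  # keep your existing cluster name
  transport: udpu          # UDP unicast instead of multicast
}
```

Corosync then needs a restart on every node, so plan a short maintenance window where the cluster temporarily loses quorum.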
I'm facing this problem on my server running Proxmox: the server goes down often, and if I check /var/log/dmesg in my LXC container I see this:
[96778.325439] IPv6: ADDRCONF(NETDEV_UP): veth102i0: link is not ready
[96778.595101] fwbr102i0: port 1(fwln102i0) entered disabled state...
I'm writing this short article because I spent a lot of time finding the right configuration to get this working.
First of all: a two-node cluster is not a very good way to build high-availability or even failover scenarios, because corosync is not able to create a...
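For reference, the corosync votequorum option that makes a two-node setup workable at all is `two_node`. A sketch of the relevant quorum section (note that `two_node: 1` implicitly enables `wait_for_all`):

```
quorum {
  provider: corosync_votequorum
  two_node: 1    # the surviving node keeps quorum when its peer is down
}
```

This avoids the 50% vote deadlock, but it cannot distinguish a dead peer from a split network, which is why a third node or a QDevice is still the safer option.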
Hoping someone would be able to assist,
I'm trying to configure a 3-node cluster: two nodes on the same physical network and a third node at a remote site.
This third node sits behind a pfSense firewall that has a site-to-site VPN connection to my pfSense router.
I'm able to cluster the first two...
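In case it helps, these are the ports that generally need to pass through the VPN between the remote node and the rest of the cluster (taken from the Proxmox documentation as I understand it; verify against your version):

```
UDP 5404-5405   # corosync cluster traffic
TCP 22          # SSH (node join, migrations)
TCP 8006        # Proxmox web API / GUI proxying between nodes
TCP 5900-5999   # VNC console traffic between nodes
```

Also keep in mind corosync is sensitive to latency, so a WAN link behind a VPN can cause quorum flapping even when the ports are correct.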
We are having trouble with a 3-node cluster in which every node reboots simultaneously every few days.
We are using OVH servers, so only unicast is available; OVH vRacks are more expensive, and we don't need more than 3 nodes.
We are using software RAID with SSD drives and our VMs...
We have a cluster configuration consisting of 2 nodes using unicast udpu (multicast is not available).
The cluster is up and running, I added and removed a test machine from HA.
The question is about the following output from "ha-manager status" on this cluster, with both nodes up and running...
I'm getting mad at trying to configure this.
After my last reinstall of the 3 nodes I did the following:
(sorry, for legacy reasons my nodes are called mynode0, mynode3 and mynode4)
- each machine has 2 Ethernet network connections, a public one and a "private" one, which is in fact on a...
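One way to keep corosync on the private network is to make each node's hostname resolve to its private address before creating the cluster, since pvecm resolves node names for the ring address by default. A sketch of /etc/hosts, with the 10.10.10.x addresses as placeholders for your private subnet:

```
# /etc/hosts on every node -- node names resolve to the private interface
10.10.10.10  mynode0
10.10.10.13  mynode3
10.10.10.14  mynode4
```

With this in place, cluster creation and joins pick the private addresses, and the public interfaces stay out of the corosync ring.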