I'm trying to choose between these two solutions: VMs on NAS-HA, or VMs on CDA (cloud disk array) or Ceph.
We actually have some clusters running on NAS-HA only, and they run fine, but occasionally (once or twice per year) there is a network failure lasting a few seconds, the VMs lose their disks, and we must...
I'm trying to set up a virtual Sophos as a firewall for my VMs in a private VLAN, and also to give remote users/sites access using a VPN.
eno1 -> vmbr0 (public IP, used for management on the OVH infrastructure)
eno2 (connected to the vRack service at OVH)
vmbr1 -> eno2
vmbr2 -> eno2.100 (private LAN...
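For reference, a minimal `/etc/network/interfaces` sketch matching that layout (the addresses and the VLAN tag 100 here are placeholders; adjust them to your actual OVH public IP and vRack ranges):

```
auto eno1
iface eno1 inet manual

# Public bridge used for management (example address)
auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# Untagged bridge on the vRack interface
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

# Tagged VLAN 100 on the vRack: private LAN bridge for the Sophos VM
auto vmbr2
iface vmbr2 inet manual
        bridge-ports eno2.100
        bridge-stp off
        bridge-fd 0
```

The Sophos VM would then get one NIC on vmbr0 (or vmbr1) for its WAN side and one on vmbr2 for the private LAN.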
I must remove an old node which is the master for HA.
How do I transfer this role to another node?
Do I simply stop the old node and remove it from the cluster, and the system selects a new node for the master role?
Or, as I have read on the forum, should I use the following?
force a node to become master: pveca -m...
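For what it's worth, `pveca` dates from very old PVE releases; on current versions there is no fixed master to transfer, as corosync elects one automatically. A sketch of the usual removal procedure with today's tooling (the node name `oldnode` is a placeholder), run from a remaining node after the old one has been powered off:

```
# Check quorum and current membership first
pvecm status

# Permanently remove the powered-off node from the cluster
pvecm delnode oldnode
```

After the node is gone, the remaining nodes take over any cluster-wide roles on their own.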
I also had the same issue.
All nodes are listed in /etc/hosts with the correct IPs (in my case, the public IP with Let's Encrypt certificates, and a private IP for the quorum in a dedicated VLAN).
Using the short name to join the cluster failed;
using the FQDN resolved the issue, and my new node is now part of the cluster.
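As an illustration, assuming example names and addresses, an /etc/hosts entry of this shape on every node is what worked for me:

```
# Public IP resolves the FQDN (matches the Let's Encrypt certificate);
# the private VLAN IP resolves the name used for the corosync/quorum link
203.0.113.11   node1.example.com
10.0.100.11    node1-corosync
```

The join then has to use `node1.example.com`, not the bare `node1`.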
Hi, I have the same issue on a fresh new server while trying to upgrade from V6 to V7 (the V6 image template from OVH runs fine, but when I upgrade to V7 I also encounter this error).
Where in IPMI do you change the boot order?
I read some posts, and it seems that adding a new V6 node to a cluster is not possible.
In a post:
And also: "We only really support PVE 5 and 6 nodes coexisting during the upgrade of a whole cluster, and then..."
Thanks for your response.
Corosync was successfully updated to V3 and its status is fine.
I'll try to add a new server (proxmox-ve: 6.4-1, running kernel 5.4.157-1-pve) to the cluster running proxmox-ve: 5.4-2 (running kernel 4.15.18-26-pve).
In my case I have a public IP on one interface with the...
Hi, I know it's late, but I have a cluster with 3 nodes to upgrade from the last v5 version to v6.
I'm following the tutorial here: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#In-place_upgrade
Do I have to update all packages on the v5 version first,
and then upgrade corosync on each node...
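If it helps, the order from that wiki page can be sketched roughly as below (the repository line is the one documented there; run the `pve5to6` checklist script between steps):

```
# 1. Bring every v5 node up to the latest 5.4 packages
apt update && apt dist-upgrade

# 2. Upgrade corosync 2 -> 3 on ALL nodes before touching PVE itself
echo "deb http://download.proxmox.com/debian/corosync-3/ stretch main" \
    > /etc/apt/sources.list.d/corosync3.list
apt update && apt dist-upgrade

# 3. Then, node by node: switch sources from stretch to buster and upgrade to PVE 6
sed -i 's/stretch/buster/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update && apt dist-upgrade
```

The key point is that the corosync 3 upgrade must be completed cluster-wide before any node moves to PVE 6.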
In the past I used qnetd as a quorum device when I was running only 2 nodes.
For 3 years we have had 3 nodes configured with corosync for quorum, and I regularly add or remove nodes to upgrade hardware and keep up to date for performance.
To verify my config and also prepare the migration from v5 to v6...
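For the 2-node case, a sketch of the QDevice setup I used (the address 10.0.100.50 of the external qnetd host is just an example):

```
# On the external quorum host (any small Debian machine)
apt install corosync-qnetd

# On each cluster node
apt install corosync-qdevice

# From one cluster node: register the QDevice
pvecm qdevice setup 10.0.100.50

# Verify the extra vote appears
pvecm status
```

With the QDevice vote, a 2-node cluster stays quorate when one node is down.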
Thanks, it seems that this was the issue. After the fifth reboot of all nodes I removed the HA configuration, and indeed no new reboots occurred.
But why does fencing of non-quorate nodes reboot all the nodes?
Now I'm also investigating with the provider why the loss of, or degraded, connectivity occurs...