Currently, the two DCs are connected to a single 10G switch and share the same Class C IP range. I have configured HA within a cluster, and now I am asking how to configure HA between the two DCs for DR when one DC fails. Is there a way to configure this? Does it exist at all? I know that vCenter supports HA and DR configuration across two DCs.

There is no specific support for a two-data-center configuration. At a high level, it is the same as a single-DC configuration, provided you have a single network space across the DCs and very low latency. The inter-DC link would also need to be very reliable.
Depending on how you split the nodes in this stretched cluster, you may also want to have a quorum device in the 3rd datacenter.
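For the stretched-cluster variant, that third-site vote is typically a Corosync QDevice. A minimal sketch, assuming a small Debian host at the third site is reachable from all nodes (the 10.10.30.5 address is a placeholder):

    # On the third-site host that will act as the external vote arbiter:
    apt install corosync-qnetd

    # On every Proxmox node in the stretched cluster:
    apt install corosync-qdevice

    # From any one cluster node, register the QDevice:
    pvecm qdevice setup 10.10.30.5

    # Confirm the extra vote and quorum state:
    pvecm status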
Digging in deeper, there are generally two things that you can do with two DCs: High Availability (HA) and Disaster Recovery (DR).
Generally, HA means a faster, more transparent failover. However, you also need to have your data available in both DCs. Implementation of your Data HA is separate from VM HA.
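Within a single (stretched) cluster, VM HA placement preferences can be expressed with HA groups. A rough sketch, assuming placeholder node names and that storage availability in both DCs is solved separately:

    # HA group that prefers the DC1 nodes, with a DC2 node as lower-priority fallback
    ha-manager groupadd prefer-dc1 --nodes "pve-dc1-a:2,pve-dc1-b:2,pve-dc2-a:1"

    # Put a VM under HA management inside that group
    ha-manager add vm:100 --group prefer-dc1 --state started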
DR is usually less transparent. You would also NOT want to have a single cluster for DR. Your goal here, usually, is to avoid dependencies across the link.
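One way to keep the DR side loosely coupled is to ship data across as scheduled backups to storage that is mirrored to the other DC, rather than relying on any live dependency over the link. A sketch, assuming a backup storage named nfs-backup (placeholder) that the array replicates to the remote site:

    # Snapshot-mode backups of selected VMs to the mirrored NFS backup storage
    vzdump 100 101 102 --storage nfs-backup --mode snapshot --compress zstd

The same job would normally be defined as a scheduled backup job in the GUI rather than run by hand.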
There are many nuances and "gotchas" for proper HA/DR setup.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Regarding the network: the two DCs sit on a 40G switch that is used exclusively for DR, and they share the same Class C range, so can I simply configure this the same way as the existing cluster HA? Currently, the storage for all nodes is provided by external NFS, and the external storage is also replicated through mutual mirroring.
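For reference, mirrored NFS exports like the ones described above would be registered on each cluster independently, for example (storage names, IPs and export paths are placeholders):

    # On the DC1 cluster
    pvesm add nfs nfs-dc1 --server 10.10.10.50 --export /export/pve --content images,rootdir

    # On the DC2 cluster, pointing at its own mirrored copy of the export
    pvesm add nfs nfs-dc2 --server 10.10.20.50 --export /export/pve --content images,rootdir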
You addressed only one part of my reply: network accessibility and speed/latency. It's impossible to give you a "yes" or "no" answer, as there are still many outstanding questions:
- node layout across DCs
- storage/data availability to facilitate the HA
- split-brain avoidance in case of a severed link (see the quorum check sketch below)
Possibly more. Note that these concerns are not Proxmox-specific. They are part of general infrastructure planning for implementing HA systems across Data Centers. If this is a production environment, you may want to reach out to a Proxmox Partner or Consultant.
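For the split-brain point above, the place to start is simply checking how votes and quorum are currently laid out; this is a sanity check, not a design:

    # Expected votes, total votes and current quorum state
    pvecm status

    # Quorum settings and corosync link addresses
    cat /etc/pve/corosync.conf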
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
It consists of a master DC and a slave DC, and each DC is a single cluster with 3 nodes per DC.

Can you clarify whether you are planning for:
- stretching a single Proxmox cluster across the two DCs? (see the topology sketch after this post)
- two Proxmox clusters, one per DC?
- transparent HA, i.e. live VM migration/failover across DCs?
- DR failover with a non-zero RPO and RTO?
- is the NFS mirror synchronous or asynchronous?
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
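To make the first two options concrete: the difference is only in how the clusters are formed; everything else (networking, storage, quorum) still has to be designed around it. A sketch with placeholder names and addresses:

    # Option A: one stretched cluster across both DCs
    pvecm create dc-stretch        # run once, on the first node
    pvecm add 10.10.10.11          # run on each additional node, pointing at an existing member

    # Option B: two independent clusters, one per DC
    pvecm create dc1               # run on a DC1 node; DC1 nodes join this cluster
    pvecm create dc2               # run on a DC2 node; DC2 nodes join this cluster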
To clarify the requirements:
- configuration for inter-DC failover
- a DR failover with non-zero RPO and RTO is acceptable
- if the single cluster in the master DC fails, the slave DC should take over as the master

Based on your clarification, there is no single-click or native wizard solution in PVE for what you want.
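With two independent clusters and mirrored NFS, the DR failover itself would most likely be a manual or scripted restore on the slave DC. A sketch, assuming the backup archives are visible on a placeholder storage named nfs-dc2 (archive name, VMID and target storage are placeholders too):

    # List the mirrored backup archives visible to the slave DC's cluster
    pvesm list nfs-dc2 --content backup

    # Restore a VM from a chosen archive and start it
    qmrestore /mnt/pve/nfs-dc2/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 100 --storage local-lvm
    qm start 100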