Is HA configuration between data centers possible in Proxmox 8.x?

hwseon

New Member
May 20, 2024
Is HA configuration supported between Proxmox data centers? I would like to configure two data centers so that even if the cluster in one data center is broken, service can still be provided from the other data center.
 
There is no specific support for a two-data-center configuration. At a high level, it is the same as a single-DC configuration, provided you have a single network space across the DCs, very low latency, and a very reliable connection between them.
Depending on how you split the nodes in this stretched cluster, you may also want to have a quorum device in a third datacenter.
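
As a rough way to sanity-check the latency side of that requirement, here is a minimal probe sketch in plain Python (nothing Proxmox-specific; the peer-DC node IPs and the use of port 22 are placeholder assumptions). It only times TCP handshakes, so treat it as a first look rather than a corosync qualification test: the link also has to stay low-latency and reliable under load.

Code:
#!/usr/bin/env python3
"""Rough inter-DC latency probe (illustrative sketch only).

Times TCP handshakes to nodes in the remote DC as a cheap stand-in for the
sustained low-latency link a stretched cluster expects. Node IPs below are
hypothetical placeholders.
"""
import socket
import statistics
import time

REMOTE_DC_NODES = ["10.0.2.11", "10.0.2.12", "10.0.2.13"]  # hypothetical peer-DC node IPs
PORT = 22          # any TCP port that is open on the nodes, e.g. SSH
SAMPLES = 20       # handshakes per node

for host in REMOTE_DC_NODES:
    rtts = []
    for _ in range(SAMPLES):
        start = time.monotonic()
        try:
            with socket.create_connection((host, PORT), timeout=2):
                pass
            rtts.append((time.monotonic() - start) * 1000.0)  # milliseconds
        except OSError:
            pass  # a failed handshake simply shows up as a missing sample
        time.sleep(0.1)
    if rtts:
        print(f"{host}: median {statistics.median(rtts):.2f} ms, "
              f"max {max(rtts):.2f} ms, {SAMPLES - len(rtts)} failures")
    else:
        print(f"{host}: unreachable on port {PORT}")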

Digging in deeper, there are generally two things that you can do with two DCs: High Availability (HA) and Disaster Recovery (DR).
Generally, HA means a faster, more transparent failover. However, you also need to have your data available in both DCs. Implementation of your Data HA is separate from VM HA.
DR is usually less transparent. You would also NOT want to have a single cluster for DR. Your goal here, usually, is to avoid dependencies across the link.

There are many nuances and "gotchas" for proper HA/DR setup.



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Currently, the two DCs are connected through a single 10G switch and share the same IP space down to the class C (/24) network. I have already configured HA within a cluster, and what I am asking is how to configure HA between DCs so that DR is possible when one DC fails. Is there a way to configure this? Does it exist? I know that vCenter supports HA and DR configuration across two DCs.
 
On the network side, the two DCs are on a 40G switch that is used exclusively for DR. Since they share the same class C IP range, can I just configure it the same as the existing cluster HA?
You addressed only one part of my reply: network accessibility and speed/latency.

It's impossible to give you a "yes" or "no" answer, as there are still many outstanding questions:
- node layout across DCs
- storage/data availability to facilitate the HA
- split-brain avoidance in case of a severed link (see the vote-count sketch below)

Possibly more. Note that these concerns are not Proxmox-specific. They are part of general infrastructure planning for implementing HA systems across Data Centers. If this is a production environment, you may want to reach out to a Proxmox Partner or Consultant.
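
To make the split-brain point concrete: corosync-style clustering only lets the partition that holds a strict majority of the configured votes keep running. The minimal sketch below is plain Python vote arithmetic (no Proxmox APIs), assuming one vote per node and an optional extra vote from a QDevice in a third site; it shows why a 3+3 stretch loses quorum on both sides when the inter-DC link is cut, unless such a tie-breaker stays reachable from one side.

Code:
#!/usr/bin/env python3
"""Illustration of majority voting in a stretched two-DC cluster.

Shows why a 3 + 3 node split across two DCs loses quorum on BOTH sides when
the inter-DC link is cut, unless a tie-breaker vote (e.g. a QDevice in a
third site) remains reachable from one side.
"""

def has_quorum(votes_present: int, total_votes: int) -> bool:
    """Strict majority: more than half of all configured votes."""
    return votes_present > total_votes // 2

def show(label: str, votes_present: int, total_votes: int) -> None:
    state = "keeps quorum" if has_quorum(votes_present, total_votes) else "loses quorum"
    print(f"{label}: {votes_present}/{total_votes} votes -> {state}")

# 3 nodes per DC, no tie-breaker: 6 votes total, each side sees only its own 3.
show("DC-A after link cut, no QDevice", 3, 6)
show("DC-B after link cut, no QDevice", 3, 6)

# Same split, but a QDevice in a third site adds 1 vote (7 total) and is
# still reachable from DC-A only.
show("DC-A after link cut, QDevice reachable", 3 + 1, 7)
show("DC-B after link cut, QDevice unreachable", 3, 7)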


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Currently, the storage for all nodes is provided by external NFS, and the external storage systems are replicated to each other through mutual mirroring.
The network is also built with multiple redundant paths, with automatic switchover and recovery in case of a network disconnection.
 
Can you clarify whether you are planning for:
- stretching single Proxmox cluster across two DCs?
- two Proxmox clusters, one per DC?
- transparent HA, i.e. live VM migration/failover across DCs?
- DR failover with non-zero RPO and RTO?
- is the NFS mirror synchronous or asynchronous?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
It consists of a master DC and a slave DC, and each DC is a single cluster with 3 nodes per DC.

The goal is a configuration for inter-DC failover:

DR failover with non-zero RPO and RTO is acceptable.
If the master DC's cluster fails, the slave DC should take over and act as the master.
 
Based on your clarification: there is no single-click or native wizard solution in PVE for what you want.
Given that your storage is already replicated out of band of PVE, you need to create a mechanism to sync the VM configurations across the link (one possible sketch follows below).
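
As one illustration of what such a mechanism could look like, the sketch below pulls every VM's configuration from the primary cluster through the standard Proxmox VE API (GET /cluster/resources and GET /nodes/<node>/qemu/<vmid>/config) and dumps it to local files that you would then ship to the DR site. This is a sketch under assumptions, not a supported PVE feature: the API endpoint, token, and output directory are hypothetical placeholders, error handling and LXC containers are skipped, and the restore/registration step on the DR cluster is left out entirely. Simply rsyncing the config files under /etc/pve/qemu-server/ to an off-cluster location would be another option.

Code:
#!/usr/bin/env python3
"""Sketch: dump VM configs from the primary PVE cluster for DR purposes.

Uses the standard Proxmox VE HTTP API with an API token. Host, token, and
output path are hypothetical placeholders; TLS verification is disabled here
only to keep the sketch short.
"""
import json
import pathlib
import requests

PVE_HOST = "https://pve-dc-a.example.com:8006"          # hypothetical primary-cluster endpoint
TOKEN = "PVEAPIToken=root@pam!drsync=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # hypothetical token
OUT_DIR = pathlib.Path("/var/lib/dr-sync/vm-configs")   # hypothetical staging directory

session = requests.Session()
session.headers["Authorization"] = TOKEN
session.verify = False  # sketch only; use proper CA verification in practice

def api_get(path: str):
    """GET an API path and unwrap the standard {"data": ...} envelope."""
    r = session.get(f"{PVE_HOST}/api2/json{path}", timeout=10)
    r.raise_for_status()
    return r.json()["data"]

OUT_DIR.mkdir(parents=True, exist_ok=True)

# Every guest in the cluster, with the node it currently runs on.
for res in api_get("/cluster/resources?type=vm"):
    if res["type"] != "qemu":
        continue  # containers would need /nodes/<node>/lxc/<vmid>/config instead
    vmid, node = res["vmid"], res["node"]
    config = api_get(f"/nodes/{node}/qemu/{vmid}/config")
    (OUT_DIR / f"{vmid}.json").write_text(json.dumps(config, indent=2, sort_keys=True))
    print(f"saved config of VM {vmid} (node {node})")

# Shipping OUT_DIR to the DR cluster (e.g. rsync over the inter-DC link) and
# recreating the guests there from these dumps is intentionally left out.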


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
