4 Node Cluster Quorum - corosync options

oodissimo
Active Member
Nov 30, 2017
I am looking at building a 4-node cluster spread over 2 physical locations connected by fiber. Each location will have two hosts, so there is always a node to migrate to manually for maintenance and for quick recovery from hardware failure. This requires replication between the node pairs at each location. Due to the need for some USB pass-through devices, HA isn't practical.

The obvious options are a single 4-node cluster (with the benefit of a single pane of glass) or two independently managed 2-node clusters. From a sizing perspective, a single node could handle everything if it weren't for the USB devices that are specific to each location and cannot be moved.

I've been looking at various corosync configuration options, but I am not sure which of them are supported or recommended. I've come up with the following:
  • A QDevice for each 2-node cluster. Is a QDevice needed for a 4-node cluster?
  • A two-node cluster configuration with two_node and wait_for_all to prevent split-brain during startup, while still allowing one node in a two-node cluster to be shut down for maintenance:
    Code:
    quorum {
        provider: corosync_votequorum
        two_node: 1
        wait_for_all: 1
    }
  • Use of last_man_standing to allow a single node to remain operational and manageable in a 4-node cluster (see the sketch after this list)
  • Use of quorum_votes to allow a single node in a 2-node or 4-node cluster to remain operational
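For the last_man_standing option, a minimal quorum section might look like the sketch below. This is pieced together from the votequorum(5) man page rather than from a tested setup: the window is in milliseconds, and auto_tie_breaker is apparently needed on top of LMS if the cluster is to step down from 2 nodes to 1.
Code:
quorum {
    provider: corosync_votequorum
    last_man_standing: 1
    # wait this long (ms) before recalculating expected_votes; 10000 is the default
    last_man_standing_window: 10000
    # needed to drop below 2 votes; the partition with the lowest node ID wins ties
    auto_tie_breaker: 1
}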
There will be times when manual migration or automated replication is needed between any of the 4 nodes, regardless of whether it's a single 4-node cluster or two 2-node clusters. I am also looking at running a single Proxmox Backup Server and have yet to determine whether one backup server with a single ZFS pool can be used for more than one cluster.
 
I found this thread, which is helpful in case I decide to do two clusters... I see the command is available in the qm help output.

Code:
USAGE: qm remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <string> --target-storage <string> [OPTIONS]

  Migrate virtual machine to a remote cluster. Creates a new migration
  task. EXPERIMENTAL feature!
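Based on that usage line, an invocation should look roughly like this (every value below is a placeholder; the API token, certificate fingerprint, and target details have to come from the target cluster, and --online would only apply to a running guest):
Code:
# migrate VM 100 to a remote cluster, keeping the same VMID there
qm remote-migrate 100 100 \
  'host=pve-siteb.example.com,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<target-cert-fingerprint>' \
  --target-bridge vmbr0 \
  --target-storage local-zfs \
  --online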
 
I would opt for two two-node clusters with a QDevice each, and no special corosync configuration otherwise. You can use backup/restore or the remote-migration preview for transferring guests from one location to another. A single PBS can be shared: given your configuration, it would make most sense to have a single datastore with at least two namespaces, one NS for each location, but two ZFS datasets with a datastore each would also work if you absolutely need separate quotas and want to take the deduplication hit for it ;)
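In rough outline, the QDevice setup per cluster looks like this (the address is a placeholder; the QNetd host can be any small machine outside the clusters, and one QNetd instance can serve both clusters):
Code:
# on the external quorum host
apt install corosync-qnetd

# on each cluster node
apt install corosync-qdevice

# once, from one node of each cluster
pvecm qdevice setup 192.0.2.10

And sharing one PBS datastore via namespaces just means pointing each cluster's storage entry at its own namespace, e.g. in /etc/pve/storage.cfg (all names and the fingerprint below are placeholders, the other cluster's entry would use namespace site-b, and the password is stored separately when the storage is added):
Code:
pbs: backup
        datastore main
        server pbs.example.com
        username backup@pbs
        fingerprint <pbs-cert-fingerprint>
        namespace site-a
        content backup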
 
Thank you!

Looking further at my design, I might even be able to manage with 4 nodes and a QDevice (if that makes sense). The QDevice brings the vote count to 5 and quorum to 3, so I can lose any 2 nodes, which will probably be acceptable.
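If I understand the docs correctly, the quorum section that pvecm qdevice setup generates should look roughly like this (a sketch, not copied from a real cluster; the host is a placeholder, and ffsplit is the algorithm used for even node counts):
Code:
quorum {
    provider: corosync_votequorum
    device {
        model: net
        # 4 node votes + 1 QDevice vote = 5 total, so quorum = 3
        votes: 1
        net {
            algorithm: ffsplit
            host: 192.0.2.10
            tls: on
        }
    }
}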
 
You can do that, but be aware that corosync is really latency sensitive, so it really depends on the link between your locations. If the only upsides are the common view and rare migrations (with local storage!), I'd rather do two clusters (a common view for multiple clusters is in the works, and that will include migration from one cluster to another, building on the existing experimental remote_migrate feature).
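A quick way to sanity-check the link before stretching a cluster over it (the hostname is a placeholder; corosync wants consistently LAN-like round-trip times in the low single-digit milliseconds):
Code:
# measure round-trip latency over the fiber link, 100 probes
ping -c 100 -i 0.2 pve-siteb.example.com

# once a cluster exists, show per-link status as corosync sees it
corosync-cfgtool -s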
 
