3-node Cluster with independent nodes

superwinni2

Hello

I would like to create a 3-node cluster, but all nodes need to be independent.
I don't always need all 3 nodes, and the important VMs run on node1.
Node2 and node3 only run when I need them.

I don't need HA, but I want to move VMs from node1 to node2 or node3 and vice versa.

I already have a two-node cluster which works fine. Now I'm looking for a way to add a third node.
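(For the actual join I would again use pvecm, like I did for node2 — roughly like this on node3, assuming node3 can reach node1 at 10.10.1.10:)

Code:
# run on the new node, pointing at an existing cluster member
root@node3:~# pvecm add 10.10.1.10
# afterwards, verify the membership
root@node3:~# pvecm status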

My corosync.conf looks like this at the moment:
Code:
root@node1:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.1.11
  }
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.1.10
  }
}

quorum {
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 0
}

totem {
  cluster_name: Cluster
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

Can anybody tell me what corosync.conf needs to look like when I add node3 to the cluster?

I'm thinking of something like this:
Code:
root@node1:~# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.1.12
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.1.11
  }
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.1.10
  }
}

quorum {
  provider: corosync_votequorum
  expected_votes: 1
  wait_for_all: 0
}

totem {
  cluster_name: Cluster
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

Or is there any better solution?

Thanks and Greetings
 
Hello,

I have been running this setup for a long time: I have a 3-node cluster, and most of the time only one node is up. I start the other nodes when the load requires it or for the periodic ZFS replication to happen.

The main idea is to give more votes to the main node:

Code:
    nodeid: 1
    quorum_votes: 4

    nodeid: 2
    quorum_votes: 1

    nodeid: 3
    quorum_votes: 1
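With those votes the quorum math works out like this (just the standard votequorum calculation, floor(total/2) + 1):

Code:
total votes   = 4 + 1 + 1 = 6
quorum        = floor(6/2) + 1 = 4
node1 alone   = 4 votes  -> quorate
node2 + node3 = 2 votes  -> not quorate

So node1 can run alone, but node2 and node3 can never reach quorum without it.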

Don't forget: when you make changes to corosync, have all nodes up and increment the config_version number in order to force the update on all nodes.
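The way I do the edit myself is on a copy, so a half-finished file never goes live — a rough sketch (the file name under /root is just my habit):

Code:
root@node1:~# cp /etc/pve/corosync.conf /root/corosync.conf.new
root@node1:~# nano /root/corosync.conf.new   # adjust quorum_votes, bump config_version
root@node1:~# cp /root/corosync.conf.new /etc/pve/corosync.conf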

Important! This setup does not offer you redundancy; it is OK for non-production environments only.

Regards,
Rares
 
Hi
Thanks for the info!

And what would happen if only node2 or only node3 (or both) are up?
(For example, if node1 crashes?)

If I understand everything correctly, node2 and node3 still won't have quorum, even if both of them are up?
 
If I understand everything correctly, node2 and node3 still won't have quorum, even if both of them are up?

This is correct. The question is why that happened. If you just want to power down node1 for maintenance, then migrate the VMs first and edit corosync to give more votes to one of the other nodes.
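For the migration itself, something like this should do (VM ID 100 is only an example; --online is needed if the VM is running):

Code:
root@node1:~# qm migrate 100 node2 --online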

If it's because of a failure, then you can run `pvecm expected 1` and start recovering the VMs from snapshots.
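Roughly like this (the VM ID is only an example; check pvecm status first to confirm you really lost quorum):

Code:
root@node2:~# pvecm status       # should show Quorate: No
root@node2:~# pvecm expected 1   # let the surviving node become quorate
root@node2:~# qm start 100       # then start / restore the VMs you need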

Rares
 
