1.5 Node Cluster -- How to get Proxmox VE 1.x master node behaviour?

fvdpol

Guest
Last weekend I upgraded my home Proxmox VE cluster from 1.9 to 2.1.

The cluster essentially consists of 2 nodes: one is running 24/7 (it used to be the master node in VE 1.9), and the second machine only runs when needed for specific jobs (mainly to keep the energy bill and noise level down :)


* Machine Proxmox1 -- running 24/7, running most of my CTs and VMs
* Machine Proxmox2 -- typically running only on the weekend, handling backup jobs to disk & tape; it also doubles as a backup for the 1st node.

The reason for having the two nodes in one cluster is that in case of scheduled outages on the Proxmox1 node I can simply migrate the needed CTs and VMs to the 2nd node. Hurray for virtualisation, since this allows me to minimize the downtime of the services in our home.
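For a KVM guest that migration is a single command on the node that currently runs it (just a sketch; the VM id 101 and the target node name are placeholders, and the -online switch for live migration may be spelled differently depending on the version):

# qm migrate 101 proxmox2 -online

The same can of course also be done from the web interface.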

After the migration to VE 2.1 I noticed that this 2-machine setup (of which one is only part-time operational, hence "1.5 Node Cluster") does not work as I expected:

When the Proxmox2 node is down I cannot make any changes, like creating a new CT, on the Proxmox1 node since "it cannot get a quorum"...
After searching & reading on the forum I understand that this is a logical consequence of the cluster setup; since there is no designated "master" node anymore, the cluster has to figure out by itself which nodes are still in the cluster and which are not.
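The quorum state itself is easy to inspect on the shell, e.g. with something like:

# pvecm status
# pvecm nodes

pvecm status shows the expected votes and whether the node is quorate; pvecm nodes lists the cluster members.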

The recommended solution would be to add a 3rd cluster member to ensure that the remaining nodes still have quorum. Since my main driver for shutting down the 2nd node is to save energy, it doesn't make sense for me to add a 3rd node.
As a quick & very dirty workaround I created a 3rd Proxmox cluster node as a VM, running on the (main) Proxmox1 node (joined to the existing cluster as sketched after the list below).

* Machine Proxmox1
* Machine Proxmox2
* Machine Proxmox1V (VM running on Proxmox1)
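Joining the extra VM node is nothing special; on the new Proxmox1V VM it is just (a sketch; the IP address is only a placeholder for the Proxmox1 node's address):

# pvecm add 192.168.1.10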

This way I can switch off the Proxmox2 machine and still manage my Proxmox cluster in the way I used to.

Although this ugly hack gets the job done, I really don't like a setup where I'm running a 'fake' Proxmox cluster member as a VM just to keep the system happy.

What would be the most elegant solution to get the "Proxmox VE 1.x master node behaviour" back in my situation?
Instead of the fake node I'd rather manually designate one of the nodes as Cluster Master and bypass/override the whole quorum system.

Thanks,
Frank.
 
After shutting down the second node, you can run

# pvecm expected 1

on the other node. That way you get quorum back.
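To verify that the remaining node is quorate again, pvecm status can be checked afterwards (a sketch; run on the node that stays up):

# pvecm expected 1
# pvecm status

As far as I know this only adjusts the runtime expected-votes value, so it has to be repeated if the remaining node or its cluster stack is restarted.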
 
