fvdpol
Last weekend I upgraded my home Proxmox VE cluster from 1.9 to 2.1.
The cluster essentially consists of 2 nodes, of which one is always running (it used to be the master node in VE 1.9) and the second is only running when needed for specific jobs (mainly to keep the energy bill and noise level down):
* Machine Proxmox1 -- running 24/7, running most of my CTs and VMs
* Machine Proxmox2 -- typically running only on weekends, running backup jobs to disk & tape; also doubles as a backup for the 1st node.
The reason for having the two nodes in one cluster is that in case of scheduled outages on the Proxmox1 node I can simply migrate the needed CTs and VMs to the 2nd node. Hurray for virtualisation, since this allows me to minimize the downtime for the services in our home.
After migration to VE 2.1 I noticed that this 2-machine setup (of which 1 part-time operational, hence "1.5 Node Cluster") does not work for me as expected:
When the Proxmox2 node is down I cannot make any changes, like creating a new CT, on the Proxmox1 node since "it cannot get a quorum"...
After searching & reading on the forum I understand that this is a logical consequence of the cluster setup; since there is no designated "master" node anymore, the cluster has to figure out for itself which nodes are still in the cluster and which are not.
The recommended solution would be to add a 3rd cluster member to ensure that the remaining nodes still have a quorum. Since my main reason for shutting down the 2nd node is to save energy, it doesn't make sense for me to add a 3rd node.
As a quick & very dirty workaround I created a 3rd Proxmox cluster node as a VM, running on the (main) Proxmox1 node:
* Machine Proxmox1
* Machine Proxmox2
* Machine Proxmox1V (VM running on Proxmox1)
This way I can switch off the Proxmox2 machine and still manage my Proxmox cluster in the way I used to.
Although this ugly hack gets the job done, I really don't like a setup where I'm running a 'fake' Proxmox cluster member as a VM just to keep the system happy.
What would be the most elegant solution to get the "Proxmox VE 1.x master node behaviour" back in my situation?
Instead of the fake node I'd rather manually designate one of the nodes as Cluster Master and bypass/override the whole quorum system.
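For what it's worth, one approach I've seen hinted at (a sketch only, assuming the stock cman stack that VE 2.x uses underneath) would be to tell the cluster manager that a single vote is enough: transiently with `pvecm expected 1` on the surviving node, or persistently by enabling two-node mode in /etc/pve/cluster.conf, roughly like this:

```xml
<?xml version="1.0"?>
<!-- Hypothetical example: cluster name, config_version and node
     names are placeholders; adjust them to match your own setup. -->
<cluster name="homecluster" config_version="2">
  <!-- two_node="1" with expected_votes="1" tells cman that one
       surviving node may keep quorum on its own. -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="proxmox1" votes="1" nodeid="1"/>
    <clusternode name="proxmox2" votes="1" nodeid="2"/>
  </clusternodes>
</cluster>
```

I realize two-node mode trades away split-brain protection, so presumably it is only safe here because the second node really is powered off rather than merely unreachable. Is something along these lines the supported way to do it, or is there a better option?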
Thanks,
Frank.