Shrinking cluster size overnight

crembz

Member
May 8, 2023
I've run pve for over a year and have become quite familiar with it. I do feel that the quorum and corosync setup is rather rigid though.

I'm rebuilding my home network and lab at the moment hoping to reduce power usage.

I have a total of 7 nodes spun up during the day, and want to shut all but one off during the night. The largest power draw is by far the single Threadripper-based virtual NAS node. Of the remainder, there are two larger boxes and the rest are USFF machines.

The one node I want to keep on overnight hosts my firewall, vpn, network controller and dns.

Is there any way that I can have 6 nodes shut down and keep the firewall node running? Manually running pvecm expected 1 is not an option.
 
PVE is not designed for that and it sounds like the whole cluster won't function when that single node is down. Maybe give it 8 votes (instead of the usual 1), so it will always have the majority? Or maybe remove that node from the cluster and make it a separate (newly installed) PVE?
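For reference, the per-node vote count is set in /etc/pve/corosync.conf. A sketch of the relevant nodelist entry (the node name and address below are placeholders, and you would bump config_version in the totem section when editing):

```
nodelist {
  node {
    name: firewall-node      # placeholder name
    nodeid: 1
    quorum_votes: 8          # instead of the default 1
    ring0_addr: 192.0.2.10   # placeholder address
  }
  # ... remaining six nodes keep quorum_votes: 1
}
```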
 
So in the case of changing the number of votes for that node, my understanding is you still need a majority (more than half of the total votes) for quorum.

In my case of 7 nodes, that one node would need 4 votes. But if it ever goes down, wouldn't the entire cluster come down due to a failed quorum?
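Working through the numbers helps here. This is an illustrative sketch (not Proxmox code), assuming corosync's usual majority rule of quorum = floor(total_votes / 2) + 1:

```python
def quorum(total_votes: int) -> int:
    """Votes required for quorum: a strict majority of all configured votes."""
    return total_votes // 2 + 1

def is_quorate(votes_present: int, total_votes: int) -> bool:
    return votes_present >= quorum(total_votes)

# 7 nodes, one vote each: the lone firewall node can never be quorate.
print(is_quorate(1, 7))        # False

# Give the firewall node 4 votes (the other six keep 1 each, total = 10):
# alone it has 4 of the 6 required, so 4 votes is not enough.
print(is_quorate(4, 4 + 6))    # False

# It needs at least 7 votes (total = 13, quorum = 7) to stand alone ...
print(is_quorate(7, 7 + 6))    # True

# ... but then the other six nodes (6 votes) lose quorum whenever it is down.
print(is_quorate(6, 7 + 6))    # False
```

So under that rule, 4 votes would not be enough for the node to stand alone, and any vote count that is enough means the rest of the cluster loses quorum whenever that node is off.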
In my case, the "master node" hosted the VM disks (exposed via NFS), so without it the cluster would be down anyway.
But yes, this setup is not very compatible with the spirit of a multi-master quorum.
 
Hrm ... yeah, I have the same dependency on the NAS node, but I was planning to use local ZFS replication for the critical VMs I want to run in the single-node (cluster shut down) state.

I tried XCP-ng and although the clustering seems more fluid and flexible, it does seem to have a hard dependency on the NAS as the 'witness', which makes it a SPOF for the entire cluster. The benefit, though, is that XO can manage multiple pools (clusters) and cross-migrate if I kept the always-on node separate.

If I just leave the small node standalone in PVE, I have two management domains with no ability to migrate for maintenance other than a backup/restore.
 
There is offline cross-cluster migration available via the CLI in that case: qm remote-migrate.
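For what it's worth, the rough shape of the invocation looks like the sketch below. The VM IDs, host, bridge, storage, API token and fingerprint are all placeholders, and the feature is marked experimental, so check man qm on your version before relying on it:

```
# Migrate VM 100 to VM ID 100 on a remote cluster (all values are placeholders)
qm remote-migrate 100 100 \
  'host=192.0.2.50,apitoken=PVEAPIToken=root@pam!mytoken=<secret-uuid>,fingerprint=<cert-fp>' \
  --target-bridge vmbr0 \
  --target-storage local-zfs
```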
 
I tried looking it up but couldn't find any official documentation. Have you got a link?
 
I was thinking, would it be possible to run multiple QDevices? If I ran enough of them, would that not let the cluster shrink down to one node?
 
