Cold Standby


Jan 15, 2016
Hi all,

Firstly, a little background so you know where I am:

I used to run two physical CentOS servers at home as active/standby for a few personal projects. The active server ran mail, ownCloud, databases etc etc, and each night the active server sent a WoL packet to the standby, rsync'd everything across, then shut it back down afterwards. The theory was that if the active server had a hardware failure, I could turn on the standby server, change a few firewall rules at the gateway, and be up and running again.
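For context, that nightly cycle can be sketched in a few commands (hostnames, MAC address, and paths are placeholders, not my actual setup):

```shell
# Nightly active -> standby sync (cron on the active server).
wakeonlan aa:bb:cc:dd:ee:ff              # wake the standby box
sleep 120                                # give it time to finish booting
rsync -aHAX --delete /srv/ standby:/srv/ # mirror service data across
ssh standby poweroff                     # shut the standby back down
```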

The problem was that putting everything on one physical server was getting hard to manage, and if I ever wanted to play with a new config or software I was basically using prod as dev, so not ideal. Enter Proxmox.

I turned the standby server into a Proxmox node (node01), bought a third server and chucked a couple of ZFS mirrored disks in it and shared the mounts over NFS. Gradually I moved all services into their own little VMs/Containers. It works wonderfully (thanks Proxmox devs) so I then turned the previously active server into a spare Proxmox node (node02) and turned them both into a two node cluster so that if either failed I had a spare, just like it used to be when they were bare metal.

Each server has enough steam to run all VMs, so I was planning on having one node as a cold standby so that it isn't contributing to my already ugly power bill. Plus saving polar bears, or something.

With one server offline, if I try to make a change to anything it fails because there's no quorum. This isn't a big deal - I'll be making changes so infrequently that I don't mind having both nodes online when that happens. But here's the rub: backups seem to count as a change because of the locking mechanism. I back up the VM images each night to the node's local storage in case the NFS server has some catastrophic power supply failure and takes all the disks with it.
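To illustrate the failure (VMIDs and options here are examples, not my exact job): vzdump takes a per-VM cluster lock, which needs a writable /etc/pve, and /etc/pve goes read-only without quorum.

```shell
# Nightly local backup job -- fails with a lock/quorum error when the
# second node is powered off.
vzdump 100 101 102 --mode snapshot --compress lzo \
    --storage local --mailto admin@example.com
```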

I've done a lot of searching around the forum for a solution that doesn't basically involve removing the spare node from the pool, but I haven't found anything. I would like to keep it part of the pool, if possible, because it contains all the VM config, firewall rules for the containers, mounts etc etc that would just be a nuisance to reconfigure if I ever need to do a failover due to hardware failure or whatever.

I'm running 4.1, fully patched. What options do I have?

Thanks for reading,
how about not running a cluster?

We use pve-zsync and vzdump to back up VMs, and obnam and rsnapshot to back up data. Depending on the data, we back up anywhere from every 5 minutes to daily. vzdumps are just weekly.

Another backup method is to use the TurnKey LXC templates that are available in Proxmox. TurnKey backups make it very easy to restore an LXC to many different types of virtualization, so you are not locked in to a particular host or VM type.

I run an LXC for each business and home theater server. Data is on ZFS shared to the LXC systems.

There are multiple right ways to do the backups. I think ZFS-based backups are best for data between hosts.
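As a rough example of the pve-zsync approach (VMID, interval, pool name, and target IP are all placeholders):

```shell
# Create a recurring job that snapshots VM 100's ZFS dataset and sends
# it to another host, keeping the last 7 snapshots.
pve-zsync create --source 100 --dest 192.168.0.20:tank/backup \
    --name nightly --maxsnap 7 --verbose

# Run a one-off manual sync of the same source/destination.
pve-zsync sync --source 100 --dest 192.168.0.20:tank/backup --verbose
```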
Thanks for the reply. As I mentioned, it would be useful if I could keep the machines clustered to keep a mirror of the settings on both hosts. The issue is not really the type of backup; it's the fact that running a backup takes a cluster lock, which fails without quorum.

Is there a way to retain settings across hosts without clustering? Is there a way to export Proxmox settings to a config file like you can in FreeNAS which would mean that clustering wasn't necessary for this setup as I could just quickly restore the config to a new node instead?
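Lacking an official export, one crude workaround I'm considering (just a sketch, and restoring it would be manual): /etc/pve is a live view of the cluster config database, so copying it out at least preserves the VM configs, storage.cfg, and firewall rules in readable form.

```shell
# Snapshot the node's Proxmox config plus the host networking files.
tar czf /root/pve-config-$(hostname)-$(date +%F).tar.gz \
    /etc/pve /etc/network/interfaces /etc/hosts
```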
Would the information here (Re-installing a cluster node) be suitable to make a continuous backup of node01 and restore this to a new node (node02) should node01 fail?

If so, what are the implications of node01 and node02 having differing hardware (different CPU, RAM totals, network interfaces (physical MAC addresses))?
For the NICs, adapt /etc/udev/rules.d/70-persistent-net.rules to fit your /etc/network/interfaces. That's all.
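For illustration (the MAC address and IP below are placeholders), the udev rule pins the interface name to the new hardware's MAC, so the existing bridge config keeps working unchanged:

```
# /etc/udev/rules.d/70-persistent-net.rules -- use the standby host's
# real MAC address here.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth0"

# /etc/network/interfaces -- still references eth0, so nothing else
# needs to change.
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bridge_ports eth0
```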

Different CPU/RAM should not be a real problem, unless you have far too little RAM or you defined CPU args that the new host doesn't support.
If you use the default kvm64 CPU type, all will be fine.
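For example (illustrative excerpt, VMID 100 is a placeholder), the CPU type lives in the VM config, and with the default kvm64 the guest sees a generic CPU that any reasonably modern host can provide:

```
# /etc/pve/qemu-server/100.conf (excerpt)
cpu: kvm64
```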

Hi Ben,

I'm just in the process of building something similar to what you did. I'm inexperienced with Proxmox, so everything below is without guarantee:

To my understanding it should be possible to keep the two hosts clustered even if one of them is offline most of the time. Typically the folder /etc/pve becomes read-only when one of the two hosts goes offline, because the remaining host alone cannot achieve more than 50% of the votes. Hence you can't make any changes to the cluster anymore.
To address the quorum issue, you can configure the cluster to expect only 1 vote in a two-node cluster by following this guide ( ). After this change the cluster should behave normally (/etc/pve should remain writable) and re-sync after the cold-standby host comes back online.

Unfortunately the guide is in German, but essentially it says:

You can configure the cluster to accept a quorum of 1 vote in two ways:
1) Enter the following in a shell:
pvecm expected 1
2) Add the following to your cluster configuration:

<cman two_node="1" expected_votes="1" [...other existing config here ...] > </cman>

Hope this helps. I'd be happy to get some feedback on whether it works for you, because I'm currently planning to try this approach as well.


