cold standby solutions?

hkais

Member
Feb 3, 2020
I am wondering how I could set up a cold standby solution with Proxmox?

In VMware I took a snapshot, rsynced the VM to storage on a different server, and the copy was then in a defined snapshot state. Depending on how many hours of work could acceptably be lost, this procedure was repeated regularly.
Basically it was a bit less than a real backup; its only goal was to get the VM back online quickly if the corresponding node failed.

How can I achieve this with Proxmox as well?
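For a rough equivalent of the VMware snapshot+rsync workflow, one option (a sketch only; the VMID, paths, and hostname below are examples, not from the original post) is to use vzdump in snapshot mode and rsync the resulting archive to the standby node:

```shell
# Take a consistent snapshot-mode backup of VM 100 (example VMID)
# and write the archive to a local dump directory.
vzdump 100 --mode snapshot --compress zstd --dumpdir /var/lib/vz/dump

# Copy the newest archive to the standby node (hostname is an example).
rsync -av /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst standby-node:/var/lib/vz/dump/

# On the standby node, after the primary fails, restore and start:
#   qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs
#   qm start 100
```

Like the VMware procedure, this is repeated on a schedule, so the recovery point is only as fresh as the last run.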

I am just trying to understand the best practices with Proxmox across all its features for real business environments/datacenters.

Ideally I would like to have:
- VMs with HA and "nearly" zero downtime, which are automatically switched from node to node -- but which filesystem/storage to choose? Ceph, ZFS, ...?
- VMs which are quickly available again with manual steps (e.g. an admin starts the above-mentioned rsynced VMs on a different node)
- VMs which need to be reconstructed from backups (e.g. test VMs or similar totally uncritical VMs)
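For the second tier above, Proxmox's built-in ZFS storage replication can replace the manual rsync step. A minimal sketch (VMID, target node name, and schedule are assumptions for illustration; this requires ZFS-backed VM storage):

```shell
# Replicate VM 100's disks to node "pve2" every 15 minutes.
# Job ID format is <vmid>-<job-number>.
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Inspect configured replication jobs and their last-sync status.
pvesr list
pvesr status
```

If the source node fails, the VM can be started manually on the target node from the last replicated state, which matches the "quickly available again with manual steps" tier.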

Are there any good readings about this?
Any good best practices for this?

We used SANs in the past, but want to get rid of them.
 
Thank you. I am just wondering: does the ZFS setup have to be done in a specific way?
E.g. do the nodes have to be identical in their storage configuration?
Or can the setup be asymmetric, e.g. very fast NVMe-based SSD nodes (thus typically small disks) plus one big spindle-based storage node for the "cold standby"?
 
> thank you, just wondering if the zfs setup has to be done in a specific way?
> e.g. do the nodes have to be identical from the storage configuration?
> or is it possible to be non symmetric like having a very fast NVMe based SSD nodes (thus typically small disks) and having e.g. 1 big spindle based storage for "cold standby"?
You can do whatever you want.
 
> thank you, just wondering if the zfs setup has to be done in a specific way?
No. The recommendation is to use mirrored (RAID10) vdevs. (I would never drop redundancy.)
> e.g. do the nodes have to be identical from the storage configuration?
The storage pool for the VMs needs to have the same name on all cluster members; this is required for replication to work flawlessly. The underlying topology (mirrored, RAIDZ1, ...) may differ and does not matter in this regard.
> or is it possible to be non symmetric like having a very fast NVMe based SSD nodes (thus typically small disks) and having e.g. 1 big spindle based storage for "cold standby"?
Yes.
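To illustrate the point about pool names (a sketch; the pool and device names are assumptions for illustration): the pool name must match across nodes, while the vdev layout underneath may differ per node.

```shell
# Fast node: striped mirrors (RAID10-style) over NVMe devices.
zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1

# Standby node: one big RAIDZ1 of spindles -- a different topology,
# but the same pool name "tank", so replication targets line up.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```

In the Proxmox storage configuration, both nodes would then expose a ZFS storage entry backed by the same pool name.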

Best regards
 
I am just wondering whether these details can be found in the documentation somewhere? If so, where have I missed them?
 
