scheduled incremental restore from PBS

Hello

I am trying to create a DR setup with a second Proxmox cluster. The idea is that all VMs from the primary cluster are regularly replicated to this secondary cluster and sit ready to boot on demand. The clusters have shared LVM storage on a multipath Fibre Channel SAN.

The first Proxmox cluster backs up to a remote PBS server, but a live restore of all VMs at the same time is not scalable on our setup; there are just too many VMs.

Is it possible to do a scheduled restore from PBS to the secondary Proxmox cluster, i.e. after the backup has run?

Preferably I would like to restore a VM and, if successful, delete the previous replica of that VM. Or even better: just restore the disks into the same VM as unused disks, swap them in, and remove the previous run's disks. Doing a full restore daily does take a lot of time, but it is a working solution.


But since the Proxmox Backup Server knows which blocks have changed from one backup to the next, is it possible to apply just the latest backup onto the replica VM on the secondary Proxmox cluster? A kind of reverse incremental restore? Perhaps a hook in the verify process could read, verify, and apply the restore to the replicated VM in one pass. That would bring the replica VM's disks up to date without having to do a full write of the whole disk.

These are just my ideas; I also welcome any other suggestions for how to do regular replication between clusters.

best regards
Ronny Aasen
 
Technically, such an incremental restore (or rather, applying the diff between two snapshots) would be possible, and rather trivial for disk/block-based backups.

The main problem is: who guarantees that the on-disk state actually matches the previously restored snapshot? If something or somebody modifies it (and this can happen by accident, e.g. by mounting a disk image, by a scan touching a directory/file-based volume, by a partial restore that got interrupted, or ...), applying a diff on top would potentially corrupt the on-disk state, and that corruption would then be propagated basically forever. Of course, we could compare the on-disk state to the one in the previous snapshot (which entails a full read of the on-disk data), so that we can then skip some (hopefully the bulk) of the writes.
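
The verify-then-write idea above can be sketched in a few lines. This is a minimal, hypothetical Python sketch, not PBS code: it assumes you already have the new snapshot's per-chunk SHA-256 digests (hex strings over the raw chunk data, as a fixed index stores them for an unencrypted backup) and that the snapshot image is accessible as a block device or file, e.g. via `proxmox-backup-client map`. Each chunk of the replica disk is read and hashed, and rewritten only where it does not match:

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # PBS default fixed chunk size for disk images

def safe_incremental_restore(mapped_image, target_disk, expected_digests,
                             chunk_size=CHUNK_SIZE):
    """Full read, selective write: hash each on-disk chunk of the replica
    and rewrite it only where it differs from the snapshot's digest.
    Returns the number of chunks actually rewritten."""
    rewritten = 0
    with open(mapped_image, "rb") as src, open(target_disk, "r+b") as dst:
        for i, digest in enumerate(expected_digests):
            off = i * chunk_size
            dst.seek(off)
            on_disk = dst.read(chunk_size)
            if hashlib.sha256(on_disk).hexdigest() != digest:
                # Mismatch (or short read on a grown image): copy the
                # chunk from the mapped snapshot onto the replica disk.
                src.seek(off)
                dst.seek(off)
                dst.write(src.read(chunk_size))
                rewritten += 1
    return rewritten
```

This variant costs a full read of the replica disk, but it is safe even if the replica was modified between runs, since every chunk is verified against the new snapshot before being skipped.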

That being said, the information is all there, especially for disk/block-based backups, so a POC could easily be scripted (for example, using `proxmox-backup-debug` to parse the index and give you the list of chunks, and `proxmox-backup-client map` to make the newer snapshot's image accessible for copying selected chunks, e.g. with `dd`).
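
The diff-and-copy part of such a POC might look like the following Python sketch. It is an assumption-laden illustration, not a tested tool: it presumes you have already extracted the per-chunk digest lists of the old and new snapshots from their fixed indexes (e.g. with `proxmox-backup-debug`), and that the new snapshot's image is mapped to a device via `proxmox-backup-client map`. Only offsets whose digest changed are copied:

```python
CHUNK_SIZE = 4 * 1024 * 1024  # PBS default fixed chunk size for disk images

def changed_offsets(old_digests, new_digests, chunk_size=CHUNK_SIZE):
    """Byte offsets of chunks whose digest differs between two snapshots.
    Chunks beyond the end of the old image (a grown disk) count as changed."""
    offsets = []
    for i, new in enumerate(new_digests):
        old = old_digests[i] if i < len(old_digests) else None
        if old != new:
            offsets.append(i * chunk_size)
    return offsets

def apply_diff(mapped_image, target_disk, offsets, chunk_size=CHUNK_SIZE):
    """Copy only the changed chunk ranges from the mapped snapshot image
    onto the replica's disk (the seek/read/write equivalent of a sparse
    series of dd invocations)."""
    with open(mapped_image, "rb") as src, open(target_disk, "r+b") as dst:
        for off in offsets:
            src.seek(off)
            dst.seek(off)
            dst.write(src.read(chunk_size))
```

Note that this trusts the replica disk to still match the old snapshot exactly, which is precisely the guarantee problem described above; it should only be applied to a replica that nothing else can have touched.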
 
