Live Snapshot and simple incremental backup idea on 3.3

tcit

New Member
I have a quick question that I haven't been able to find directly answered in the forums or in the KVM/QEMU documentation yet. Hopefully it's just a quick reply for you guys, and a chance for me to help the community out.

So here are two questions:

  1. If I issue a live snapshot (i.e. NOT a qcow2-internal snapshot) on a KVM instance, such as we see at line 878 of /usr/share/perl5/PVE/VZDump.pm or by running qm snapshot (see the example after these questions), is the base VM hard drive file now static at that point in time?
  2. If yes to 1, where are the differentials being stored? As I understand it, PVE 3.3 creates a temporary qcow2 somewhere regardless of the backing storage, but where? And does it differ between LVM, raw, and qcow2?
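
For concreteness, here's roughly what I mean by "issuing a live snapshot" from the CLI (the VM ID and snapshot name are just examples):

Code:
qm snapshot 100 nightly-base        # take a live snapshot of running VM 100
qm listsnapshot 100                 # confirm it exists
qm delsnapshot 100 nightly-base     # remove it once the backup run is done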

If you're wondering, here's what I'm hoping to accomplish:


  1. Have a Proxmox server set up to store and quickly boot incremental backups if needed. Let's call this BACKUP-SERVER.
  2. Have a Proxmox server with production VMs that need nightly incremental and off-site backups. Let's call this PRODUCTION-SERVER.
  3. BACKUP-SERVER exports /var/lib/vz as an NFS share to receive backup data (a sample export line is in the first sketch after this list).
  4. Now for the actual backup concept:
  5. For every VM I want to back up, I create an identical VM on BACKUP-SERVER with initial qcow2 hard drives of identical size.
  6. For the source: each night, the source VM on PRODUCTION-SERVER issues a live snapshot, similar to what the existing vzdump script already does. If the disk is not raw or LVM, use qemu-nbd to expose it as a raw block device (see the qemu-nbd sketch after this list).
  7. For the target: BACKUP-SERVER takes a new qcow2 snapshot on the target VM and mounts the new incremental as a raw block device, again using qemu-nbd.
  8. The backup script then calls something like virtsync to do a sparse, incremental-only comparison sync between the raw source and target devices. Because only the modified blocks get rewritten, the new target qcow2 snapshot should grow only by the incremental writes (the block-compare sketch after this list shows the idea).
  9. After the sync finishes writing the modified blocks to the target qcow2 snapshot, we delete the live snapshot on the source VM but leave the qcow2 snapshot on the target side.
  10. At this point, on BACKUP-SERVER I have a qcow2 with a base snapshot and a new incremental. If I were rsyncing this offsite and had already seeded the base, only the incremental file would have to be uploaded.
  11. In a failure scenario, I could boot my "backup" without having to wait for a qm restore. I could even take another snapshot before booting so my original backup remains unmodified!
  12. Once an incremental on BACKUP-SERVER is more than 90 days old, it gets deleted, keeping a rolling 90-day window (a pruning sketch is after this list too).
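
For step 3, the export on BACKUP-SERVER could be as simple as this (the subnet is just an example for my LAN):

Code:
# /etc/exports on BACKUP-SERVER
/var/lib/vz 192.168.1.0/24(rw,sync,no_root_squash)

# then reload the export table
exportfs -ra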
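
For steps 6 and 7, here's a minimal sketch of the qemu-nbd part. The image paths and nbd device numbers are just examples; use whatever is free:

Code:
# load the nbd kernel module first
modprobe nbd max_part=8

# source side: expose the snapshotted qcow2 read-only as a raw block device
qemu-nbd --connect=/dev/nbd0 --read-only /var/lib/vz/images/100/vm-100-disk-1.qcow2

# target side: expose the target VM's qcow2 for writing
qemu-nbd --connect=/dev/nbd1 /mnt/backup/images/100/vm-100-disk-1.qcow2

# ... run the block sync here ...

# tear down when finished
qemu-nbd --disconnect /dev/nbd0
qemu-nbd --disconnect /dev/nbd1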
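
I haven't confirmed virtsync's exact invocation, so as a stand-in for step 8, this is the kind of block-compare loop I have in mind (slow, but it shows the idea; chunk size and device names are assumptions):

Code:
#!/bin/bash
# rewrite only the chunks that differ between source and target,
# so the target's newest snapshot layer grows only by changed blocks
SRC=/dev/nbd0
DST=/dev/nbd1
BS=$((1024 * 1024))                     # 1 MiB chunks
SIZE=$(blockdev --getsize64 "$SRC")
CHUNKS=$(( (SIZE + BS - 1) / BS ))

for ((i = 0; i < CHUNKS; i++)); do
    # cmp -s is silent and returns non-zero when the chunks differ
    if ! cmp -s <(dd if="$SRC" bs="$BS" skip="$i" count=1 2>/dev/null) \
                <(dd if="$DST" bs="$BS" skip="$i" count=1 2>/dev/null); then
        dd if="$SRC" of="$DST" bs="$BS" skip="$i" seek="$i" count=1 conv=notrunc 2>/dev/null
    fi
done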
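
And for step 12, pruning could be as simple as deleting by naming convention (this assumes nightly snapshots named nightly-YYYY-MM-DD):

Code:
# drop the snapshot taken 90 days ago to keep the rolling window
OLD="nightly-$(date -d '90 days ago' +%F)"
qm delsnapshot 100 "$OLD"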

So we hopefully get fast, sparse, incremental, instantly-bootable, offsite-syncable backups using only existing tools! Looking forward to working on this once I can resolve the live snapshot part.
 
